Google I/O is the company's biggest event of the year so far. Artificial intelligence was unquestionably the headliner of Google I/O 2018, particularly Google Assistant, which will play an even more central role in Google's ecosystem than it has over the past few years. The audience also got a first glimpse of the updated version of Android P, which became available to developers the day of the presentation. The company covered all that and more during its 90-minute Google I/O 2018 keynote, and we have tried to sum up the 10 biggest announcements.
Gmail's new Smart Compose feature uses machine learning to predict not just individual words but complete phrases. And these aren't limited to simple predictions like addresses; entire phrases are suggested based on context and user history.
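To get a feel for what "prediction based on context and history" means, here is a toy sketch of phrase completion: it remembers which phrases a user has typed after a given word and suggests the most frequent one. This is purely illustrative; Gmail's real Smart Compose uses a neural sequence model, not a lookup table, and the `PhraseSuggester` class here is invented for this example.

```python
from collections import Counter, defaultdict

class PhraseSuggester:
    """Toy context-based completion: suggests the phrase a user has most
    often typed after the last word of the current prefix."""

    def __init__(self):
        # last word typed -> Counter of full completions seen after it
        self.history = defaultdict(Counter)

    def observe(self, sentence):
        """Learn from a sentence the user has previously written."""
        words = sentence.lower().split()
        for i, word in enumerate(words[:-1]):
            completion = " ".join(words[i + 1:])
            self.history[word][completion] += 1

    def suggest(self, prefix):
        """Return the most frequent completion for the prefix, or None."""
        last = prefix.lower().split()[-1]
        candidates = self.history.get(last)
        if not candidates:
            return None
        return candidates.most_common(1)[0][0]

suggester = PhraseSuggester()
suggester.observe("see you next week")
suggester.observe("see you next week")
suggester.observe("see you soon")
print(suggester.suggest("see you"))  # prints "next week"
```

The suggester ranks by raw frequency; a production system would weight recency and the full sentence context as well.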
Google Photos is gaining a number of new capabilities based on artificial intelligence and machine learning. For example, Google Photos can take an old, damaged black-and-white photo and not just convert it to color, but convert it to realistic color and touch it up in the process.
The first Google Assistant voice was named Holly, and it was based on actual recordings. Moving forward, Google Assistant will get six new voices, including John Legend's. Google is using WaveNet to make the voices more natural, and it hopes to eventually cover all accents and languages around the world. Google Assistant will support 30 languages by the end of 2018.
Google is making several upgrades to Google Assistant to support more natural conversation. For one, conversations can now continue after the initial "Hey Google" wake command, with no need to repeat the hotword for follow-up questions. The new Continued Conversation feature will be ready soon.
Multiple Actions support is coming to Google Assistant as well, allowing it to handle several commands in a single request.
Another new feature called “Pretty Please” will help young children learn politeness by responding with positive reinforcement when children say please. The feature will roll out later this year.
The first Smart Displays, powered by Google Assistant, will be released in July. To support the experiences Smart Displays offer, Google had to build a new visual interface for Assistant.
The new interface mixes visual and voice interactions, going nearly full-screen for certain requests. For example, ask Assistant to adjust the temperature in your living room and it displays an interactive thermostat so you can fine-tune the setting.
With the new Google Duplex technology, Google Assistant can act like a real human assistant, combining text-to-speech and deep learning. In a live demo, Google Assistant placed a phone call and booked an actual appointment entirely on its own.
Google is positioning the technology to benefit both businesses and consumers. An initial version of the service, which will call businesses to confirm their store hours, will roll out in the coming weeks, and the data it gathers will allow Google to keep open and close times up to date in business profiles online.
Google News is preparing a revamp that concentrates on highlighting quality journalism. The redesign will make it easier for users to keep up with the news by showing a briefing at the top with five important stories. Local news will be highlighted as well, and the Google News app will continually learn a user's preferences as he or she uses it.
Videos from YouTube and elsewhere will be showcased more prominently, and a new format called Newscasts works like Instagram Stories, but for news.
The refreshed Google News will also take steps to help users understand the full scope of a story, showcasing a variety of sources and formats. The new feature, which is called “Full Coverage”, will also help by providing related stories, background, timelines of key related events, and more.
Finally, a new Newsstand section lets users follow specific publications, and they can even subscribe to paid news services right inside the app. Paid subscriptions will make content available not just in the Google News app, but on the publisher’s website and elsewhere as well.
The updated Google News app is rolling out now on the web, iOS, and Android, and it should reach all users soon.
Google had already released the first build of Android P for developers, but on Tuesday the company discussed a number of new Android P features that fall into three core categories.
Google partnered with DeepMind to create a feature called Adaptive Battery. It uses machine learning to identify which apps you use regularly and which ones you rarely open, and it restricts background processes for seldom-used apps in order to save battery life.
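The idea above can be sketched very simply: sort apps into standby buckets by how often they are launched and defer background work for the rarely used ones. The bucket names loosely mirror Android P's App Standby Buckets, but the thresholds and counting logic here are invented for illustration; the real Adaptive Battery uses a learned model from DeepMind, not fixed cutoffs.

```python
from collections import Counter

def assign_buckets(launch_counts, active_min=10, working_min=3):
    """Map each app to a standby bucket based on launch frequency.
    Thresholds are arbitrary example values, not Android's."""
    buckets = {}
    for app, count in launch_counts.items():
        if count >= active_min:
            buckets[app] = "active"        # no restrictions
        elif count >= working_min:
            buckets[app] = "working_set"   # mild restrictions
        else:
            buckets[app] = "rare"          # background jobs deferred
    return buckets

launches = Counter({"messages": 42, "maps": 5, "old_game": 1})
print(assign_buckets(launches))
# {'messages': 'active', 'maps': 'working_set', 'old_game': 'rare'}
```

An app in the "rare" bucket would have its background jobs batched or deferred, which is where the battery savings come from.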
Another new feature called Adaptive Brightness learns a user's preferred brightness levels in different ambient lighting conditions to improve auto-brightness.
App Actions is a new feature in Android P that predicts actions based on a user's usage patterns, surfacing shortcuts to the user's likely next task. For example, if you search for a movie in Google, you might get an App Action offering to open Fandango so you can buy tickets.
Slices are another new feature that lets developers take a small interactive piece of their app, a "slice," and surface it in various places across the system.
Google wants to help technology fade into the background so that it gets out of the user’s way.
First, Android P's navigation has been improved. Swipe up on a small home button at the bottom and a new app switcher opens. Swipe up again and the app drawer opens. The new app switcher is now horizontal, and it looks a lot like the iPhone app switcher in iOS 11.
Also appreciated is a new rotation button that lets users choose which apps can auto-rotate and which ones cannot.
Android P also introduces some important changes focused on digital well-being.
There's a new dashboard that shows users exactly how they spent their day on their phone. It displays which apps you use and for how long, and it surfaces other valuable information as well. Controls will be available to help users limit the amount of time they spend in certain apps.
An enhanced Do Not Disturb mode will suppress visual notifications as well as sounds and vibrations. There is also a new "Shush" gesture that automatically enables Do Not Disturb when a phone is placed face down on a table. Important contacts will still be able to get through when the new Do Not Disturb mode is enabled.
There is also a new Wind Down mode that shifts the display to grayscale when someone uses his or her phone late at night before bed.
Google announced a new Android P Beta program, much like Apple's public iOS beta program, that lets end users try Android P on their phones now.
A new "For You" tab in Google Maps shows you new businesses in your area as well as restaurants that are trending around you. Google also added a new "Your Match" score that estimates how likely you are to enjoy a new restaurant based on your past ratings.
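One simple way to picture a score like "Your Match" is as a similarity-weighted average of your past ratings: restaurants that share more attributes with the candidate count for more. The tags, weighting scheme, and `match_score` function below are all invented for illustration; Google has not published how the real score is computed.

```python
def match_score(past_ratings, candidate_tags):
    """Toy 'Your Match'-style score on a 0-100 scale.

    past_ratings: list of (tags, star_rating_1_to_5) pairs.
    Each past rating is weighted by its tag overlap with the candidate.
    """
    total, weight = 0.0, 0.0
    for tags, rating in past_ratings:
        overlap = len(set(tags) & set(candidate_tags))
        if overlap:
            total += rating * overlap
            weight += overlap
    if weight == 0:
        return None  # no comparable history to score against
    return round(total / weight * 20)  # scale 1-5 stars to a percentage

history = [
    (["ramen", "casual"], 5),
    (["sushi", "upscale"], 3),
    (["burgers", "casual"], 4),
]
print(match_score(history, ["ramen", "casual"]))  # prints 93
```

A real recommender would fold in many more signals (cuisine embeddings, location, aggregate ratings), but the core idea of grounding the score in your own rating history is the same.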
The new restaurant group-planning feature is exciting. To pick a restaurant with a group, long-press any restaurant to add it to a new shortlist, then share that list with friends. They can add other options, and the group can then pick a restaurant from the shared list.
A bit further out, Google is working on an appealing new feature that blends computer vision from the camera with Google Maps Street View data to create an AR navigation experience in Google Maps.
Google Lens is also coming to additional devices this year, and there are new features coming as well.
Google Lens can now recognize text, letting you copy words from a sign or a piece of paper to the phone's clipboard. It can also provide context; for example, Google Lens can identify a dish on a menu and tell you what the ingredients are.
New shopping features let you point your camera at an item to get prices and reviews. And finally, Google Lens now works in real time, constantly scanning items in the camera frame to surface information and, soon, to overlay live results on items in your camera's view.
Right before the keynote, the company announced it was rebranding its Google Research division as Google AI, highlighting its expanded focus on neural networks, machine learning, and natural language processing.