Software enthusiasts invested in Google’s ecosystem look forward to the software giant’s annual developer conference, Google I/O. This is where we get a glimpse of the technologies being finalized, as well as what to expect from the next version of Android – currently going only by the next letter of its sweet alphabet, Android “P”.
Google is an immense company with its hands in countless jars. It can be overwhelming to keep up, so this synopsis runs through the biggest and most exciting features coming our way.
Google Assistant becoming more real
Google Assistant is no doubt a valuable feature. Speaking is often the fastest way to complete a task, and with Google running the show, Assistant is continually getting more capable over time.
- Word from I/O 2018 outlines a pretty big step forward for Assistant: Google wants interaction to feel more natural. One of the next steps is making the “conversation” more seamless, so Assistant can now distinguish you talking to it from you talking to someone else. With the aptly named Continued Conversation, you only have to open with the traditional “Hey Google” once – from there, just keep talking and Assistant will keep listening.
- Being limited to a single inquiry at a time is…robotic. So Google has also worked on Multiple Actions. You can ask for multiple things at once, like asking “How long will it take to get to Seattle and what’s the weather there?”
- Google has also worked some technological wizardry to make Assistant’s synthesized speech retain the subtle qualities of a real human voice. You can now choose a favorite Assistant voice from six new options.
- These are all neat updates to Assistant, but the one that takes the cake for the “coolness” factor is definitely Google Duplex. We wouldn’t have thought a virtual assistant could be this competent yet – Duplex gives Assistant the ability to place phone calls all by itself. Google demoed it making an appointment and a restaurant reservation: upon user command, it calls the business, actually converses with the person on the other end (even reacting to their speech), then adds the booking to your calendar once it succeeds.
Gmail gets smarter
Chances are you’ve noticed Google’s Smart Reply feature in the Gmail app. This was part of Google’s initiative to build smart software that can save us time. In this case, the system presents a selection of phrases that can suffice as a reply to a particular email, so you just tap instead of write.
This week, we heard that Google is taking the concept a big step further. Smart Compose is just what it sounds like – as you write an email, AI continually suggests likely endings to your phrases and sentences.
- You write as you normally would, and the system’s suggested completions appear in light gray. To accept one, press the Tab key.
- Anyone can see the substantial time this could save. Google has become remarkably good at learning users’ preferences and tuning suggestions to be relevant and useful to each person. Smart Compose can also be contextually intelligent, suggesting a phrase like “Have a great weekend!” when it’s Friday, for example.
- Additionally, if you can’t think of just the right word or phrase, you may not have to dwell on it – chances are Google will figure it out. It could also help reduce those pesky spelling and grammar mistakes.
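To picture the behavior described above, here’s a toy sketch of the Smart Compose idea – suggest a likely completion for the phrase being typed, which the user accepts with Tab. The phrase table below is entirely made up for illustration; the real system uses a neural language model personalized to each user.

```python
# Hypothetical phrase table standing in for a learned language model.
COMPLETIONS = {
    "have a great": " weekend!",
    "thanks for": " your help!",
    "let me know if": " you have any questions.",
}

def suggest(text: str) -> str:
    """Return a grayed-out completion for the text typed so far, or ''."""
    lowered = text.lower().rstrip()
    for prefix, ending in COMPLETIONS.items():
        if lowered.endswith(prefix):
            return ending
    return ""  # nothing confident to suggest

print(suggest("Have a great"))  # " weekend!"
```

The real feature, of course, ranks completions by probability and only surfaces ones it is confident about – the lookup here just mimics that interaction loop.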
Google Lens is growing
Google isn’t the only one to turn the smartphone camera into an object identifier, but it has proven a hard thing to do. All implementations, including Samsung’s Bixby Vision and LG’s ThinQ software, are hit or miss. Still, it’s one of those software tricks that keeps evolving. Here’s how Google is pushing Lens forward:
- Firstly, Lens support is expanding beyond Pixel phones. It’s coming to many handsets running software close to Google’s vision of Android, from makers such as LG, Motorola, Sony, and OnePlus, to name a few.
- We’re not the only ones who have wanted to quickly grab a digital copy of text on a printed page, whether to save valuable information or to look up a particular word or phrase. Lens can now do exactly that.
- Lens is also getting better at returning not just what the subject of an image is, but other potentially useful web results. For example, point Lens at an intriguing item in a store and you could get reviews, price comparisons, and similar items you may not have known about.
- Another new Lens feature is real-time interaction. Instead of snapping one photo per query, you’ll be able to simply pan around with the viewfinder while Lens identifies objects in real time. Tap one to retrieve its info, then move on to the next.
Maps gets even more useful
We don’t have to explain how invaluable Maps is. Google has always been on top of things, thinking of new ways to make it even more useful, and this year’s I/O was no exception.
- Picking a restaurant as a group can be a pain, especially if you’re trying to do it remotely. A new Maps feature lets you compile a shortlist of places and then have a group of friends vote on where to go.
- The Explore tab in Maps has been underutilized, but that’s changing: Google has redesigned and expanded it. When you’re figuring out where to eat nearby, scrolling through Explore will now surface top-rated and trending restaurants. Maps will also learn your preferences (e.g. the food and drink you like, or your favorite restaurants) to estimate how well you’ll “match” with a place.
- Explore has historically been limited mostly to restaurants, when it should offer nearby insight for every kind of place (stores, attractions, accommodations, etc.). Google is starting to address this by adding recommendations for events and activities in the area in question.
- This leads us to the “For you” tab. It’s a user-specific section for activities and spots you’ll probably be interested in, based on what Maps learns from you. This will be a great way to help maximize our outings, so we don’t miss a beat.
- Google also revealed it has been working on pairing Maps with the smartphone camera for an AR experience. You’ll see real-time visual cues overlaid on the camera view showing which way to go (we might even get a virtual guide, like the one shown in the image below). What’s more, points of interest will be highlighted as you pan the camera around.
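The group-voting feature above boils down to a simple idea: a shared shortlist plus a tally. Here’s a minimal sketch of that logic – the place names and voters are invented for illustration, and the real feature obviously handles syncing between phones:

```python
from collections import Counter

# Hypothetical shared shortlist compiled in Maps.
shortlist = ["Thai Palace", "Burger Barn", "Sushi Stop"]

def tally(votes: dict[str, str]) -> str:
    """Return the shortlisted place with the most votes."""
    counts = Counter(v for v in votes.values() if v in shortlist)
    return counts.most_common(1)[0][0]

votes = {"Ana": "Sushi Stop", "Ben": "Burger Barn", "Cam": "Sushi Stop"}
print(tally(votes))  # Sushi Stop
```

Filtering votes against the shortlist mirrors how the feature constrains the group to the options someone actually proposed.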
The new Google News
The Google News app is also getting the “For you” treatment. Google recognizes that there are tons of news and outlets on the interwebz, and you have to dig to get to the content you really value.
- Google sees a big opportunity for AI to help news flow. The goal is to efficiently gather and organize reporting along with all the important context that should be known – not just what happened, but the impact, results, and reactions.
- As in Maps, the app has a “For you” section. It aggregates news articles tuned to your liking, starting with a mix of important headlines on topics you’re interested in. The app learns your interests as you use it, and the feed continually gets better over time.
- In contrast, a new section called “Full Coverage” steps away from personalization and shows everyone the same picture of what the world is reading, so you don’t miss out on top, important goings-on.
- Newsstand is still around, but now serves as the manually curated approach. You’ll be able to tag your favorite news sources, and even subscribe to them with your Google account. It’ll also be useful for discovering new sources.
More capable Photos
Photos is another Google app gaining smarts. As you’ve seen, Google’s driving initiative these days revolves around AI-powered, context-specific suggestions that save the user time, and the same is true here.
- Despite how good smartphone cameras have gotten, we can still end up with sub-par pics. Google will soon provide quick, one-tap fixes for common problems right below an image, and takes it a step further by tuning the options to each photo. The goal is for the system to recognize what’s needed, so you can simply accept the change and quickly get on with life.
- In addition, Google is bringing a couple more common tasks into this one-tap approach. You’ll be able to more quickly share one or more photos with multiple people, and archive photos to improve organization.
- One of the bigger features Google has been working on is color tweaking. You’ll be able to isolate a subject in a creative way by keeping them in color while the rest of the scene goes black and white. Google also shared something else in the works (but not yet ready) – a way to add color to old black-and-white photos and bring them to life.
Android P
Last but certainly not least is more insight into new features coming in the Android P build of the mobile OS. There are several big changes to core functionality, so let’s get to them.
- Battery life is still a sensitive subject in smartphones, and we’ve seen Google try to squeeze out more life through software optimization (e.g. Doze). Android P takes a big step with Adaptive Battery: like most things we’ve talked about, the system pays attention to which apps and services you use most and prioritizes power for them. Google is also rolling out Adaptive Brightness, which learns how you adjust brightness in different conditions and tries to replicate it automatically.
- The phone will react to your behavior in a few other ways. App Actions tries to predict your next move and surfaces a suggested action to save you time – for instance, offering to start playing a radio station the moment you plug in your headphones. These suggestions pop up in different places throughout the UI.
- Slices is another new time-saver. It surfaces an interactive “slice” of an app elsewhere in the UI – in search results, for instance – so you can act without opening the full app.
- Google is changing the way we navigate the UI in Android P. It’s a pretty big deal; we haven’t seen a big change like this probably since Material Design first rolled out. The Home button is now shaped like a horizontal line, indicating that it’s a scroll handle. Grabbing onto it and moving left/right throws the system into “Overview” mode, where you horizontally scroll through panels of currently open apps (this replaces the traditional Recent Apps carousel). Swiping up from the handle also opens up Overview, with an additional swipe opening the app drawer. Google also made some iterative tweaks to the quick settings, notification drop-down panel, and volume controls.
- Even as our phones pack in more and more capability, Google is starting a “Digital Wellbeing” initiative with Android P. A new Dashboard module tracks how you spend time on your phone – specifically, how much time in which apps, how many times you’ve unlocked the phone, and how many notifications you’ve received.
- Do Not Disturb is improved in Android P. When on, it now also suppresses visual notifications on the screen. You’ll also be able to enable the mode by simply turning the phone face down on a table.
- Lastly, a new Wind Down mode kicks in around your chosen bedtime and adjusts your phone by switching on Night Light, turning on Do Not Disturb, and fading the screen to grayscale – all to nudge you to wind yourself down.
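The Adaptive Brightness behavior described above – learn how the user sets brightness under different conditions, then replicate it – can be sketched very simply. The real feature uses an on-device machine learning model; this nearest-neighbor lookup over made-up ambient light readings is just an illustration of the learn-and-replicate loop:

```python
# Each observation pairs an ambient light reading (lux) with the
# brightness the user manually chose at that moment.
observations: list[tuple[float, int]] = []

def record(lux: float, brightness: int) -> None:
    """Store a manual brightness adjustment made by the user."""
    observations.append((lux, brightness))

def predict(lux: float, default: int = 50) -> int:
    """Suggest a brightness based on the closest past ambient reading."""
    if not observations:
        return default
    nearest = min(observations, key=lambda ob: abs(ob[0] - lux))
    return nearest[1]

record(5.0, 10)      # dim room -> user picked low brightness
record(10000.0, 95)  # direct sunlight -> user picked near max
print(predict(8000.0))  # 95
```

The same record-then-predict shape underlies Adaptive Battery too, only there the signal is which apps you open rather than a light sensor.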
Google is doing some major and complicated things this year. We applaud their ambition, as well as their cohesive effort to make their core apps smarter for better results and to save us time. It’s obvious that well-rounded AI is the ultimate goal, and from what we’ve seen, Google is well on the way to making Skynet a reality.