At Google I/O 2017, a few months on from Google Assistant's original launch, the company showcased a bunch of new tricks that it'll be able to pull off in the near future. Here's what you need to know.
Whether via the company's Pixel and Pixel XL smartphones, the Google Home connected speaker or the Allo app, people are becoming more comfortable with the idea of talking to their tech, and Google thinks that its Assistant is getting better at talking back.
Whilst it's already comfortable interpreting simple queries phrased in natural language and can continue conversations with follow-up responses (to a point), so far it's been a pretty generic experience. Google is now looking to add voice recognition so that multiple users (such as family members) can all ask the same Assistant the same question and receive a different answer depending on who's speaking.
Typing on mobile
A small but important addition: mobile users will soon be able to type queries to the Google Assistant directly, much as is already possible with Apple's Siri or Cortana on Windows devices. Presently, voice is the primary way to interact with it, and in some situations that's just not appropriate.
It’ll be able to see
Google Lens is another initiative Google spoke about at I/O 2017: a computer vision effort designed to make sense of the world around you. When paired with the Google Assistant, users will be able to hold their smartphones up to signage in a different language, or to various objects, and have them deciphered; once that's done, you'll be able to continue the conversation with context, as before.
Coming to the iPhone
iPhone users will no longer be slaves to Siri. Whilst other virtual assistants are already available on iOS, only Google's will be able to tap into the data found in the company's existing suite of iOS apps. You'll be able to do almost anything you could on any existing Google Assistant-capable device, including controlling your smart home appliances with just your voice.
Google Assistant everywhere
If there's one thing Amazon got right in the early stages of Alexa, it was making sure developers could put the speech engine and smarts behind its voice assistant into almost anything. Nowadays you'll find Alexa-enabled devices all over the shop, with more cropping up all the time.
Naturally, Google can't stand to let one of its most significant rivals have all the fun, and so I/O 2017 played host to the launch of the Google Assistant SDK. The tools will let developers build the Google Assistant experience into all manner of software and hardware. Google is also pursuing new in-house solutions by teaming up with prominent third parties like Samsung, Sony and LG, which will all start pushing out products bearing the 'Google Assistant built-in' badge somewhere on their packaging before the year's end.
On stage, Google's Scott Huffman also confirmed that the Assistant has been studying up on its linguistics: French, German, Brazilian Portuguese and Japanese support is promised in time for summer, with Italian, Spanish and Korean arriving by the year's end for both Android and iOS users.
Assistant Actions on mobile
Since the arrival of Google Home, which places the Google Assistant at the core of the user experience, there's been a notable divide between the functionality offered by the Assistant in the company's connected speaker and that in its smartphones.
Third-party Actions (similar to Amazon Alexa's Skills) were all but exclusive to the Google Home experience. Such abilities have now been confirmed to be en route to the mobile iteration of the Assistant too, meaning you'll be able to control your home's lighting or, thanks to the addition of transaction support, buy cinema tickets, all using your voice on your phone.