Google I/O 2017: All the announcements
Google's annual dev summit covered Android O, Google VR and artificial intelligence
Google I/O, the company's annual developer conference held near its Mountain View, CA headquarters, has been and gone, bringing a slew of product announcements across several key areas.
Unsurprisingly, AI was a strong focus of Google's show. The company was keen to crow about the successes of its machine learning research, including advances it has made in voice and image recognition.
One of the major announcements the company made was that it will be creating a new cross-company division, called Google.ai, which will focus on applied AI, AI research and tools for creating and developing AI technologies.
Machine learning and AI tools will be coming to mobile too, with the announcement of TensorFlow Lite, a specialised version of Google's machine learning toolkit designed for Android development.
Google has also unveiled the next generation of its Tensor Processing Units - specialised compute hardware for machine learning tasks. Offering up to 180 teraflops of compute power, these new TPUs will be available through Google Cloud, allowing researchers and developers to train their own machine learning models on them.
Google Assistant & Google Home
Many of these advances have been brought to the Google Assistant, the cloud-based AI brain that powers voice-based interactions on devices like the Google Pixel. Advances in machine learning are making the Assistant more conversational in its speech, and a forthcoming Google Assistant SDK will let OEMs and hardware manufacturers build the Google Assistant into devices like speakers and tablets - a clear swing at the rash of Alexa- and Cortana-enabled speakers that have recently been announced.
The Google Assistant has also been brought to iOS, and is rolling out in a number of new languages, including French, German and Japanese. Google is also adding the ability to type interactions with the Assistant, rather than using your voice.
In addition, the Assistant will feature integrations with the company's new Bixby-style machine vision app, Google Lens. Like Samsung's digital assistant, this will allow you to point the camera at something and get information about it, such as translations for foreign menus. It will also link to other services - take a photo of a concert ad, for example, and you'll be able to book tickets, listen to tracks and put the event in your calendar.
Another important feature is the addition of Actions on Google to smartphones. The equivalent of Alexa's 'skills', this feature lets users interact with third-party apps via the Assistant. Actions on Google now also supports transactions. This includes not just paying for goods and services, but tasks like creating accounts and exchanging user details.
This functionality allows users to place orders from restaurants without creating an account or inputting any information. Instead, the Assistant will pull the relevant details from information you've previously shared with Google.
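To make the flow above concrete, a minimal sketch of how a third-party action's fulfilment webhook might answer the Assistant is shown below. This is purely illustrative, not Google's official SDK: the function name and the `speech`/`displayText` field names follow the Dialogflow-era v1 webhook response format and should be treated as assumptions.

```python
import json

# Illustrative sketch only: build the JSON body a third-party action's
# webhook would return to the Assistant after handling a user request.
# The "speech" and "displayText" keys mirror the Dialogflow v1 response
# format of the era; treat the exact field names as assumptions.
def build_fulfilment_response(spoken_text, display_text=None):
    """Return a JSON response for the Assistant to relay to the user."""
    return json.dumps({
        "speech": spoken_text,                       # text the Assistant reads aloud
        "displayText": display_text or spoken_text,  # text shown on a screen
    })

# Example: confirming a restaurant order placed through the Assistant,
# with account details already pulled from information shared with Google.
response = build_fulfilment_response("Your order has been placed.")
```

In practice the webhook would also inspect the incoming intent and any transaction payload before responding; the sketch covers only the reply itself.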
Google Home received substantial updates too, including hands-free calling, support for new streaming services, and the ability to display relevant information on devices such as a phone or TV. Check out our Google Home hub or our Google Home review to find out more about the new features.