Google announces new AI platform for developers


Google has launched a beta version of its AI Platform, providing developers, data scientists, and data engineers with an end-to-end development environment in which to collaborate on and manage machine learning (ML) projects.

While ML is already employed in many cloud instances to sift through logs for data that could indicate malicious activity, Google announced a range of additional capabilities for its AutoML product - the same one introduced last year with the aim of helping companies with limited ML know-how build their own business-specific ML products.

"We believe AI will transform every business and every organisation over the course of the next few years," said Rajen Sheth, director product management at Google Cloud AI.

"We have focussed on building an AI platform that provides a very deep understanding of a number of fundamental types of data: voice, language, video, images, text and translation," added Thomas Kurian, CEO Google Cloud. "On top of this platform, we have built a number of solutions to make it easy for our customers and analysts around the world to build products".

When AutoML launched last year, workers with little-to-no ML experience could build ML-driven tools using image classification, natural language processing and translation, tailored to their businesses and the data they hold.

Now Google has announced three new AutoML variations: AutoML Tables, AutoML Video and AutoML Vision Edge. Tables lets customers take massive amounts of data - hundreds of terabytes were cited - ingest it through BigQuery and use it to generate actionable insights into business operations, such as predicting business downtime.

It's all codeless, too: data can be ingested and then fed through custom ML models that developers, analysts or engineers create in days instead of weeks using an intuitive GUI.
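For a sense of what that workflow looks like outside the GUI, below is a minimal sketch using the google-cloud-automl v1beta1 Python client that was current at the time; the project, region, BigQuery table and display names are all hypothetical, and a real run would also mark a target column on the dataset before training.

```python
# Hypothetical sketch: build an AutoML Tables dataset from BigQuery and
# train a downtime-prediction model. Names and IDs below are made up.
from google.cloud import automl_v1beta1 as automl

client = automl.AutoMlClient()
parent = client.location_path("my-project", "us-central1")

# Create an empty Tables dataset, then pull rows straight from BigQuery.
dataset = client.create_dataset(parent, {
    "display_name": "ops_telemetry",
    "tables_dataset_metadata": {},
})
client.import_data(dataset.name, {
    "bigquery_source": {"input_uri": "bq://my-project.ops.telemetry"},
}).result()  # block until the ingest finishes

# Kick off training; the budget is expressed in milli node hours.
# (A real run would first set the target column via update_dataset.)
model_op = client.create_model(parent, {
    "display_name": "downtime_predictor",
    "dataset_id": dataset.name.split("/")[-1],
    "tables_model_metadata": {"train_budget_milli_node_hours": 1000},
})
print("Training operation:", model_op.operation.name)
```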

With Video, Google is targeting any organisation that hosts videos and needs to categorise them automatically, applying labels such as "cat videos" or "furniture videos". It can also help filter explicit content automatically and, much as it did with ITV recently, help broadcasters detect and manage traffic patterns on live broadcasts.
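Video predictions run as batch jobs against files in Cloud Storage. Below is a hedged sketch of what labelling a batch of clips might look like with the same beta-era client; the model ID and bucket paths are hypothetical.

```python
# Hypothetical sketch: asynchronously label a batch of videos with a
# trained AutoML Video classification model. IDs and paths are made up.
from google.cloud import automl_v1beta1 as automl

prediction_client = automl.PredictionServiceClient()
model_full_id = prediction_client.model_path(
    "my-project", "us-central1", "VCN1234567890")

# The input CSV lists video URIs plus the time segments to classify.
input_config = {"gcs_source": {"input_uris": ["gs://my-bucket/videos.csv"]}}
output_config = {"gcs_destination": {"output_uri_prefix": "gs://my-bucket/results/"}}

operation = prediction_client.batch_predict(
    model_full_id, input_config, output_config)
operation.result()  # block until the batch job writes its label files
```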

Vision was announced last year to help developers with image recognition. The challenge for devices such as connected sensors or cameras is that they struggle with latency; Vision Edge addresses this by harnessing Edge TPUs for faster on-device inference. LG CNS, an outsourcing arm of LG, uses the tool on the assembly line to detect issues with products such as LCD screens and optical films.
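To illustrate the kind of on-device inference Vision Edge enables, the sketch below runs an exported TensorFlow Lite model through the Edge TPU delegate, which is how Coral-style edge hardware is typically driven; the model file and the all-zero placeholder frame are stand-ins.

```python
# Hypothetical sketch: classify one camera frame on an Edge TPU using an
# exported AutoML Vision Edge model. Model path and frame are stand-ins.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one pre-resized frame (e.g. 224x224 RGB) from the sensor or camera.
frame = np.zeros(input_details[0]["shape"], dtype=np.uint8)  # placeholder
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()  # inference runs on the TPU, keeping latency low

scores = interpreter.get_tensor(output_details[0]["index"])[0]
print("Top class index:", int(np.argmax(scores)))
```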

The new AutoML tools will also be able to take visual data and turn it into structured data, according to Sheth, speaking at a press conference.

"One example of this is FOX Sports in Australia - they're using this to drive viewer engagement - they're putting in data from a cricket game and using that to predict when a wicket will fall with an amazing amount of accuracy and then it sends a notification out via social media telling followers to come and see it," he said.

Sid Nag, research vice president at Gartner, said that while Google effectively admitted to being a second-choice cloud provider with the introduction of Anthos, what it is doing well is leading the AI charge.

"They're (Google Cloud) very strong in AI and ML, no-one's doubted that," Nag said in an interview with Cloud Pro. When asked if customers would choose Google Cloud specifically based on AI as its USP, Nag said: "yeah I think so, that and big data and analytics, you know, they've always been very strong in that area".

How are companies benefitting from cloud AI and AutoML?

Binu Mathew, senior vice president and global head of digital products at Baker Hughes, came on stage after Sheth to talk about how his team of developers uses Google's AI tools, specifically in the oil and gas industry.

He said that when an offshore oil platform goes down, it costs the company about $1m per day. By using ML, however, the oil company can teach its models what normal operation looks like, so that when readings start to go awry the issue can be fixed before any costly downtime occurs.
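Baker Hughes didn't detail its models, but the underlying idea - learn what normal looks like, then flag departures from it - can be sketched with an off-the-shelf anomaly detector; the sensor readings below are invented for illustration and this is not the company's actual method.

```python
# Illustrative only: train an anomaly detector on healthy sensor telemetry
# and flag readings that drift from the learned baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated healthy readings: pressure, temperature, flow rate.
normal_readings = rng.normal(loc=100.0, scale=2.0, size=(5000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_readings)  # learn "normal operation" only

live = np.array([[101.2, 99.4, 100.8],   # in-range readings
                 [140.0, 55.0, 180.0]])  # readings going awry
for reading, flag in zip(live, detector.predict(live)):
    if flag == -1:  # -1 marks an anomaly
        print("Investigate before downtime:", reading)
```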

Since using Google's AI tools, Baker Hughes has experienced a 10x improvement in model performance, a 50% reduction in false positive predictions and a threefold reduction in false negatives.

Sheth said that AI will also be part of Kurian's and Google Cloud's hybrid cloud vision: ML can be deployed across GCP, on-prem, on other cloud platforms and at the edge. This is because it runs on Kubeflow, the open-source ML framework that runs anywhere Kubernetes does, and it can all be managed by Anthos, Google's new multi-cloud platform, which, "simply put, is the future of cloud", said Urs Hölzle, Google's senior vice president of technical infrastructure.
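To give a flavour of that portability, here is a minimal two-step pipeline written with the kfp SDK; the compiled file can be submitted to any Kubeflow Pipelines installation, whether the cluster sits on GCP, on-prem or another cloud. The container images and step names are hypothetical.

```python
# Hypothetical sketch: a two-step Kubeflow pipeline that runs wherever
# Kubernetes does. Container images below are made up.
import kfp
from kfp import dsl

@dsl.pipeline(name="train-and-deploy",
              description="Portable training pipeline")
def train_and_deploy():
    train = dsl.ContainerOp(
        name="train",
        image="gcr.io/my-project/trainer:latest",
        command=["python", "train.py"])
    deploy = dsl.ContainerOp(
        name="deploy",
        image="gcr.io/my-project/deployer:latest",
        command=["python", "deploy.py"])
    deploy.after(train)  # enforce step ordering

if __name__ == "__main__":
    # The same compiled artifact runs on any Kubeflow installation.
    kfp.compiler.Compiler().compile(train_and_deploy, "pipeline.yaml")
```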

Speaking at a subsequent, more intimate session than the keynote, Marcus East, CTO at National Geographic, told the crowd about the company's cloud transformation and the quick, mission-critical migration of its 20-year-old legacy on-prem photo archive system to a GCP-based archive in just eight weeks.

He also briefly mentioned the company's work with AutoML, so Cloud Pro caught up with East and a few of the engineers behind the company's AI work after the event to hear more about its vision for cloud AI, specifically AutoML Vision.

Speaking exclusively to Cloud Pro, Melissa Wiley, vice president of digital products at National Geographic, said one of the ideas the company is exploring is advanced automated metadata tagging: assigning labels that identify not just the animal but the specific species of the animals appearing in the roughly two million images stored in its archive.

That starts with AutoML Vision's automatic image recognition. Using machine learning, Nat Geo can train its industry-specific model to learn one species of tiger, then apply it to identify that same species in every other photo in which it appears, according to Wiley.
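Nat Geo didn't share its code, but a species-tagging call against a trained AutoML Vision model would look roughly like the sketch below, again using the v1beta1 Python client; the project, model ID, image file and confidence threshold are all hypothetical.

```python
# Hypothetical sketch: ask a trained AutoML Vision model to tag one
# archive image with species labels. Project and model IDs are made up.
from google.cloud import automl_v1beta1 as automl

prediction_client = automl.PredictionServiceClient()
model_full_id = prediction_client.model_path(
    "my-project", "us-central1", "ICN1234567890")

with open("tiger_photo.jpg", "rb") as f:
    payload = {"image": {"image_bytes": f.read()}}

# score_threshold filters out low-confidence species labels.
response = prediction_client.predict(
    model_full_id, payload, {"score_threshold": "0.8"})
for result in response.payload:
    print(f"{result.display_name}: {result.classification.score:.2f}")
```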

"When our photographers are out in the field, they might be up to their waist in mud, avoiding mosquitos and being chased by wild creatures - they don't have time to take a great photo and then turn to their laptop and fill in all the metadata," said East. "So this idea that we could somehow use AutoML and the BroadVision API to really [make those connections] and enrich the metadata in those images is the starting point. Once we've done that, we can give our end consumers a better experience."

"That's the next stage for us; we can see the potential to harness the power of these cloud-native capabilities, to build personalised experiences for consumers. For example, we could say we know Connor likes snakes and videos of animals eating animals, let's give him that experience," he added.

Wiley also mentioned the enterprise potential of the technology: perhaps offering it to schools, libraries or even other companies so Nat Geo can help them identify animals too. "There are a million ideas we could talk about," she said.
