Gluon provides machine learning for AWS and Azure
Amazon and Microsoft unveil artificial intelligence ecosystem
Amazon and Microsoft have launched a new open-source deep learning interface called Gluon.
The interface has been jointly developed by the two companies to let developers "prototype, build, train and deploy sophisticated machine learning models for the cloud, devices at the edge and mobile apps", according to the announcement.
Dr Matt Wood, general manager of Deep Learning and AI at AWS, said that Gluon provides a clear, concise API (application programming interface) for defining machine learning models using a collection of pre-built, optimised neural network components.
"Developers who are new to machine learning will find this interface more familiar to traditional code since machine learning models can be defined and manipulated just like any other data structure. More seasoned data scientists and researchers will value the ability to build prototypes quickly and utilize dynamic neural network graphs for entirely new model architectures, all without sacrificing training speed," he said.
Eric Boyd, CVP of AI Data and Infrastructure at Microsoft, said that Gluon could be used with either Apache MXNet or Microsoft Cognitive Toolkit, and will be supported in all Azure services, tools and infrastructure.
"Gluon offers an easy-to-use interface for developers, highly-scalable training, and efficient model evaluation, all without sacrificing flexibility for more experienced researchers," he added.
Gluon will support both symbolic and imperative programming, something Microsoft claims is not found in other toolkits. It will also include fully symbolic automatic differentiation of procedurally executed code, including control flow. This is achieved through hybridisation: static compute graphs are computed on the first iteration, then cached and reused for subsequent iterations. The compute graphs can also be exported, e.g. for serving on mobile devices, said Boyd.
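The caching behaviour described here can be illustrated with a plain-Python sketch. This is not the actual Gluon API (which lives in `mxnet.gluon` and exposes it via `HybridBlock.hybridize()`); it only shows the idea that an imperatively defined forward pass is traced once into a static graph and replayed from the cache on later calls.

```python
# Illustrative sketch of hybridisation-style graph caching (NOT the Gluon API).
# The first call records the sequence of operations as a static "graph";
# subsequent calls replay the cached graph instead of re-tracing the code.

class ToyHybridBlock:
    def __init__(self, ops):
        self.ops = ops             # imperative definition: a list of callables
        self._cached_graph = None  # static graph, built lazily on first use

    def __call__(self, x):
        if self._cached_graph is None:
            # First iteration: trace the imperative code into a static graph.
            self._cached_graph = list(self.ops)
        out = x
        for op in self._cached_graph:  # later iterations reuse the cache
            out = op(out)
        return out

net = ToyHybridBlock([lambda v: v * 2, lambda v: v + 1])
print(net(3))   # first call builds the cache -> 7
print(net(10))  # reuses the cached graph -> 21
```

In the real library, the cached graph is what makes a hybridised model both fast to execute and exportable as a standalone artifact.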
There is also a built-in layers library that simplifies defining complex model architectures through reuse of pre-built building blocks.
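The building-block idea can be sketched in plain Python. Gluon's real layer library is `mxnet.gluon.nn`; the scalar `Dense` and `Sequential` classes below are simplified stand-ins, shown only to illustrate how a model is composed from reusable pre-built pieces.

```python
# Illustrative sketch of composing a model from pre-built blocks
# (simplified stand-ins for Gluon's mxnet.gluon.nn layer library).

class Dense:
    """A toy pre-built layer computing y = x * weight + bias (scalar version)."""
    def __init__(self, weight, bias):
        self.weight, self.bias = weight, bias

    def __call__(self, x):
        return x * self.weight + self.bias

class Sequential:
    """A container that chains pre-built blocks into a model."""
    def __init__(self):
        self.blocks = []

    def add(self, *blocks):
        self.blocks.extend(blocks)

    def __call__(self, x):
        for block in self.blocks:
            x = block(x)
        return x

model = Sequential()
model.add(Dense(2, 0), Dense(1, 3))  # the same block type, reused twice
print(model(5))  # (5*2+0)*1+3 = 13
```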
As well as this, there is native support for loops and ragged tensors (batching variable-length sequences), which translates into execution efficiency for RNN and LSTM models. Gluon also supports sparse and quantised data and operations, for both computation and communication, and provides advanced scheduling across multiple GPUs.
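To see why ragged-tensor support matters, consider the usual workaround when a framework requires rectangular batches: shorter sequences are padded to the length of the longest one, wasting computation on filler values. A minimal sketch of that padding step (the `pad_batch` helper is hypothetical, not a Gluon function):

```python
# Minimal sketch of batching variable-length sequences by padding --
# the overhead that native ragged-tensor support avoids.
# pad_batch is a hypothetical helper for illustration, not part of Gluon.

def pad_batch(sequences, pad_value=0):
    """Pad each sequence to the length of the longest one in the batch."""
    max_len = max(len(seq) for seq in sequences)
    return [seq + [pad_value] * (max_len - len(seq)) for seq in sequences]

batch = pad_batch([[1, 2, 3], [4], [5, 6]])
print(batch)  # [[1, 2, 3], [4, 0, 0], [5, 6, 0]]
```

With native ragged-tensor batching, an RNN or LSTM can process such a batch without spending cycles on the padding positions.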
"This is another step in fostering an open AI ecosystem to accelerate innovation and democratisation of AI, making it more accessible and valuable to all," said Boyd. "With Gluon, developers will be able to deliver new and exciting AI innovations faster by using a higher-level programming model and the tools and platforms they are most comfortable with."