ARM unveils processor design with dedicated machine learning capabilities

An attempt at making its chips the standard platform for machine learning in mobile and internet of things devices


Chip designer ARM has announced it is now offering its partners processor designs with dedicated machine learning capabilities.

Dubbed Project Trillium, the processor is ARM's attempt to make its chips the standard platform for machine learning in mobile and internet of things (IoT) devices, and is claimed to be "the most efficient solution" for running neural networks.

"[Our] Machine Learning processor is an optimised, ground-up design for machine learning acceleration, targeting mobile and adjacent markets," Arm said. "The solution consists of state-of-the-art optimised fixed-function engines to provide best-in-class performance within a constrained power envelope."

The launch of the machine learning chip, aimed at general AI workloads, coincides with that of a new object-detection chip that specialises in detecting faces, people and gestures in moving images, handling full-HD video at up to 60 frames per second.

This is actually the second generation of ARM's object-detection chip; its predecessor ran in Hive's smart security camera. ARM hopes OEMs will use the updated version alongside its machine learning chip: the object-detection chip would spot faces or objects in an image or video and pass that information to the machine learning chip, which would then perform the face or image recognition.
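
A rough sketch of that hand-off is below; detect_regions and recognise_face are hypothetical stand-ins for whatever workloads the two processors would actually run, not ARM APIs.

```python
# Illustrative two-stage flow: a detection stage finds regions of interest,
# then a separate recognition stage identifies what is in each of them.
# detect_regions() and recognise_face() are hypothetical stand-ins for the
# workloads the object-detection and machine learning processors would run.
from typing import List, Tuple

Frame = List[List[int]]          # a video frame as a 2D array of pixels
Box = Tuple[int, int, int, int]  # (x, y, width, height)

def detect_regions(frame: Frame) -> List[Box]:
    """Stand-in for the object-detection stage: return candidate boxes."""
    return [(0, 0, 64, 64)]      # dummy result for illustration

def recognise_face(frame: Frame, box: Box) -> str:
    """Stand-in for the recognition stage running on the ML processor."""
    return "unknown"             # dummy label for illustration

def process_frame(frame: Frame) -> List[str]:
    """Detect first, then pass only the detected regions on for recognition."""
    return [recognise_face(frame, box) for box in detect_regions(frame)]

print(process_frame([[0] * 128 for _ in range(128)]))  # ['unknown']
```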

ARM also said that the Project Trillium chips feature onboard memory that provides central storage for weights and feature maps, reducing traffic to external memory and, with it, power consumption.
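
As a back-of-the-envelope illustration of why that matters (the layer sizes below are invented, not ARM's figures), keeping weights and intermediate feature maps on chip means only the network's input and final output have to cross the external memory bus during an inference:

```python
# Rough, illustrative estimate of external-memory traffic for a small
# network. All sizes are invented for the example; they are not ARM figures.
weights = [2.0, 4.0, 8.0]            # per-layer weight tensors, in MB
feature_maps = [4.0, 2.0, 1.0, 0.5]  # network input, then each layer's output, in MB

intermediates = feature_maps[1:-1]   # feature maps passed between layers

# Without enough on-chip memory, each intermediate feature map is written
# out to external memory by one layer and read back in by the next.
dram_streaming = (sum(weights) + feature_maps[0] + feature_maps[-1]
                  + 2 * sum(intermediates))

# With on-chip storage for weights and feature maps, only the network input
# and the final output cross the external memory bus during an inference.
dram_onchip = sum(weights) + feature_maps[0] + feature_maps[-1]

print(f"External memory traffic, streaming:       {dram_streaming:.1f} MB")
print(f"External memory traffic, on-chip buffers: {dram_onchip:.1f} MB")
```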

"[An] additional programmable layer engine supports the execution of non-convolution layers, and the implementation of selected primitives and operators, along with future innovation and algorithm generation," the firm explained, adding that there's also a network control unit which manages the overall execution and "traversal of the network" while the DMA moves data in and out of the main memory.

The firm stressed that the new machine learning chips are not meant for training machine learning models, but for running them at the edge. The aim is mobile performance of 4.6 trillion operations per second (TOPs) at an efficiency of 3 TOPs per watt, figures ARM said it expects to improve with further optimisations.
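
Taken at face value, those two figures imply a power envelope of roughly 1.5W; the snippet below is just that division spelled out, not a power figure ARM has quoted.

```python
# Implied power envelope from ARM's quoted figures: performance divided
# by efficiency. Simple arithmetic, not an officially stated power number.
performance_tops = 4.6          # trillion operations per second
efficiency_tops_per_watt = 3.0  # trillion operations per second, per watt

implied_power_watts = performance_tops / efficiency_tops_per_watt
print(f"Implied power envelope: {implied_power_watts:.2f} W")  # ~1.53 W
```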

Expect to see ARM's new AI-focused chips offered to its partners by the summer, and in the first consumer devices around this time next year.
