How GPUs can boost AI for your business

A Gigabyte G482-Z54 server

Artificial intelligence (AI) and machine learning (ML) have been causing a revolution in computing. Once the realm of science and advanced research, the ability of AI and ML to provide automation, optimisation and insights is rapidly filtering down to every level of business. One of the biggest reasons for this is how quickly GPUs have developed into a platform that accelerates AI and ML at a much lower cost than was previously possible.

The two main processing powerhouses in computing are CPUs and GPUs. While CPUs are still king when it comes to everyday business activities, anything that involves high-volume, repetitive calculation is much more the GPU's forte. The primary application for this used to be rendering the visuals for games, followed by the associated physics simulation calculations. These workloads involve large numbers of very similar operations performed rapidly in succession.
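To make that concrete, the short sketch below is a purely illustrative, CPU-only Python example using NumPy: one simple operation applied uniformly across ten million values. It is exactly this pattern of identical, independent calculations that a GPU can spread across thousands of cores at once.

```python
# Illustrative CPU-only sketch of a data-parallel workload, using NumPy.
# The point is the shape of the work: one identical operation repeated
# across millions of independent values, the pattern GPUs accelerate.
import numpy as np

# Simulate a large batch of values, e.g. pixels or model inputs
data = np.random.rand(10_000_000).astype(np.float32)

# The same calculation applied to every element; on a GPU, frameworks such
# as TensorFlow or PyTorch dispatch this pattern across thousands of cores.
result = np.sqrt(data) * 2.0 + 1.0

print(result.shape, result.dtype)
```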

As GPUs increased in power, however, primarily to meet the voracious needs of ever more demanding games titles, their capacity for GPGPU (general-purpose computing on graphics processing units) work has expanded. This GPGPU capability has been harnessed for AI and ML as well as tasks like computational fluid dynamics and scientific simulation. These applications also involve intensive, high-volume calculation of functions across massive datasets.

As GPU performance has grown, so has the hardware's usefulness in business. AMD's latest Instinct MI100 HPC GPU, for example, provides 23.1 teraflops of single precision (FP32) and 11.5 teraflops of double precision (FP64) processing, considerably more than Nvidia's Ampere A100 while consuming 25% less power.

AMD also now provides its HIP environment, which allows code to be ported to AMD hardware directly from CUDA calls. This lets existing CUDA-based GPGPU applications run on AMD Instinct GPUs, including the powerful Instinct MI100, via AMD's ROCm software platform. This can be accomplished without any loss in code efficiency, and it has made AMD Instinct GPUs a very credible option for industry-standard ML platforms such as TensorFlow.
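For teams working at the framework level rather than in hand-written CUDA, the switch can be close to invisible. The snippet below is a minimal illustrative sketch, assuming a ROCm-enabled TensorFlow build (such as the tensorflow-rocm package) is installed on a machine with an Instinct GPU; the same device API that CUDA users rely on reports the AMD hardware, so existing TensorFlow code does not need to change.

```python
# Minimal sketch assuming a ROCm-enabled TensorFlow build (e.g. the
# tensorflow-rocm package). AMD Instinct GPUs appear through the same
# device API that CUDA builds expose, so the code is identical.
import tensorflow as tf

# List the accelerators TensorFlow can see; on a ROCm system these entries
# correspond to AMD Instinct GPUs rather than Nvidia cards.
gpus = tf.config.list_physical_devices("GPU")
print("Visible GPUs:", gpus)

# Place a simple matrix multiplication on the first GPU, if one is present.
if gpus:
    with tf.device("/GPU:0"):
        a = tf.random.normal((4096, 4096))
        b = tf.random.normal((4096, 4096))
        c = tf.matmul(a, b)
    print("Matrix multiply ran on:", c.device)
```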

In fact, AMD Instinct GPUs are so capable that they will feature in the forthcoming Frontier and El Capitan supercomputers. The combination of GPGPU performance, frugal power consumption and ease of programming custom applications has made AMD Instinct a favourite in the scientific community. Oak Ridge National Laboratory, for example, has been using AMD Instinct MI100 GPUs to ready galactic evolution and plasma physics simulations for Frontier, with excellent results.

The immense compute power of a GPU like the AMD Instinct MI100 is not just for exascale HPC science applications, however. The relatively modest cost of individual GPUs and the ability to install multiple cards in a single rack server have brought this technology to a much wider audience. Gigabyte's G482-Z54 4U rack server, for example, can partner dual AMD EPYC™ 7003 series processors with up to eight PCI Express Gen4 GPGPU cards, including the AMD Instinct MI100.

With three redundant 2,200W power supplies and enough cooling to keep all those GPUs at the optimum temperature, this server platform delivers serious compute performance in a 4U format. A single server populated with eight AMD Instinct MI100 GPUs can produce a whopping 92 teraflops of double precision (FP64) processing, and ten such servers in an industry-standard 42U rack as much as 920 teraflops.

This kind of commodification of GPGPU capability means that businesses can now look to employ AI and ML in many more applications. These include computer vision for facial recognition, whether for security or office access control. Retail outlets are starting to roll out automated shopping that tracks products and customers so that checkout happens automatically. And any application involving written or spoken language can have natural language processing (NLP) applied to deliver insights and moderation, as on social media platforms.
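As an illustration of the NLP case, the sketch below assumes the open-source Hugging Face transformers library and its default pretrained sentiment model, neither of which is specific to the hardware discussed here. It simply scores a couple of example posts so they could be routed to an insights dashboard or a moderation queue, with inference placed on a GPU.

```python
# Illustrative sketch of applying NLP to user-generated text, assuming the
# Hugging Face transformers library and its default pretrained sentiment
# model; the choice of library and model is an example, not a requirement.
from transformers import pipeline

# device=0 runs inference on the first GPU; pass device=-1 to use the CPU.
classifier = pipeline("sentiment-analysis", device=0)

posts = [
    "Loving the new checkout experience in the app!",
    "The delivery was late again and nobody responded to my complaint.",
]

# Each post gets a label and confidence score that downstream systems can
# use for insights or moderation decisions.
for post, verdict in zip(posts, classifier(posts)):
    print(f"{verdict['label']:>8} ({verdict['score']:.2f}): {post}")
```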

However, the most widespread application for every business will be operational analytics: AI and ML working on corporate data to provide real-time insights and optimisation. A retail business can gain live information on product usage within and across stores, allowing it to streamline the supply chain dynamically for maximum efficiency. Big data supplied by the burgeoning number of Internet of Things (IoT) devices, perhaps monitoring the operation of machinery, can provide predictive maintenance information that catches faults before they cause an outage.
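As a simple illustration of the predictive maintenance case, the sketch below uses scikit-learn's IsolationForest, an assumed off-the-shelf choice rather than anything tied to the platforms discussed here, to flag sensor readings that look unlike a machine's normal operation. The sensor values are simulated purely for the example.

```python
# Hedged, minimal sketch of predictive maintenance on IoT sensor data using
# scikit-learn's IsolationForest. The readings are simulated; a real
# deployment would train on historical telemetry and score a live feed.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated historical readings from a healthy machine:
# temperature (deg C) and vibration (mm/s).
normal_readings = rng.normal(loc=[60.0, 2.0], scale=[2.0, 0.3], size=(5000, 2))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_readings)

# New readings arriving from the shop floor; the last one runs hot and rough.
latest = np.array([[61.2, 2.1], [59.8, 1.9], [78.5, 5.4]])

# predict() returns 1 for normal samples and -1 for anomalies worth inspecting.
for reading, flag in zip(latest, model.predict(latest)):
    status = "ALERT" if flag == -1 else "ok"
    print(f"temp={reading[0]:.1f}C vib={reading[1]:.1f}mm/s -> {status}")
```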

Big data, HPC and cloud computing are starting to dominate IT, providing the bedrock for AI and ML processing. A solid hardware platform such as the Gigabyte G482-Z54 can deliver the CPU power and GPU density to meet these demands, enabling more companies to deploy the technology on their everyday workloads.

We are only at the beginning of the AI/ML journey. IDC expects the AI market to grow by 16.4% in 2021 and to sustain a compound annual growth rate of 17.5% through 2024. As the market grows, the applications will filter down to ever more day-to-day business activities. At the core of this widening accessibility is the rapid improvement in GPU power. With the right server platform alongside the highest-performing and most cost-effective GPUs, businesses can take advantage of the AI/ML revolution. That way, they can improve operational capability, build insights right into the core of their activities, and radically improve efficiency.

Find out how the Gigabyte G482-Z54 can take your organisation’s AI to the next level
