The many faces of artificial intelligence

How AI has made the leap from science fiction to the mainstream

Artificial intelligence has graduated from science fiction to being applied to almost everything. One dental hygiene manufacturer has even claimed its electric toothbrushes now incorporate the technology.

This may sound ridiculous, but it indicates how AI has entered the mainstream, and in the process has splintered into a range of different, but related technologies. But what are the key types of AI – and how do they all fit together?

The origins of Artificial Intelligence

The first thing to understand is that, although AI began as an attempt to use computing to replicate the entire human thought process, today’s AI offshoots don’t have such grand aims, or at least not immediately. The three most common terms you will come across are the canonical AI, Machine Learning (ML) and Deep Learning (DL), and all three have subsets. Thanks to the famous Moore’s Law, computing has been getting faster and cheaper every year, meaning that it can now accelerate every type of AI, with applications as diverse as self-driving vehicles, expert systems for detecting cancer more reliably, and facial recognition.

The overarching term for AI is, obviously, AI itself, stemming from work by Alan Turing in the 1950s. The key element here is that a computing system can make its own decisions. The holy grail was “strong AI” or Artificial General Intelligence, which is where AI has the same mental capabilities as human beings. This has proven much more complicated to achieve than first expected, which is reassuring, because closely connected to it is the idea of Artificial Super Intelligence, where AI not only replicates our brains but supersedes them in every way.

More recently, the realisation that General AI is unrealistic with current technology and neurological understanding has led to “weak AI”, or Artificial Narrow Intelligence. This focuses on specific tasks, such as winning at chess, voice recognition in Siri or Alexa, aircraft autopilot systems and self-driving cars. These systems don’t think like us, but they can make decisions within a clearly defined domain that simulate a subset of our abilities. This kind of technology is already with us, and it doesn’t necessarily require the most powerful local computing, because the model being applied has already been developed centrally and the local system is merely making narrow decisions based on it – for example, working out whether you said “the” or “tea”.
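The split between central training and lightweight local decisions can be sketched in a few lines. This is a toy illustration, not a real speech system: the weights below stand in for a model trained centrally, and the numbers are invented purely for demonstration.

```python
import math

# Hypothetical weights from a centrally trained model; these numbers are
# invented for illustration, not taken from any real speech recogniser.
WEIGHTS = [2.0, -1.5]
BIAS = 0.1

def classify(features):
    """Return 'the' or 'tea' from a simple logistic decision.

    The local device only evaluates the model; the expensive training
    already happened centrally, which is why narrow AI can run on
    modest local hardware.
    """
    score = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    probability = 1.0 / (1.0 + math.exp(-score))
    return "the" if probability >= 0.5 else "tea"

print(classify([1.0, 0.2]))   # features leaning towards 'the'
print(classify([-0.5, 1.0]))  # features leaning towards 'tea'
```

The heavy lifting is in producing `WEIGHTS`; applying them is just a handful of multiplications, cheap enough for any edge device.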

Machine Learning

Building the models used by Artificial Narrow Intelligence is where Machine Learning comes in. This is a statistical process that uses learning algorithms to train a system. Unlike a traditionally programmed system, an ML system can continually update itself to build better models as more information is fed in. The process can be supervised, where a known data set with labelled responses is used to build the model. With unsupervised learning, the system looks for patterns in the data itself, such as clusters. Reinforcement learning is where the system improves through trial and error, receiving feedback on its outputs so that the model gets better over time. For example, Tesla’s trial of its Full Self-Driving software, currently taking place in selected US states, feeds data back into its autonomous driving system, helping to improve the model for better results and, hopefully, fewer accidents.
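The supervised/unsupervised distinction can be made concrete with a minimal sketch in plain Python. A real system would use a library such as scikit-learn; the data here is made up, and the “models” are deliberately tiny.

```python
# Supervised: labelled examples (feature, label) let us learn a decision
# threshold directly from the answers we were given.
labelled = [(1.0, "cat"), (1.2, "cat"), (3.8, "dog"), (4.1, "dog")]
cat_mean = sum(x for x, y in labelled if y == "cat") / 2
dog_mean = sum(x for x, y in labelled if y == "dog") / 2
threshold = (cat_mean + dog_mean) / 2

def predict(x):
    return "cat" if x < threshold else "dog"

# Unsupervised: no labels at all, just look for two clusters in the data
# (a simple k-means loop, repeated until the centres settle).
points = [1.0, 1.2, 3.8, 4.1]
centres = [points[0], points[-1]]
for _ in range(10):
    groups = [[], []]
    for p in points:
        groups[abs(p - centres[0]) > abs(p - centres[1])].append(p)
    centres = [sum(g) / len(g) for g in groups]

print(predict(1.5), centres)
```

The supervised model needed the labels to find its threshold; the clustering loop recovered similar group centres without ever being told what the groups meant.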

Although ML can be performed “at the edge” (i.e. on local computing devices), the lion’s share is generally performed centrally. The process involves small functions repeated a huge number of times, which is something GPUs are very good at, because this is also how 3D images are built. A good basis for an ML data centre is therefore a server with support for an extremely high GPU density. When analytics data is parsed by the CPU and calculation tasks are scheduled onto dedicated GPGPU devices for parallel computation, fast interconnects between CPU and GPU, and between the GPUs themselves, become crucial. To achieve high-speed interconnect performance within minimal space for maximum computation density, the CPU architecture plays an important role in managing peripheral devices – and here the market widely regards AMD as the leader for almost all use cases.

For example, Gigabyte’s G292-Z45 supports dual AMD EPYC™ 7003 series processors and can accommodate up to eight dual-slot GPU cards, while Gigabyte’s G492-ZD2 supports up to eight NVIDIA A100 SXM4 GPUs, which are specifically optimised for ML, DL and other GPU-accelerated HPC workloads.


Deep Learning

Deep Learning is a subset of ML in which the AI system learns by example, in a similar way to how most human beings learn. DL systems generally use neural networks, which come in three fundamental types: convolutional, recurrent and recursive. Convolutional neural networks are often used in applications like computer vision for analysing and classifying images. A convolutional network assigns weightings to features within the image to differentiate between them.
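The weighting idea at the heart of a convolutional network can be sketched with NumPy: a small kernel of weightings slides over the image and responds strongly wherever its feature appears. The kernel here is a hand-picked vertical-edge detector; in a real network the values would be learned from data.

```python
import numpy as np

# A 4x4 toy "image": dark on the left, bright on the right.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A classic vertical-edge kernel: negative weightings on the left,
# positive on the right, so it fires where brightness changes.
kernel = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

def convolve2d(img, k):
    """Slide the kernel over the image, summing weighted pixels."""
    h, w = k.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * k)
    return out

feature_map = convolve2d(image, kernel)
print(feature_map)  # high values mark the vertical edge in the image
```

Stacking many such kernels, each tuned to a different feature, and feeding their outputs into further layers is essentially what a CNN does at scale.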

Recurrent Neural Networks tend to be used in Natural Language Processing (NLP), such as voice recognition or sentiment analysis on social media. They perform the same task for every element of a sequence, but with the output dependent on previous results. Applying NLP to each word in a sentence is naturally a progression, because the possibilities for what each new word can be are modified by what the last one was. Entering a satnav address by voice recognition can work well because addresses only have a certain range of formats – with the town coming after the street name, and the house number before that, for example.
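A minimal recurrent step shows how that dependence on previous results works: the same weights are applied to every word, but a hidden state carries context forward. The weights below are random placeholders, not a trained language model.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # input-to-hidden weights (shared per step)
U = rng.normal(size=(4, 4))   # hidden-to-hidden weights (carries context)

def run(sequence):
    """Apply the same recurrent update to each element in turn."""
    h = np.zeros(4)
    for x in sequence:
        h = np.tanh(W @ x + U @ h)  # same operation, evolving state
    return h

word_a = np.array([1.0, 0.0, 0.0])
word_b = np.array([0.0, 1.0, 0.0])

# The same final word yields different states depending on what preceded
# it, which is exactly why RNNs suit sequence tasks like speech and text.
h1 = run([word_a, word_b])
h2 = run([word_b, word_b])
print(np.allclose(h1, h2))
```

The two final states differ even though both sequences end in the same word, mirroring how “the” narrows the options for the word that follows it.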

Recursive neural networks can also be used for this kind of application. They use a system of tensors, which are algebraic objects that describe a multilinear relationship between sets of objects in a vector space. In other words, they are arrays of numbers that relate to other arrays of numbers, so applying a transformation to one changes the others it relates to. Using a recursive system on a sentence breaks it into chunks which can be operated on individually, with the results feeding back into the overall understanding.
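The recursive idea can be sketched as follows: break a sentence into a tree of chunks, reduce each chunk to a vector, and apply one shared combining function up the tree until a single vector represents the whole sentence. The word vectors and weight matrix here are arbitrary placeholders, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 6))  # combines two 3-d vectors into one

def combine(left, right):
    """The shared operation applied at every node of the parse tree."""
    return np.tanh(W @ np.concatenate([left, right]))

def encode(tree):
    """tree is either a word vector or a (left, right) pair of subtrees."""
    if isinstance(tree, tuple):
        return combine(encode(tree[0]), encode(tree[1]))
    return tree

the = np.array([0.1, 0.2, 0.3])
cat = np.array([0.9, 0.1, 0.4])
sat = np.array([0.3, 0.8, 0.2])

# ((the cat) sat): chunks are encoded individually, then the results
# feed back into one vector for the whole sentence.
sentence = ((the, cat), sat)
vector = encode(sentence)
print(vector.shape)
```

However the sentence is bracketed, the output is always one fixed-size vector, which is what lets later layers reason about sentences of any length.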

DL and ML can benefit greatly from starting at the edge, because doing so reduces the amount of data sent to a central system. If you had to send raw video recordings to the data centre for modelling, for example, your network could soon become saturated. It is much better in this case to perform initial modelling at the edge and send the results back to the central server for incorporation into a more global model. This process is aided by powerful edge servers with a small footprint that can be installed near the data source, such as Gigabyte’s E152-ZEO, which packs a powerful AMD EPYC™ processor and one dual-width GPU (or two single-width GPUs) into a 1U rack format.
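The bandwidth saving is easy to quantify with a sketch: rather than shipping raw frames to the data centre, the edge server reduces each frame to a handful of statistics and sends only those. The frame sizes and chosen features here are illustrative, not from any real deployment.

```python
import numpy as np

# Ten fake 480x640 greyscale frames standing in for a video feed.
frames = [np.random.default_rng(i).integers(0, 256, size=(480, 640))
          for i in range(10)]

def summarise(frame):
    """Reduce a frame to a compact feature vector for the central model."""
    return [float(frame.mean()), float(frame.std()),
            float(frame.min()), float(frame.max())]

payload = [summarise(f) for f in frames]

raw_values = sum(f.size for f in frames)      # values if sent raw
sent_values = sum(len(s) for s in payload)    # values actually sent
print(f"raw: {raw_values} values, sent: {sent_values} values")
```

Even this crude four-number summary cuts the transfer from millions of pixel values to a few dozen, which is the core argument for doing the first pass at the edge.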

AI everywhere

Although functions like self-driving cars and facial recognition to assist law enforcement have the highest public profile, one of the most ubiquitous applications of AI, ML and DL is predictive analytics. When Google knows what you might be searching for before you do, when Alexa suggests a regular purchase just as you were thinking of it, and when your sales forecast helps you work out how much of a component to order for your supply chain – that’s predictive analytics at work.
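At its simplest, predictive analytics is extrapolation from past data, which can be sketched in a few lines. The sales figures below are invented; a real system would use richer models and far more history than a straight-line fit.

```python
import numpy as np

months = np.arange(6)  # six months of history, numbered 0..5
sales = np.array([100, 110, 118, 131, 140, 151], dtype=float)

# Fit a least-squares trend line to the past, then extrapolate one month.
slope, intercept = np.polyfit(months, sales, 1)
forecast = slope * 6 + intercept  # predicted sales for month 6

print(round(float(forecast)))
```

The same extrapolate-from-history principle, scaled up across thousands of variables, is what drives demand forecasting and product recommendations.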

Despite our concerns that AI might turn into SkyNet and destroy humanity, its more focused application through subsets such as ML and DL can provide genuine benefit to all our lives. Now that the hardware to run AI workloads – like that manufactured by Gigabyte – is cheaper, increasingly powerful and more readily available, applications like predictive analytics can find efficiencies that would be too complex or laborious for human beings. This can pay dividends for company profits. You just need to make sure you use the right type of AI, powered by the most appropriate hardware for the job.

Find out how GIGABYTE’s AMD EPYC™-based GPU servers can take your organisation’s AI to the next level
