Machine learning vs AI

These two terms are often used interchangeably, but they are fundamentally different technologies.

We're in an exciting time when it comes to the birth of new digital technologies. Things that within most adults' lifetimes were the stuff of science fiction are increasingly becoming part of everyday life.

Nothing's ever simple in the world of tech, though, and perhaps because of this transition from the fantastical to the mundane there can be confusion between different terms and the technologies they refer to.

Artificial intelligence (AI) and machine learning (ML) are two such terms: Although they're often used interchangeably in popular culture and media, they aren't actually the same thing.

The reason for the confusion is likely twofold. Firstly, AI is an attribute applied quite loosely to anything deemed "smart". Home assistants, such as Google Home, Alexa and Siri, are considered AI and are probably the scenario where most people encounter the technology, but it can also refer to some forms of suggested text, such as during email composition. Away from human interactions, AI is also interwoven in other technologies behind the scenes: intelligent load balancing, for example, or some network security systems.

Secondly, machine learning is a subset of AI, meaning that while ML is AI, AI is not necessarily ML. To confuse matters further, ML also has various subdisciplines of its own, such as deep learning and reinforcement learning.

What's the difference between ML and AI?

The history of AI is a long one: for thousands of years, humans have dreamt of and mythologised machines and other creations that could "come to life", behaving and thinking as humans do. There was a time when early computers, due to their "logical" nature, were also considered a type of artificial intelligence.

In its current manifestation, however, the idea of AI can trace its history to British computer scientist and World War II codebreaker Alan Turing. He proposed a test, which he called the imitation game but which is now more commonly known as the Turing Test, in which an interrogator converses with two others, one of which is a machine, through a text-only channel. If the interrogator is unable to tell the difference between the machine and the person, the machine is considered to have "passed" the test.

This basic concept is referred to as "general AI" and is generally considered to be something researchers have yet to fully achieve.

Research into "narrow" or "applied" AI, however, has been far more successful at creating working models. Rather than attempting to create a machine that can do everything, this field attempts to create a system that can perform a single task as well as, if not better than, a human.

It's within this narrow AI discipline that the idea of machine learning first emerged, in fact as early as the middle of the twentieth century. First defined by AI pioneer Arthur Samuel in a 1959 academic paper, ML represents "the ability to learn without being explicitly programmed". In other words, where narrow AI may rely on human-built algorithms, a system built using ML could create its own.
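
Samuel's definition can be illustrated with a toy sketch: rather than a programmer hand-coding a spam threshold, the program derives one from labelled examples. The scoring scheme, function names and data below are invented purely for illustration.

```python
# A minimal sketch of "learning without being explicitly programmed":
# the decision rule is derived from labelled data, not written by hand.

def learn_threshold(examples):
    """Learn a score threshold from (score, is_spam) pairs.

    The threshold is the midpoint between the average score of each
    class, so it comes from the data rather than a hand-written rule.
    """
    spam = [score for score, label in examples if label]
    ham = [score for score, label in examples if not label]
    return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2

def classify(score, threshold):
    """Apply the learned rule to a new message's score."""
    return score > threshold

training_data = [(9, True), (8, True), (2, False), (1, False)]
threshold = learn_threshold(training_data)

print(threshold)            # 5.0 for this toy data
print(classify(7, threshold))  # True: flagged as spam
```

Feed the same program different training data and it derives a different rule, which is the essence of Samuel's definition.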

Uses and applications

Machine learning

Interest in ML has waxed and waned over the years, but with data becoming an increasingly important part of business strategy, it's fallen back into favour as organisations seek ways to analyse and make use of the vast quantities of information they collect on an almost constant basis.

When this data is put into a machine learning program, the software not only analyses it but learns something new with each new dataset, becoming a growing source of intelligence. This means the insights that can be learnt from data sources become more advanced and more informative, helping companies develop their business in line with customer expectations.

One application of ML is in a recommendation engine, like Facebook's newsfeed algorithm or Amazon's product recommendation feature. ML can analyse how many people are liking, commenting on or sharing posts, or what people with similar interests are buying, and then show a post or product to others it predicts will like it.
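
A recommendation engine of this kind can be sketched in a few lines: suggest items bought by users whose purchase histories overlap with yours, weighted by the size of the overlap. The users, items and scoring below are invented for illustration, not any platform's actual algorithm.

```python
# Toy "customers with similar interests also bought" recommender.

def recommend(user, purchases):
    """Suggest items bought by users with overlapping purchase histories."""
    mine = purchases[user]
    suggestions = {}
    for other, theirs in purchases.items():
        if other == user:
            continue
        overlap = len(mine & theirs)  # shared purchases = similarity
        if overlap == 0:
            continue  # no shared interests, so skip this user
        for item in theirs - mine:
            suggestions[item] = suggestions.get(item, 0) + overlap
    # rank suggestions by how much similar-user evidence backs them
    return sorted(suggestions, key=suggestions.get, reverse=True)

purchases = {
    "alice": {"laptop", "mouse"},
    "bob": {"laptop", "mouse", "keyboard"},
    "carol": {"novel"},
}
print(recommend("alice", purchases))  # ['keyboard']
```

Alice is recommended the keyboard because Bob shares two purchases with her, while Carol's novel is ignored because they share none.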

ML is also particularly useful for image recognition: humans first label what's in a set of pictures, effectively programming the system by example, and it then uses those examples to autonomously identify what's in new pictures. For example, machine learning can analyse the distribution of pixels in an image to work out what the subject is.
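
As a deliberately minimal illustration of classifying by pixel distribution: each "image" below is a tiny grid of 0/1 pixels summarised by its fraction of dark pixels, and an unknown image takes the label of the closest labelled example. Real systems use far richer features; all names and data here are invented.

```python
# Nearest-neighbour classification on a crude pixel-distribution feature.

def darkness(image):
    """Fraction of dark (1) pixels: a crude pixel-distribution summary."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def classify(image, labelled_examples):
    """Give the image the label of the example nearest in darkness."""
    return min(labelled_examples,
               key=lambda ex: abs(darkness(ex[0]) - darkness(image)))[1]

night_sky = ([[1, 1], [1, 0]], "night")  # mostly dark pixels
day_sky = ([[0, 0], [0, 1]], "day")      # mostly light pixels
unknown = [[1, 1], [0, 1]]

print(classify(unknown, [night_sky, day_sky]))  # night
```

The human-labelled examples are the "programming"; the system then labels new images on its own, just as the paragraph above describes.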

Enterprises are now turning to ML to drive predictive analytics, as big data analysis becomes increasingly widespread. The association with statistics, data mining and predictive analysis has become dominant enough for some to argue that machine learning is a separate field from AI.

The reason for this is that AI technology, such as natural language processing or automated reasoning, can be built without any capability for machine learning, and ML systems do not necessarily need any other features of AI.

Artificial intelligence

There are hundreds of use cases for AI, and more are becoming apparent as companies adopt artificial intelligence to tackle business challenges.

One of the most common uses of AI is in automation. In cyber security, for example, AI algorithms can be programmed to detect threats that may be difficult for a human to spot, such as subtle changes in user behaviour or an unexplained increase in the amount of data being transferred to and from a particular node (eg a computer or sensor). In the home, assistants like Google Home or Alexa can help automate lighting and heating, and handle interactions with businesses through chatbots.
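
The traffic-spike example can be sketched as a simple statistical check: flag a node whose latest transfer volume sits several standard deviations outside its historical norm. The traffic figures below are invented, and real security systems use far more sophisticated models than this.

```python
# Flag unusual data-transfer volumes with a standard-deviation check.

import statistics

def is_anomalous(history, latest, threshold=3.0):
    """Return True if `latest` lies more than `threshold` standard
    deviations from the node's historical mean transfer volume."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(latest - mean) > threshold * stdev

usual_traffic = [100, 110, 95, 105, 98, 102]  # MB per hour for one node

print(is_anomalous(usual_traffic, 104))  # False: within the norm
print(is_anomalous(usual_traffic, 500))  # True: unexplained spike
```

A 104MB hour looks like business as usual; a 500MB hour stands out immediately, which is exactly the kind of subtle-versus-obvious distinction such systems automate at scale.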

There are well-founded fears that AI will replace human job roles, such as data input, at a faster rate than the job market will be able to adapt to. Author and venture capitalist Kai-Fu Lee, who has worked at both Apple and Google and earned a PhD from Carnegie Mellon for the development of an advanced speech recognition AI, warned in 2019 that "Many jobs that seem a little bit complex, a chef, a waiter, a lot of things, will become automated.

"We will have automated stores, automated restaurants and all together, in 15 years, that's going to displace about 40% of jobs in the world."

Confusing AI and ML

Beyond machine learning and AI, there are a host of other terms often thrown into the mix, confusing things further. For example, artificial neural networks are designed to process information in a way similar to the human mind and can be used for machine learning too, although, once again, not all neural nets are AI or ML, and not every ML program uses an underlying neural net.
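
For a sense of what such networks are built from, here is a minimal sketch of a single artificial neuron: inputs are weighted, summed, and passed through an activation function. The weights and inputs below are fixed by hand purely for illustration; in a real network they would be learned from data.

```python
# A single artificial neuron: weighted sum plus sigmoid activation.

import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs pushed through a sigmoid activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # squashes output into (0, 1)

output = neuron(inputs=[1.0, 0.5], weights=[0.8, -0.4], bias=0.1)
print(round(output, 3))  # 0.668
```

Networks of many such neurons, layered and trained together, are what loosely mimic how the mind processes information.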

As this is a developing field, terms are popping in and out of existence all the time and the barriers between the different areas of AI are still quite permeable. As the technology becomes more widespread and more mature, these definitions will likely also become more concrete and well known. Or who knows, we may develop generalised AI and all the definitions will fall away.
