What is machine learning?

No longer science fiction, machines are getting cleverer by the day


Machine learning is a field of research focused on building artificially intelligent systems that improve through trial and error, without needing explicit instructions from humans.

The method can be used to build models in which systems analyse data and make decisions based entirely on inference and pattern recognition. ML usually involves training algorithms on sample data that acts as a reference point for future decisions.

What is machine learning?

Machine learning is the process of feeding data into systems so they can answer questions. It's a type of trial-and-error scenario for machines, where data is given to algorithms to create a model, usually for prediction tasks. For example, say the data is a record of the days you've worked: a prediction algorithm could analyse the days you were ill or on holiday to build a model of when you are likely to be absent next year.

Often the process requires more than one algorithm. These are classed as linear models, non-linear models or neural networks, and the right choice ultimately depends on both your data set and the problem you're trying to solve.
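As a toy illustration of that 'data in, model out' loop, the sketch below fits a straight line to a made-up data set using ordinary least squares, then uses the fitted line to predict an unseen value. The data, variable names and numbers are all invented purely for illustration, and a straight line is only one of many possible model choices.

```python
# Fit a straight line to toy data with ordinary least squares,
# then use the fitted model to make a prediction.

def fit_line(xs, ys):
    """Return the slope and intercept of the least-squares line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hours studied vs. test score (made-up training data)
hours = [1, 2, 3, 4, 5]
scores = [52, 58, 65, 70, 78]

slope, intercept = fit_line(hours, scores)
predicted = slope * 6 + intercept  # predict the score for 6 hours
print(round(predicted, 1))
```

Once fitted, the model is just the pair (slope, intercept): the training data can be thrown away and predictions made from those two numbers alone.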

How do machine learning algorithms work?

The algorithms are programs that learn from data and improve with experience, with little or no human intervention. They are split into three types: supervised learning, unsupervised learning and reinforcement learning. Each has a different use and enables systems to use data in various ways.

Supervised learning

Supervised learning involves labelled training data, which an algorithm uses to learn the mapping function that turns input variables into an output variable. Within this are two types of supervised learning: classification, which is used to predict the outcome of a given sample when the output is a category, and regression, which is used to predict the outcome of a given sample when the output variable is a real value, such as a 'salary' or a 'weight'.
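The distinction can be made concrete with two toy models, hand-written here rather than learned, purely to show the shape of each output. The thresholds and coefficients below are invented for illustration.

```python
# Classification returns a category; regression returns a real value.

def classify_temperature(celsius):
    """Classification: map an input to one of a fixed set of labels."""
    return "hot" if celsius >= 25 else "cold"

def predict_salary(years_experience):
    """Regression: map an input to a continuous value (a toy linear model)."""
    return 25_000 + 3_000 * years_experience

print(classify_temperature(30))  # a category
print(predict_salary(4))         # a real value
```

A real supervised learner would discover the threshold and the coefficients from labelled examples rather than having them written in by hand.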


An example of a supervised learning model is the K-Nearest Neighbors (KNN) algorithm, which is a method of pattern recognition. KNN essentially involves using a chart to reach an educated guess on the classification of an object based on the spread of similar objects nearby.

In the chart above, the green circle represents an as-yet unclassified object, which can only belong to one of two possible categories: blue squares or red triangles. In order to identify which category it belongs to, the algorithm analyses which objects are nearest to it on the chart. In this case, the algorithm will reasonably conclude that the green circle belongs to the red triangle category.
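That voting idea can be sketched in a few lines of Python. The coordinates and labels below are invented to mirror the squares-and-triangles example, and k=3 is an arbitrary choice.

```python
# K-Nearest Neighbours: classify a point by majority vote
# among the k closest labelled points.
from collections import Counter
import math

def knn_classify(point, training, k=3):
    """training is a list of ((x, y), label) pairs."""
    by_distance = sorted(training,
                         key=lambda item: math.dist(point, item[0]))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

training = [((1, 1), "blue square"), ((1, 2), "blue square"),
            ((2, 1), "blue square"), ((5, 5), "red triangle"),
            ((6, 5), "red triangle"), ((5, 6), "red triangle")]

print(knn_classify((5.5, 5.2), training))
```

A point dropped near the cluster of triangles gets outvoted into the triangle category, exactly as in the chart; moving it towards the squares flips the vote.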

Unsupervised learning

Unsupervised learning models are used when there are only input variables and no corresponding output variables. They use unlabelled training data to model the underlying structure of the data.

There are three types of unsupervised learning algorithms: association, which is extensively used in market-basket analysis; clustering, which groups samples so that objects within the same cluster are more similar to one another than to objects in other clusters; and dimensionality reduction, which is used to trim the number of variables in a data set while keeping its important information intact.
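Clustering, for instance, can be illustrated with k-means, one common algorithm among many: assign each point to its nearest centroid, move each centroid to the mean of its assigned points, and repeat. The points and starting centroids below are made up for illustration.

```python
# A minimal k-means sketch: no labels are given; the structure
# (two groups of points) is discovered from the data itself.
import math

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)]
    return centroids, clusters

points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centroids, clusters = kmeans(points, centroids=[(0, 0), (10, 10)])
print(centroids)
```

With no labels at all, the algorithm still settles on one centroid near each group of points, which is exactly the "underlying structure" unsupervised learning aims to recover.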

Reinforcement learning

Reinforcement learning allows an agent to decide its next action based on its current state by learning behaviours that will maximise a reward. It's often used in gaming environments where an algorithm is provided with the rules and tasked with solving the challenge in the most efficient way possible. The model will act randomly at first, but over time, through trial and error, it will learn where and when it needs to move in the game to maximise points.

In this type of training, the reward is simply a state associated with a positive outcome. For example, an algorithm will be 'rewarded' with a task completion if it is able to keep a car on a road without hitting obstacles. 
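One common way to implement this trial-and-error loop is tabular Q-learning, sketched below on a toy one-dimensional 'track' where the reward sits at the far end. The environment, the learning rate and every other parameter here are invented for illustration, not taken from any particular system.

```python
# Tabular Q-learning on a 5-cell track: the agent starts at cell 0
# and earns a reward only by reaching cell 4. It acts randomly at
# first, then gradually favours the actions with the best learned value.
import random

random.seed(0)
N_STATES, GOAL = 5, 4      # states 0..4, reward on reaching state 4
ACTIONS = [-1, +1]         # move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2
for episode in range(200):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise pick the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Update the estimate for (state, action) towards reward
        # plus the discounted value of the best next action.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next
                                       - q[(state, action)])
        state = next_state

# After enough episodes the greedy policy should move right everywhere.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)])
```

Early episodes wander; as the reward's value propagates backwards through the table, 'move right' comes to dominate at every cell, which is the learned behaviour the paragraph above describes.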

Why is machine learning useful?

In essence, ML solves the problem of too much data: so much information is generated by people, actions, events, computers and gadgets that learning anything from it is virtually impossible for humans. In medical analysis, finding patterns in thousands of MRI scans would take a human hours, days or weeks, but a machine can ingest that information and spot the patterns in seconds, provided the scans are correctly labelled.

Where is machine learning used?

One of the simplest and most successful examples of machine learning is something we use every day - Google Search. The search engine is powered by many ML algorithms that read and analyse the text you put in, tailoring the results based on your search history and online habits. For instance, if you type in 'Java', you'll see results about either the programming language or coffee surfaced more frequently, depending on which it has determined you'll prefer.


Many future technological advancements, such as driverless cars and smart cities, depend on the development of machine learning. Many of the systems that will power smart cities are already entering the public space, such as facial recognition, where ML algorithms are taught to recognise patterns in images and identify objects based on their characteristics. This, however, has proven to be a controversial use of ML, particularly as it isn't always accurate and often involves some form of routine surveillance of citizens.

Data bias

As machine learning improves and is used in more technologies, the worry about embedding bias into critical and public-facing software grows. ML applications are dependent on data, and it's this data that can be the source of bias. For example, if a company wants to hire more diversely but trains its system on its current employees' CVs, by default its machine learning program will only look for more of the same.

It's this type of application of machine learning that has governments worried and, as such, many are resorting to enforcing rules and regulations to combat this issue. The UK's Centre for Data Ethics and Innovation (CDEI) announced it was to join forces with the Cabinet Office's Race Disparity Unit to investigate potential bias in algorithmic decision-making. Likewise, the US government is to pilot diversity regulations for work on AI that reduces the risk of sexual and racial bias within computer systems.

Image by Antti Ajanki AnAj / CC BY-SA 3.0

