What is cognitive computing?


For all the technological advancements in computing over the years, whether it be the internet, cloud computing, or even super-slimline business laptops, we've never quite produced anything that can match the sophistication of the human brain.

Our own internal supercomputer is capable of processing data in ways that have yet to be replicated fully. We may not be able to retain vast amounts of data or perform complex calculations on demand, but we are able to reason, predict, rationalise, and make our own decisions – skills that are unique to humans. Yet that may not be true for much longer, as researchers are developing new systems that amalgamate the incredibly intricate processes of the human brain with the vast data stores of a computer.

What is cognitive computing?

This is precisely what the field of cognitive computing is trying to achieve. Computing based on cognition, or the processes of the human brain, involves creating systems that are able to self-learn, recognise patterns and objects, understand language, and ultimately operate without the input of a human.

It's often thought of as the third age of computing, having first evolved from the simple calculators of the early 1900s to the programmable machines we see mass-produced today. It also forms the backbone of most of the experimental forms of computing we see making the news, whether it be artificial intelligence (AI), machine learning, robotics, neural networks or virtual reality (VR).

Unlike traditional systems that simply perform the tasks we've programmed them to do, a cognitive computer is built using machine learning algorithms. The system acquires knowledge by sifting through vast quantities of data, slowly learning to spot patterns and recognise inconsistencies, which it then uses to make predictions. The more data a system is exposed to, the more accurate it becomes when it encounters something new.
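
As a rough, hand-rolled illustration of that idea (not IBM's actual code, and using the open-source scikit-learn library purely for convenience), the sketch below trains a simple classifier on synthetic data and shows how its predictions on unseen examples tend to improve as the training set grows.

```python
# A minimal sketch of "learning from data": the more labelled examples the
# model sees, the better it tends to predict on new, unseen cases.
# Requires scikit-learn (pip install scikit-learn).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic data standing in for the "vast quantities of data"
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

for n in (100, 1000, 4000):                 # grow the training set each time
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])     # learn patterns from labelled examples
    preds = model.predict(X_test)           # predict on data it has never seen
    print(f"trained on {n} examples -> accuracy {accuracy_score(y_test, preds):.2f}")
```

The model and dataset here are arbitrary; the point is only the relationship between the volume of training data and the accuracy of predictions on something new.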

Most importantly, cognitive computers are able to adapt to changing requirements or information, and use context to inform learning. In theory, there would never be a need to interfere with a cognitive system, as it would be able to change its parameters based on the needs of the user.
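
One hedged way to picture that kind of adaptation is online learning, where a model nudges its parameters each time fresh data arrives rather than being reprogrammed or retrained from scratch. The snippet below is an illustrative sketch using scikit-learn's SGDClassifier; the data and the notion of a daily batch are invented for the example.

```python
# Illustrative only: an online learner that adapts its parameters as new
# batches of data arrive, without being retrained from the ground up.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)
classes = np.array([0, 1])          # all labels the model will ever see

rng = np.random.default_rng(0)
for day in range(5):
    # Each "day" brings a new batch of data, and the underlying pattern drifts.
    X_batch = rng.normal(loc=day * 0.1, size=(200, 10))
    y_batch = (X_batch.sum(axis=1) > day).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)  # incremental update
    print(f"day {day}: model updated on {len(X_batch)} new examples")
```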

How does cognitive computing work?

Unlike regular machine learning, which can be an exploratory and experimental discipline, cognitive computing is specifically about emulating the human brain. To illustrate how this works, consider the common nematode worm: a simple creature that can forage for food, reproduce, defend its eggs and enact a whole repertoire of animal behaviour with just over 200 brain cells. Contrast this with your average supercomputer – thousands, if not millions, of times more complex than the nematode, yet nowhere near as adaptable. This is where the potential of a cognitive computer becomes clear.

Most cognitive computing projects work with digital models of neurons inside a familiar electronic framework. It’s an application of Alan Turing’s observation that any computer can emulate any other – and make no mistake, he explicitly referred to his own computer projects as “building a brain”.
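
At its simplest, one of those digital neurons is nothing more than a weighted sum of its inputs passed through an activation function. The toy function below (with made-up weights, and no claim to match any particular project's implementation) shows the basic idea:

```python
import math

def neuron(inputs, weights, bias):
    """A single digital 'neuron': a weighted sum of its inputs plus a bias,
    squashed through a sigmoid activation into the range 0..1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Arbitrary example values; in a real network these weights would be learned.
print(neuron(inputs=[0.5, 0.9, -0.3], weights=[0.8, -0.2, 0.4], bias=0.1))
```

Wire enough of these together, let training adjust the weights, and you have the building blocks most cognitive computing projects work with.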

Has anyone built a so-called cognitive computer?

With its Watson supercomputer, IBM perhaps holds the crown for the most recognisable cognitive system today. With access to more than 90 servers and 200 million pages of information, it's able to replicate the way a human may answer a question, including the creation of hypotheses and using evidence to back up a theory.

Architecturally, there are huge differences between the nematode’s multi-connected neurons and the much more regimented (but very numerous) chips that make up Watson, but at a basic level IBM’s system relies on relatively simple low-level routines that resemble the functions of an organic brain.

It's hard to argue with the results; in 2011, Watson took part in a specially staged edition of the US TV game show Jeopardy! and easily defeated its human competitors.

What can cognitive computing do for my business?

IBM's Watson has already proven to have incredible potential, but only in narrow fields. For example, Watson is being used in the healthcare industry as a tool to work alongside doctors as a know-it-all assistant. It's able to draw upon patient records, previous cases, academic journals, and diagnostics to create a list of recommendations.

As Watson's healthcare results are based on live data, its recommendations would take into account emerging breakthroughs and schools of thought at a far greater pace than a human doctor could. This is not to say that what the system comes up with is necessarily the right course of action, but it offers a way of analysing data far more efficiently than is currently possible for a human.

This type of information gathering could be applied to almost any industry to huge benefit. Whether it's a cognitive system that can analyse legal documents and case precedents, create highly personalised educational experiences tailored to the age of schoolchildren, or even predict criminal activity in a city by analysing strategies and statistics, there's scope for cognitive computing to revolutionise industries in a way not seen since the introduction of the programmable computer.

This doesn't mean it would be able to bring this level of computer 'intuition' to another area of expertise immediately though – it would have to ingest entirely new data sets and build its knowledge from the ground up.

When can we start taking advantage of cognitive computing?

You might not realise it, but cognitive computing is available right now, through application programming interfaces (APIs) and cloud services. For example, you can license IBM's Watson Assistant to handle front-line customer service, while the Watson Studio platform lets you develop and run AI projects on IBM's silicon.
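
As a sketch of what "available through an API" looks like in practice, the snippet below uses IBM's published Python SDK (the ibm-watson package) to send a single message to a Watson Assistant instance. The API key, service URL, assistant ID and version date are placeholders you would take from your own IBM Cloud account, and the exact method names should be checked against IBM's current documentation.

```python
# Sketch of calling Watson Assistant via IBM's Python SDK
# (pip install ibm-watson). Credentials below are placeholders.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
assistant = AssistantV2(version="2021-06-14", authenticator=authenticator)
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

# Open a conversation session, then send one customer query to it.
session = assistant.create_session(assistant_id="YOUR_ASSISTANT_ID").get_result()
response = assistant.message(
    assistant_id="YOUR_ASSISTANT_ID",
    session_id=session["session_id"],
    input={"message_type": "text", "text": "Where is my order?"},
).get_result()

# Print the assistant's first text reply.
print(response["output"]["generic"][0]["text"])
```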

Another cognitive-type service is Wolfram Alpha, which answers questions with a Delphic breadth of knowledge and simulated understanding: ask for the time of day and you might get a whole physics text in response. It showcases the potential of computers to provide what we might call cognitive support – something we all need from time to time in this switched-on age.