Is AI workplace monitoring helpful or harmful?

Clock cards and sign-in sheets have been replaced by algorithms that can predict working habits

No matter where we work, be it in an office, a factory, at home or out in the field, our employers can keep a watchful eye over our every move. That doesn’t just include how frequently we’re emailing or how collaborative we’re being with our colleagues, but also how much of our day is spent making cups of tea or even taking trips to the toilet.

And that’s just the tip of the iceberg. Thanks to the rise of motion sensors, activity monitors, analytics and artificial intelligence in the workplace, it’s increasingly possible for employers to know far more about their employees than ever before.

While there’s no doubt that these technologies can be useful for employers looking to identify inefficiencies in their business model and automate previously human-led tasks - which can bring a suite of operational benefits - this kind of workplace surveillance has been criticised by those who claim such technologies can do more harm than good.

Unforeseen complications

Whether this technology-led monitoring is helpful or harmful is a complex, multifaceted question. For every potential benefit of workplace monitoring - be it improved productivity or better employee safety - a new, unforeseen complication can arise.

"Implemented correctly, technology can be a very important tool for both employers and employees," argues Katherine Mayes, programme manager for cloud, data, analytics, and AI at techUK. She cited positives like helping employers avoid bias by basing managerial decisions on merit rather than subjective factors like whether or not they like a staff member. However, she adds that "the increased use of technologies like AI are raising a number of profound legal, social and ethical questions".

That's because using AI in the workplace to monitor staff isn't simply a matter of watching people do their jobs, akin to a traditional 'time and motion' study. AI systems do more than count time: they apply algorithms to draw conclusions by themselves, and it's this that can be a particular cause for concern. Imagine an AI monitoring system in the workplace that sees a dramatic drop in keyboard activity from one person at a particular time. Is that person being lazy, or is there something else going on?
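To make that concrete, here is a deliberately simplistic, hypothetical sketch - not based on any real product - of the kind of rule such a system might apply: flag any hour whose keystroke count falls well below that worker's recent average. The function name, window and threshold are illustrative assumptions.

```python
# Hypothetical sketch only: a naive "productivity" rule of the kind such
# systems might apply. It flags any hour whose keystroke count falls far
# below the worker's recent average - but it cannot tell whether the drop
# means slacking, a meeting, or deep thought away from the keyboard.
from statistics import mean

def flag_low_activity(hourly_keystrokes, window=8, threshold=0.3):
    """Return the indices of hours flagged as 'suspiciously quiet'."""
    flagged = []
    for i in range(window, len(hourly_keystrokes)):
        baseline = mean(hourly_keystrokes[i - window:i])
        if baseline and hourly_keystrokes[i] < threshold * baseline:
            flagged.append(i)  # the algorithm 'concludes' low productivity
    return flagged

# Example: a worker who spends two hours in a design discussion
counts = [520, 480, 610, 550, 590, 530, 20, 15, 560, 540]
print(flag_low_activity(counts, window=4))  # flags hours 6 and 7
```

The flagged hours look identical whether the person was slacking, sitting in a meeting, or thinking through a problem away from the keyboard - the system has drawn a conclusion without the context needed to justify it.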

Training and expectations

That may be a relatively simplistic example, but as Nick Maynard, senior analyst at Juniper Research, explains, "the difficult part is making sure that the data used to train the system is free of bias and gives a true reflection".

"This may represent a challenge when trying to explain how systems have flagged a lack of productivity, particularly when it comes to potential disciplinary issues," he adds.


Theo Knott, policy programmes manager for BCS, The Chartered Institute for IT, argues that "it is entirely feasible for an office worker to be away from the keyboard for an hour and be having productive conversations or doing work mentally".

The question then is whether AI can ever infer correctly the difference between what we might flippantly call 'thinking' and 'slacking'. Even if a worker is 'slacking', is it possible for an algorithm to determine whether that downtime is ultimately beneficial or harmful to productivity?

Taking breaks is not only important for our health, but it's also a great way of figuring out answers to complex tasks. Perhaps a chat about the movie you saw at the weekend around the water cooler is precisely the distraction a thorny problem requires.

The output quality of workplace monitoring systems will almost certainly improve over time as the data on which their algorithms are trained becomes more robust. However, for Knott, we should remain cautious about the rollout of such technology.

"It is likely that accuracy would improve rapidly as technology is improved and datasets become richer," he argues. "Whether the errors on the way to this point are worth it is questionable, but the key is ensuring that things are transparent, so that wrong decisions can be easily challenged."

Transparency matters

The General Data Protection Regulation (GDPR) sets the ground rules for the use of personal data in the workplace, making it clear that employees should know what data is being collected and why, as well as setting out requirements for data retention.

A key point here is maintaining transparency with the workers themselves. As Matt Creagh, employment rights officer for the Trades Union Congress, explains, "it's important to remember that working people have a right to privacy, and this right extends to the workplace." He continues: "Tracking and surveillance software should only be used with the agreement of a workplace union or the workforce."

And this isn't just a matter of law - it is one of good practice too. As Katherine Mayes points out, "employers have a responsibility to engage with staff on this debate and work together to carefully determine how AI can be used to support and empower the workforce. If businesses get this wrong, they risk undermining workplace morale which could lead to staff resignations."
