Your thoughts and desires are no longer your own

As societal nudges edge closer to super-nudges, is our “free” society starting to lose control of its own decision making?

We live in paranoid times, and at least part of that paranoia is being provoked by advances in technology. New techniques of surveillance and prediction cut two ways: they can be used to prevent crime and to predict illness, but they can also be abused for social control and political repression. Which of these one sees as more important is becoming a matter of high controversy. Recent protests in Hong Kong highlight the way that sophisticated facial recognition tech, when combined with CCTV built into special lampposts, can enable a state to track and arrest individuals at will.

But the potential problems go way further than this, which is merely an extension of current law-enforcement technology. Huge advances in AI and deep learning are making it possible to refine the more subtle means of social control often referred to as “nudging”. This means getting people to do what you want them to do, or what is deemed good for them, not by direct coercion but by clever choice of defaults that exploit people’s natural biases and laziness (both of which we understand better than before, thanks to the psychological research of Daniel Kahneman and Amos Tversky). 

The arguments for and against nudging involve subtle philosophical principles, which I’ll try to explain as painlessly as possible. Getting people to do “what’s good for them” raises several questions: who decides what’s good? Is their decision correct? Even if it is, do we have the right to impose it? 

Liberal democracy – which, compared with Russia or China, is what we still just about have – depends upon citizens being capable of making free decisions about matters important to the conduct of their own lives. But what if advertising, addiction or those intrinsic defects of human reasoning that Kahneman uncovered so distort their reckoning as to make them no longer meaningfully free? What if they're behaving in ways contrary to their own expressed interests and injurious to their health? Examples of interventions against such behaviours, and the varying success with which we've deployed them, include compulsory seatbelts in cars (success), motorbike crash helmets (success), smoking bans (partial success) and US gun control (total failure).

Such control is called “paternalism”, and some degree of it is necessary to the operation of the state in complex modern societies, wherever the stakes are sufficiently high (as with smoking) and the costs of imposition, in both money and offended freedom, are sufficiently low. But there are libertarian critics who reject any sort of paternalism at all, while an in-between position, “libertarian paternalism”, claims that the state has no right to impose but may only nudge people toward correct decisions. An example might be opting in versus opting out of various kinds of agreement, from mobile phone contracts to warranties, from mortgages to privacy agreements. People are lazy and will usually go with the default option, a careful choice of which can nudge rather than compel them to the desired decision. 

Advances in AI are amplifying the opportunities for nudging to a paranoia-inducing degree. The nastiest thing I saw at the recent AI conference was an app that reads shoppers' emotional states using facial analysis and then raises or lowers the price of items offered to them on the fly. Or how about CTRL-labs' technology, which non-invasively reads your intention to move a cursor (Facebook just bought the firm)? Since vocal cords are muscles too, that non-invasive approach might be extended with even deeper learning to predict your speech intentions, the voice in your head, your thoughts…

I avoid both extremes in such arguments about paternalism. I do believe that the climate crisis is real and that we'll need to modify human behaviour a lot in order to survive, so any help will be useful. On the other hand, I was once an editor at Oz magazine and something of a libertarian rabble-rouser in the 1960s. In a recent Guardian interview, the acerbic comedy writer Chris Morris (Brass Eye, Four Lions) described meeting an AA man who showed him the monitoring kit in his van that recorded his driving habits. Morris asked if he felt that was "creepy", but the man replied: "Not really. My daughter's just passed her driving test and I've got half-price insurance for her. A black box recorder in her car and a camera on the dashboard measure exactly how she drives and her facial movements. As long as she stays within the parameters set by the insurance company, her premium stays low." This sort of super-nudge comes uncomfortably close to China's punitive social credit system: Morris called it a "Skinner Box", after the American behaviourist B.F. Skinner, who used one to condition his rats.
