The Facebook news feed experiment: A despicable breach of user trust


My IT Pro musings usually reflect my opinion about something that happened during the previous week, not a couple of years ago.

So why have I been driven to respond to Facebook conducting a social experiment with unknowing users back in 2012? Well, like most people, I only became aware of it towards the end of last week.

It also stirs up some very real ethical concerns, not to mention privacy issues, and there's a lesson here that needs to be understood and acted upon by any enterprise that handles customer data or provides a web service.

The June 17 edition of the Proceedings of the National Academy of Sciences of the United States of America (PNAS) included the paper "Experimental evidence of massive-scale emotional contagion through social networks".

The 'significance' callout is all you need to read to grasp what this involved. It reads: "We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness."

The Facebook researchers call this 'emotional contagion' whereas I'd describe it as a despicable breach of user trust for no good reason.

Yep, you read that right. Facebook happily manipulated the emotions of nearly 700,000 users in order to see if happy or sad emotional states could be passed on in a contagious fashion.

Whether or not this kind of experiment is valid or necessary is beside the point. What's worth noting is that none of the 689,003 people involved knew they were being experimented upon, nor did they give their explicit consent.

This incredible invasion of privacy, and I would argue breach of ethical boundaries, took place over the course of a week in 2012. During that time, the news feeds of the people involved were biased towards either happy or sad postings, and the researchers noted whether the users' own posts became generally more positive or negative as a result.


Who cares if user emotions can be spread, infection-like, through social networks, other than advertisers, politicians and others with a vested interest in manipulating how we feel? Slippery slope is a phrase that springs to mind, but it's not even that which has got my goat the most. It's Facebook's response to being caught red-handed, dipping into the ethically questionable honeypot.

That response was, essentially, that every user agrees to the Terms of Service when joining Facebook, including the Data Use Policy, which states Facebook can use your information "for internal operations, including troubleshooting, data analysis, testing, research and service improvement."

So, it's not illegal as such, but it's morally and ethically corrupt, if you ask me. I couldn't care less that all the analysis was done by computer, meaning no human saw the posts in question. That defence just stinks of hand-washing, and it avoids the most important question: just how big do you have to be in order to not care about your customers?

This is where it gets a bit more complex, especially as I'm trying to compare this with the enterprise world, where a 'free service' is something of an anomaly. I suppose you could argue that, like advertising, it's a cost-benefit thing.

The customer chooses a no-cost option; the provider benefits by throwing adverts at them. However, there's a difference between marketing material, which is pretty much a given, and conducting large-scale experiments, which is not.

More to the point, assuming that customers won't care is a big mistake. Worse, thinking that you have the moral right to do anything your legally binding terms and conditions allow is an even bigger one.

I've seen a statement from Facebook, issued when it was questioned over the incident, claiming it does research to 'improve our services' and make content more relevant. This is, frankly, twaddle.

Is Facebook seriously suggesting that, because sad content prompts negative comments from users, only happy posts should be allowed? Of course not; it's stumbling around in the dark, trying to defend the indefensible.

A data scientist employed at Facebook, Adam D.I. Kramer, pretty much admitted as much in a post, saying: "In hindsight, the research benefits of the paper may not have justified all of this anxiety."

So here's the lesson to learn, folks: critical mass does not mean you can walk all over user privacy.

Even if your legal nerd says it's all above board, that does not make it right. User privacy is an expectation as much as it is legalese in an agreement nobody reads before signing. Facebook users would not expect their data to be used like this, and any organisation which respects its customer base should understand it.

Ignore my advice at your peril. Even if you are Facebook.

Davey Winder
