The media was abuzz the other week with news of a secret study Facebook conducted in 2012, in which it rearranged the content people saw in their news feeds to see if it had an effect on their emotions.
Half the subjects saw slightly fewer of their friends’ positive posts than usual, while the other half saw slightly fewer of the negative ones.
The result was a small but measurable effect: seeing fewer sad posts makes us slightly happier, while seeing fewer happy posts makes us slightly sadder (at least as assessed by the content of our own posts).
While it’s true there was a study, and the 680,000 users involved in it weren’t (and presumably won’t be) contacted, it wasn’t exactly secret. In fact, Facebook data scientist Adam Kramer gushed about it in a 2012 interview that Facebook published at the time. And the study’s publication in the Proceedings of the National Academy of Sciences suggests that at least the data science team was pretty proud of it.
And I’m with them on this one.
As Kramer has pointed out (and this is a drum I’ve banged for a couple of years now), in 10 years Facebook has almost accidentally evolved from a college hookup site to the single greatest record of human behaviour in the history of our species.
There are two ways we could react to this. The first, and this seems to have dominated online discussion so far, is to use the word “creepy” in every second sentence and assume that because it’s a Big American Company, Facebook’s intentions can only be evil.
The more positive approach is to look for the opportunity Facebook’s data represents. With one human in seven using the platform, there’s never been a greater opportunity to understand what makes us tick. The potential for positive outcomes, especially in mental health, is enormous. The greatest harm that could come from the emotional contagion study, I think, will be if Facebook were to put PR ahead of principle and shy away from properly exploring the resource it’s created.
The emotional contagion study wasn’t the only case of Facebook fiddling the feed (setting aside, of course, the countless and constant manipulations it makes in order to, among other things, sell us hamburgers). Also in 2012, it reportedly locked thousands of users out of their accounts to test possible anti-fraud measures. (Tellingly, they all jumped through whatever hoops were put in place to prove their identities and unlock their accounts.)
So why the howls about this particular study? I think it comes down to three things.
Firstly, while advertisers and media owners try very hard every day to manipulate our emotions, Facebook hit a home run. It demonstrated in a relatively robust way (although the methodology has attracted some criticism) that it really can change the way we feel.
Secondly, we cling to the belief that we “own” our Facebook pages and should therefore be in complete control of the content that appears on them.
In reality, Facebook is a free service supported by advertising. That makes us the product.
Finally, and this is the kicker, we’re frustrated because no matter how gross an invasion of privacy some consider this to be, no matter how strong the language we use in criticising Facebook’s actions and no matter how potty-mouthed the names we call Mark Zuckerberg and all his little data demons, we know that not even this, or the next outrage or even the one after that, will convince us to delete our Facebook accounts.