If Facebook COO Sheryl Sandberg thought she could put a lid on the huge backlash against the social networking site’s psychological experiments on users, she miscalculated. Her damage-control statement during a meeting with small businesses in New Delhi this week had the opposite effect: it poured fuel on the worldwide outrage over the secret tests, and raised anxiety over how various data intelligence systems may be compromising our privacy.
What Facebook did a couple of years ago was manipulate newsfeeds to see how users would react to depressing or happy posts. Users had no clue that the stream of dark posts coming at them was the result of algorithmic manipulation. Then, if their own posts turned gloomy – bingo! – the Facebook researchers would know that their programmed depressing newsfeed had left their human guinea pigs feeling low. These creepy findings were published in an academic journal, Proceedings of the National Academy of Sciences (PNAS):
Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness… When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred.
Newsfeed manipulation is nothing new. Facebook uses algorithms to push posts to the top according to what it considers most relevant to users. The problem here is that it rigged the feed to alter the emotional states of users, without telling them or asking whether they would go along with it. PNAS has now put out an “editorial expression of concern” over the June 17 publication of the Facebook research. Its editor-in-chief Inder M. Verma says:
The collection of the data by Facebook may have involved practices that were not fully consistent with the principles of obtaining informed consent or allowing participants to opt out.
If that sounds lame, you have to read what Sheryl Sandberg had to say on the subject. Here is her public comment in New Delhi, as quoted by The Wall Street Journal:
This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated. And for that communication we apologize. We never meant to upset you.
Well, this ‘apology’ upset a lot of people even more, because she expressed no regret over the experiment itself. She was sorry only about the way it was communicated.
What’s ‘informed consent’?
Social media and academic circles alike have been abuzz ever since, with many pointing out that the issue here wasn’t communication but the way the study was conducted. Chiefly, the question is what constitutes ‘informed consent’ when experiments are carried out on human subjects.
Does clicking on the ‘Accept’ button of Facebook’s data use policy, which it tweaks from time to time, constitute ‘informed consent’? David Gorski, a scientist and researcher, calls the very idea “risible”. He writes in a post on sciencebasedmedicine.org:
The Facebook Data Use Policy is more like a general consent than informed consent, similar to the sort of consent form a patient signs before being admitted to the hospital… (But) ‘consents to treat’ or ‘consents for admission to the hospital’ are not consents for biomedical research.
The larger issue here is where we draw the ethical line in cyberspace. Tracking and analyzing user behavior on social and mobile networks has become the order of the day for adtech and ecommerce firms. The line between such data collection and an encroachment on our privacy is a very thin one – which is exactly what the UK’s Information Commissioner, who has opened an inquiry into the Facebook experiment, will be looking into. Similar actions may follow in other countries if this doesn’t blow over.