A tempting source of data, social media is uncharted ethical territory for medical research


Earlier this year, researchers working for Facebook and Cornell University published a study in the Proceedings of the National Academy of Sciences detailing how they systematically manipulated the news feeds of more than 689,000 Facebook users for one week in 2012. They tweaked Facebook’s algorithm to show some users slightly more posts with a “positive” tone (posts that used words deemed positive by software filters), while others saw slightly more posts with a “negative” tone. They were interested in seeing how these different ratios of positive or negative content affected those users’ own output. In the end, people who saw more positive posts were more likely to use positive language themselves, and vice versa.
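The filtering step is conceptually simple: a post counts as "positive" or "negative" if it contains words from curated sentiment word lists. The sketch below is purely illustrative, with tiny placeholder word lists; the actual study relied on professionally curated dictionaries and Facebook's internal ranking systems, not this code.

```python
# Illustrative sketch of word-list sentiment tagging; not the study's actual method.
# The word lists here are small placeholders for much larger curated dictionaries.

POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "terrible", "hate", "awful", "angry"}

def tag_post(text: str) -> str:
    """Label a post 'positive', 'negative', or 'neutral' by word-list membership."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    has_pos = bool(words & POSITIVE_WORDS)
    has_neg = bool(words & NEGATIVE_WORDS)
    if has_pos and not has_neg:
        return "positive"
    if has_neg and not has_pos:
        return "negative"
    return "neutral"

# Posts tagged this way could then be shown more or less often depending on
# which experimental group a user was assigned to.
print(tag_post("Had a wonderful day at the lake!"))   # positive
print(tag_post("Traffic was awful this morning."))    # negative
```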

On its surface, this kind of experimentation is hardly unusual. Facebook manipulates its news feed all the time to change which status updates, photos, and—of course—ads its users see. But this particular study struck a nerve because it deliberately manipulated users’ emotions (as opposed to, say, which brand of shoes they might want to buy), and because none of those nearly 700,000 users knew they were part of a grand social media experiment in the first place.

Whether what Facebook’s data scientists did was legal, and whether it stayed within the bounds of ethical treatment of human research subjects, is very much up for debate (Michelle Meyer, a bioethicist at the Union Graduate College-Icahn School of Medicine at Mount Sinai, wrote an exhaustive explainer for Wired). This experiment may be a special case because the researchers had direct access to manipulate Facebook’s algorithms. But the millions of status updates, photos and videos we post online every day, publicly and willingly, make a tempting trove for other researchers looking to mine social media data.

Some of the more successful uses of social media data for scientific research have applied network theory to understand how information flows through social networks and how members influence each other. Researchers in Italy have used Twitter to track disease epidemics (an improvement on Google’s famous Flu Trends, which has repeatedly overestimated flu activity). University of Chicago Medicine’s own John Schneider, MD, has long studied how data from cell phones can be used to map social networks and identify key influencers to stop the spread of HIV among high-risk communities in India.
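To give a sense of what "identifying key influencers" can look like computationally, here is a minimal sketch that ranks people in a toy contact network by degree centrality. The networkx library and the hypothetical edge list are assumptions for illustration only; they do not reproduce the methods used in the studies above.

```python
# Minimal sketch: ranking candidate "key influencers" in a toy contact network.
# Illustrative only; not the actual methodology of the studies described above.
import networkx as nx

# Hypothetical edge list: each pair represents a reported contact between two people.
contacts = [
    ("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"),
    ("D", "E"), ("E", "F"), ("F", "G"), ("D", "G"),
]

G = nx.Graph(contacts)

# Degree centrality: the fraction of other people each person is directly connected to.
# Those with the highest scores are candidates for targeted outreach.
centrality = nx.degree_centrality(G)
ranked = sorted(centrality.items(), key=lambda item: item[1], reverse=True)

for person, score in ranked[:3]:
    print(f"{person}: {score:.2f}")
```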

Social media is also a useful tool to help researchers recruit and communicate with study subjects. But unless these participants explicitly agree to be contacted on social media, or give consent for researchers to use their public data, these practices raise ethical issues. In a recent article in the American Journal of Bioethics, Jeanne Farnan, MD, an associate professor of medicine at the Pritzker School of Medicine, addresses this murkier area of research.

Jeanne Farnan, MD, MPHE

“Social media platforms are powerful tools to reach and advocate for patients. They represent a potential wealth of knowledge for discovery and may represent the future of ‘big data’ and scholarly work,” she writes. “However, before indiscriminately tapping into this data, the ramifications and the appropriateness of such a process must first be described, in order to protect patients and their privacy, the doctor–patient relationship, and also the integrity of the scholarly work.”

Farnan, who studies the relationship between social media and professionalism in medicine and advises medical students on proper use of social media during their training, says collecting data from social media is problematic because it’s difficult to verify users’ identities and gauge their intent. For example, Facebook gives users tools to restrict who sees their posts, but many users don’t bother to change the default settings. They may assume what they’re posting will be seen only by their immediate social circle, and might never agree to its being used for research.

On the other hand, people behave differently online than they might in real life, posting things they don’t necessarily believe because they think no one is watching, or to get a rise out of others (e.g. “trolling”).

“It’s the Hawthorne Effect in reverse. They know they’re not being watched, so is that a more true representation of what they really feel, or are they saying what they think people want them to say?” Farnan said. “You can’t guarantee someone would demonstrate those behaviors had they known they were being watched, or that they knew they were sharing this outside of their usual group.”

As for study recruitment and retention, Farnan said social media is an obvious choice because it has become a primary method of communication for so many people. As long as researchers are up front about how they will use social media to communicate with study participants, it’s a no-brainer. But because tools like Facebook, Twitter and peer-to-peer messaging apps are still relatively new, this kind of consent often isn’t explicitly granted. Using social media to track down research subjects later, or to check up on patients to make sure they’re following a doctor’s orders, can damage the public’s trust in the medical community.

“This technology is like a tumor outgrowing its blood supply. It’s growing so rapidly, policy cannot keep up,” Farnan said. “It’s so difficult for regulatory agencies and institutional review boards to grasp the potential downstream complications of accessing social media data of patients. But I do think it’s a discussion that needs to happen, because there are amazing and interesting things you can do with it.”
