Breaking Social Norms, a Q&A with Yoav Gilad

Last December, a study in the Proceedings of the National Academy of Sciences presented evidence for an extraordinary finding – that human and mouse tissues were genetically more different than previously thought. The authors, members of the Mouse ENCODE consortium, concluded that gene expression in a mouse heart, for example, is more similar to that in a mouse kidney than to that in a human heart. This ran counter to previous studies, and as mice are a near-ubiquitous tool in human genetics research, the result was field-shaking. It also left many scientists puzzled.

Yoav Gilad, PhD, professor of human genetics

Yoav Gilad, PhD, professor of human genetics, was one such puzzled scientist. Together with postdoctoral researcher Orna Mizrahi-Man, PhD, Gilad spent three months reanalyzing raw data from the original study. According to their reanalysis, the ENCODE authors had not accounted for batch effects – errors that arise from the way different batches of samples are processed during experiments. When Gilad and Mizrahi-Man removed the batch effects, the conclusion of the study reversed.
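To see how a batch effect can flip a conclusion like this, consider a toy simulation (illustrative only – these are invented numbers, not the ENCODE data or Gilad's actual pipeline). If all human samples are processed in one batch and all mouse samples in another, a strong shared batch artifact makes same-batch samples look alike; subtracting each batch's per-gene mean reveals the tissue signal underneath:

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes = 500

# Illustrative per-gene effects (simulated, not real expression data)
tissue_effect = {t: rng.normal(0, 1.0, n_genes) for t in ("heart", "kidney")}
batch_effect = {b: rng.normal(0, 3.0, n_genes) for b in ("human_run", "mouse_run")}

def sample(tissue, batch):
    # expression = tissue signal + batch artifact + measurement noise
    return tissue_effect[tissue] + batch_effect[batch] + rng.normal(0, 0.5, n_genes)

# All human samples processed in one batch, all mouse samples in another,
# so species is completely confounded with batch
human = {t: sample(t, "human_run") for t in ("heart", "kidney")}
mouse = {t: sample(t, "mouse_run") for t in ("heart", "kidney")}

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Confounded data: mouse heart correlates more with mouse kidney
# (same batch) than with human heart (same tissue)
print(corr(mouse["heart"], mouse["kidney"]) > corr(mouse["heart"], human["heart"]))

def remove_batch(batch_samples):
    # Subtract the batch's per-gene mean, removing the shared batch artifact
    mean = np.mean(list(batch_samples.values()), axis=0)
    return {t: x - mean for t, x in batch_samples.items()}

human_c, mouse_c = remove_batch(human), remove_batch(mouse)

# Corrected data: mouse heart now correlates more with human heart
print(corr(mouse_c["heart"], human_c["heart"]) > corr(mouse_c["heart"], mouse_c["kidney"]))
```

Because the batch artifact here is made larger than the tissue signal, the first comparison comes out backwards; after the (deliberately crude) per-batch centering, tissue identity dominates again.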

Instead of taking a traditional route to disseminate these critiques, Gilad took to Twitter. On April 28, he tweeted the main figures from his reanalysis. It caused an immediate sensation. A few weeks later in May, Gilad and Mizrahi-Man published their full results online on the open science publication platform F1000Research. The scientific debate is recapped in news articles published in Nature and The Scientist, and continues in full public view.

However, there is another debate. Is Twitter the right forum for scientific critique? ScienceLife spoke with Gilad about his decision to “break social norms” by taking science to social media.

Why did you decide to put your results on Twitter?

Yoav Gilad: The decision to go on Twitter was actually pretty impulsive. I’ve been involved in efforts to offer criticism or correction for papers through the typical channels – communicating with the original author, submitting the manuscript for review by the same journal and submitting it to another journal if rejected. This process is very, very long, and in most cases the criticisms are either ignored or don’t receive nearly as much attention as the original report. What happens is people just go and read the paper, think about it, and then there’s usually not a lot of conversation after that. Often, years later, you find the original report still resonates more with people than the correction or criticism.

Orna and I were still working on the paper, and we had this result. I joined Twitter last fall because I felt like I was missing a lot with respect to the scientific discussions happening between my colleagues – not just ‘hey this paper was published,’ but also some deeper thoughts about the science. So I thought it’d be a nice experiment to see if we could generate some attention. I put out the main figures with a very neutral statement. The figures spoke for themselves.


What was the response like?

YG: The response was huge. There were 30 or 40 thousand people who viewed the initial tweet. The vast majority were scientists, including students, postdocs and faculty all over the world. Immediately people asked for more details. They tried to understand what we did and how, and started discussing the merit of our headline result. I answered that there’s a preprint on the way, and that it’s probably more useful to wait for that because it’s hard to communicate more than headlines over Twitter.

In the meantime, it was widely shared, probably because the figures were pretty self-explanatory. For people who understand that type of analysis, if the detective work to reconstruct the original study design holds up, the rest kind of follows. And also partially because there’s a measure of trust. If you tweet the headline, you believe in your result.

What was your response to the response?

YG: We thought that we would be able to submit the paper within a few days after we teased the figures, but the response got us a little bit anxious. We took a bit more time and sent the paper to a few expert colleagues for additional review before we submitted the paper. After that, it was obvious that we would continue the conversation on Twitter.

Were you happy with how your Tweets were received?

YG: I think that this was an example of the great advantage of social media. It just immediately took off. When the paper was put online, it was viewed nearly 10 thousand times in one week. That’s a number that you get in a couple of months, for my papers at least. So it certainly received more attention than most. Somebody described it as the carriage ahead of the horses. It’s kind of true. But if we did it the other way around, I don’t think it would have received the same response. We’ll never know now, but that was my gut feeling.

Were there criticisms to your approach?

YG: There wasn’t when we first tweeted the figures. When the paper came out and people could see more of the details, we retweeted again. The PI on the PNAS paper made a comment about breaking social norms, but I have not seen anything else. I’ve seen a lot of support for our choice, but it’s a very biased sample [laughs]. The people on Twitter supported the discussion on Twitter. Whether there’s a majority of people not on Twitter that thought this is or should be unacceptable, I don’t know because they’re not on Twitter.

A lot of people abstained from the online conversation, didn’t they?

YG: There were people who emailed me comments and suggestions or just words of encouragement, but explicitly asked me to keep them anonymous. While I appreciated those suggestions and especially the support, I think it reflects badly on science that there’s this sense that there might be reprisal or harm if you express criticism – especially of an ENCODE study that’s led by leaders in our field. The other thing was that Mouse ENCODE is a large study, with a few dozen investigators. I know that some of them are very active on Twitter and usually participate in conversations. They’ll ask a question, they’ll offer an opinion, they’ll dig up a reference. But they did not partake in this particular discussion. I thought that was a little awkward. What is the motivation to stay silent here? Is it a cost-benefit analysis? It seems like it’s missing the point.

I tweeted about this and a few people replied that Twitter storms can become about the person and not about the science. I think that’s a very reasonable answer. But I do want to say that we managed to keep this discussion very much on point. It was a discussion that stayed away from the people, and focused on the issue. And I’ve seen many other discussions on Twitter stay on point between scientists.

Do you think about possible reprisal?

YG: No, and I don’t even understand why this question comes up. We are scientists, and anything that gets us faster to the truth should be encouraged. The Mouse ENCODE investigators are all prominent scientists. Nothing I know about them suggests that they are people who will take this personally. We just found a mistake and we’re talking about it. Nobody doctored their data, nobody did anything misleading on purpose, nobody did anything that we’d consider unacceptable in science.

I certainly don’t expect all my papers to be right, and I don’t expect in the future that when someone finds a mistake in my papers they’ll be afraid to put it forward. If you have 100 papers and no corrections, you’re not a scientist. Because either you don’t care to correct, or you don’t believe you’ve ever made a mistake. Either way, you can’t be a scientist. Corrections shouldn’t be bad for your career. It’s actually the other way around: it means you care. You can’t put out 100 papers and make no mistakes.


Are you afraid you could have made a mistake?

YG: I always have that sense. The moment a paper leaves your office, you’re apprehensive. In this case, I felt reasonably confident that enough eyes had seen it and that we had done everything we could to ensure accuracy. There’s no assurance that as we speak, someone’s finding some error in our paper, but we have done everything we know how to do to minimize that probability. This has been reviewed more carefully than anything I’ve ever published. More people are reading it with the explicit intent to figure out the details than probably any of my other publications. So I would say this has been seen and effectively reviewed by my peers already.

How did these issues make it past peer review in the first place?

YG: Most of the time, reviewers spend a couple of hours, maybe a day or two, to look for obvious errors before they send it off on its merry way. We took three months to reach our conclusions. I’m not a supporter of the belief that papers that go through peer review are always correct, but I also don’t blame reviewers when things are missed. It takes a lot of effort to be able to scrutinize work to discover issues like this. I think post-publication review by people who have a vested interest is the best solution – where readers who are truly interested in the work invest the time to figure it out.

It’s working in this case, isn’t it? Hasn’t the online discussion helped uncover more errors?

YG: Yes, we and other researchers have described additional problems in the F1000 discussion section. These have to do with the way the original tissues were prepared for RNA extraction. All the human samples are from tissues that were extracted at autopsy, from adults of a wide range of ages and with varying causes of death. The mouse samples were all fresh tissues from mice that were sacrificed at eight to 10 weeks old. We expect human RNA to be degraded and mouse RNA to be fresh. In addition, there are sex discrepancies for many of the tissues. Female mouse tissue is compared to male human tissue, and vice versa. There’s sexual dimorphism in gene regulation, so this adds another confounder.

Will you take science to social media again in the future?

YG: I thought the experience was positive. I now routinely tweet about my own papers. But I don’t have any plans on taking on any more criticism. In this particular case, I thought it needed to be done and I was very interested. But this is not going to become a standard procedure here [laughs].


About Kevin Jiang (147 Articles)
Kevin Jiang is a Science Writer and Media Relations Specialist at the University of Chicago Medicine. He focuses on neuroscience and neurosurgery, orthopedics, psychology, genetics, biology, evolution, biomedical and basic science research.