The CLAIMS (Causal language and strength of inference in academic and media articles shared in social media) study is a systematic review of the state of causal inference in health research as it reaches the consumer through social media. This site serves both as a public explainer of the study and a public access repository of the full protocol, review tool, dataset generated during the review process, analysis code, reviewer profiles, and results from the CLAIMS study. In addition, we will be discussing this study and related topics in the main blog section.
The full, published version of this study is in PLOS ONE, and can be found here.
Citation: Haber N, Smith ER, Moscoe E, Andrews K, Audy R, Bell W, et al. (2018) Causal language and strength of inference in academic and media articles shared in social media (CLAIMS): A systematic review. PLOS ONE 13(5): e0196346. https://doi.org/10.1371/journal.pone.0196346
Please contact Noah Haber (nhaber@unc.edu) for more information.
Study summary
What is the CLAIMS study?

The CLAIMS study focuses on one of the most important aspects of health research: causal inference. Causal inference is the difference between being able to say “people who drink more coffee live longer on average” vs. “changing the amount of coffee you drink will change your life expectancy.” While they sound similar, the former might be true for reasons that have little to do with drinking coffee. If a headline says that “a new study finds that X is linked to Y,” we are usually (but not always) interested in knowing whether it shows that changing X causes a change in Y. Weak causal evidence does not mean that a causal relationship doesn’t exist, just that the study methods, data, and results do not sufficiently eliminate other explanations.

We wanted to find out how strong the causal inference was among scientific articles likely to be read by consumers, how much they were overstated (if at all), and whether the most-shared media articles about them were accurate. To do that, we first identified the 50 studies about some exposure vs. a health-related outcome that were most likely to be consumed in social media in 2015, using social media sharing statistics of internet news media articles. We then built a systematic process to estimate strength of causal inference and language, and gathered a team of expert reviewers to review the academic and media articles in our sample.

We found weak causal inference and overstatement across the board. If you clicked a link in 2015 that said a new study found that X is linked to a health-related Y, the underlying causal inference was likely to be relatively weak and slightly overstated in the academic study, and further overstated and inaccurately described in the media articles in our sample. But before you stop there, it’s important to understand how we reached these conclusions in order to see what these results do and don’t mean.

What did we do?

Fifty seems like a small number, but those 50 studies and 64 media articles represented over half of all shares of health research shared in social media in 2015. That’s a lot.

Reviewers

Our review process relied heavily on having qualified reviewers. We found 21 volunteer reviewers from 6 institutions and several fields of statistical health science to help us out. Most of our reviewers were doctoral students in epidemiology or econometrics at major public health schools. You can check out the full CLAIMS team here. Every study was reviewed by three randomly selected reviewers from our pool: two independent reviewers and one to arbitrate and give the final rating for each article.

Review tool

Systematically estimating strength of causal inference is hard. We started by making a first-of-its-kind review tool for this study, having reviewers assess various aspects of studies and articles before assigning a summary strength rating. The rating scale is based on the idea of judging a study both on how useful it is for practice and against a hypothetical most useful “ideal” study (which may be impossible to carry out). In the end, our ratings are subjective, but they are the subjective opinions of experts who analyzed these studies in detail using the same criteria for judgement. The reviewers then examined the language used in the academic article for signs of overstatement of results, and reviewed the media article(s) for accuracy.

What did we find?
When looking at the media articles, we found that 44% of them used causal language that was stronger than that in the academic articles, keeping in mind that many of those academic articles were overstated to start with. Furthermore, 58% of the media articles contained at least one substantial inaccuracy about the study. You can check out more of our results in the data explorer section of this site.

What does it mean?

Unfortunately, our results are complex, and likely to be oversimplified into certain narratives. Our results give us some hints as to places we might look for more specific evidence on mechanisms which could have caused this result, but they aren’t really able to lay “blame” on anyone in particular. To help with interpretation, here are a few examples of good and bad interpretations of our study. We know that it’s hard to craft an accurate headline with a small number of characters, so we want to help, in 280 characters or fewer.

Reasonably accurate, bite-sized headlines:

“Expert reviewers find that the studies and related articles shared in social media linking something with a health result are likely to be overstated and/or inaccurate.”

“Review study finds that the health research we see in our social media feeds is likely to be overstated and/or inaccurate.”

“Expert review panel study finds that the headlines in your social media feed that say that a study links something with a health outcome are likely to be overstated and inaccurate.”

“Headlines in your social media feed that say ‘X is linked to some health outcome’ are unlikely to represent strong evidence that X actually causes the outcome.”

Maybe a slightly cheekier one:

“Correlation does not imply causation, unless you are the language describing results of studies shared in social media about the links between things and health outcomes.”

There are a few key points that are emphasized in the headlines above. Firstly, they specifically point out that these are items that are picked up in social media, and not representative of all of academia or media. Secondly, they emphasize what this study is: a review from a panel of skilled reviewers. Most importantly, these headlines avoid certain tempting misinterpretations of this study.

Inaccurate headlines:

“Academia is producing a lot of health studies that are weak, wrong, and/or exaggerated.”

“The media inaccurately reports on scientific findings.”

“The effect of social media on science reporting.”

“Social media is the reason that science can’t have good things.”

The problem here is that this study was not designed to ask these questions or show these results. The first two are improved quite a bit if you add “that you are likely to see in social media,” but they are still not great. Our results might hint that some of this is going on, but we really need more study to say so. The last two are inaccurate because we don’t actually look at the causal effect (see what we did there?) of social media on studies or media.

If you want to talk about this study in public, whether you’re writing an article or tweeting, we strongly recommend contacting us first. We’re happy to help make sure what you write is an accurate description of our work.

What happens now?

We are working on a ton of new studies to get a handle on exactly WHY things are the way they are, to figure out ways to efficiently measure the quality of this work, and to figure out how best to get that information out to both the public and scientific communities.
We have some ideas (read: LOTS of ideas), but more study is desperately needed before we can really tackle these problems. Stay tuned; lots more is coming.

The pathway from when evidence gets created to when we see it is long, containing many steps and processes which can lead to weak inference, overstatement, and/or misinformation by the point of public consumption. All of these processes (publication bias, clickbait media articles, social media selection, and any number of other issues) add up by the time research reaches you, the consumer. While we know small pieces of the ways things go wrong, it’s hard to see the whole picture. The first steps to figuring out how we can best intervene are to 1) examine what the state of the world looks like by the time research reaches consumers, and 2) experiment with methods to measure those problems.
To figure out what was most shared in 2015, we partnered with NewsWhip, a social media analytics company. We searched their dataset to identify which health studies were most shared on Facebook and Twitter in 2015. We then took that list and picked out the articles that were about single scientific studies of the form X vs. Y in human populations, continuing down the list from most to least popular until we identified 50 unique academic studies from 64 media articles.
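To make that selection pass concrete, here is a minimal sketch of how such a procedure could be scripted. It is not the actual CLAIMS selection code (the real screening followed the protocol and involved human judgement); the MediaArticle fields, the eligibility flags, and the select_sample function are illustrative assumptions that simply mirror the description above.

```python
from dataclasses import dataclass
from typing import List, Optional, Set, Tuple

@dataclass
class MediaArticle:
    url: str
    shares: int                  # combined Facebook/Twitter share count
    study_doi: Optional[str]     # DOI of the single academic study it reports on, if identifiable
    single_study: bool           # reports on exactly one scientific study
    exposure_outcome: bool       # that study examines an exposure (X) vs. a health outcome (Y)
    human_population: bool       # that study was conducted in a human population

def select_sample(articles: List[MediaArticle],
                  target_studies: int = 50) -> Tuple[Set[str], List[MediaArticle]]:
    """Walk a share-ranked list of media articles, keeping eligible ones
    until the target number of unique academic studies is reached."""
    kept_articles: List[MediaArticle] = []
    kept_studies: Set[str] = set()
    for art in sorted(articles, key=lambda a: a.shares, reverse=True):  # most shared first
        eligible = (art.study_doi is not None and art.single_study
                    and art.exposure_outcome and art.human_population)
        if not eligible:
            continue
        kept_studies.add(art.study_doi)
        kept_articles.append(art)
        if len(kept_studies) >= target_studies:
            break  # stop once enough unique underlying studies are identified
    return kept_studies, kept_articles
```

The design point mirrored here is the stopping rule: the list is walked from most to least shared, and the pass ends once 50 unique underlying studies have been identified, which is how 64 media articles can map onto 50 academic studies.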