Just a quick announcement: Disappointed and concerned about the state of statistical health science? So are we. We made our mega project plans based on the CLAIMS study public. It’s all available here:
Association is a powerful research tool when paired with the right questions and the right methods, but not for the kinds of questions for which you need causal inference. Stranger yet, the culture of “don’t use the word cause” is so strong that there are even papers which find really strong evidence of causality, but stay on the “conservative” side and just say association.
In CLAIMS, reviewers found that 34% of authors in our sample used stronger technical causal language than was appropriate given the methods. Most academic authors follow the technical rules, if not their spirit. What about the remaining 66%? How many of those implied causality through means other than “technical” language? Can we reasonably infer how these studies might have misled through sloppy methods, hints, nudges, and reasonable misinterpretation?
If technical language is an unreliable method of determining whether the study implied causality, how can we infer those implications? I have a few ideas below for discussion, but would LOVE to hear your thoughts on where I get this wrong, better explanations, general disagreements, etc.
Go straight to the discussion section and read what the authors say people should do or change based on their results. In almost all cases where the authors recommend changing the main exposure in order to change the outcome of interest, they implied causality. A study about the association between coffee and cancer that concludes that you should drink more (or less) coffee to avoid cancer, or that even just says coffee is “safe” to drink, relies on estimating the causal effect of coffee on cancer. If their methods weren’t up to the task, the study is misleading.
In general, if the study was truly useful for association only, changing the exposure of interest will usually not be the main action implication. If the question of interest is disparities in outcomes between groups (such as race), the authors would, in general, not suggest that people switch groups. Similarly, finding associations to better target interventions doesn’t imply that we need to change the exposure, but rather that the exposure is a useful metric for identifying targets of interventions.
This can get tricky, particularly when the exposure of interest is a proxy for changing something else that is harder to measure, such as laws serving as a proxy for the causal impact of the political and cultural circumstances that bring about a change in the law, plus the impact of the law itself. As usual, there is no simple rule or formula to follow.
Question of interest
In the great words of Randall Munroe of XKCD (hidden in the mouseover): “Correlation doesn’t imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing ‘look over there.'”
Some associations inherently imply causality. Virtually every study in which individual consumption of something is the exposure and some health effect is the outcome of interest implies causality. One way in which the association might inherently imply causation is simply the lack of useful alternative interpretations. For example, there is little plausible reason why merely studying the association between coffee and cancer is useful for anything except when you have identified causal effects of coffee on cancer.
I find it helpful to try to think about plausible ways that the association between X and Y can be useful, first in my head and then from those the authors describe. For each item, I strike out the ones that require causality to be inferred. If I have no items remaining and/or if the remaining items seem implausible, that may hint that the question of interest has inherent causal implications. Even then, there are two caveats: 1) my inability to come up with a good non-causal use does not mean one does not exist, and 2) even if one does exist, the association could still inherently imply causation.
Look for language in the grey zone
The list of words that are taboo because they mean causality is short, consisting mainly of “cause” and “impact.” The list of words in the grey zone is much longer, and not always obvious. My personal favorite is the word “effect.” For some reason, the phrase “the effect of X on Y” is more often considered technically equivalent to “the association between X and Y” than “the causal impact of changing X on Y.” While “effect” is sometimes used purely as shorthand, I find that it is more often used when authors want to imply causality but can’t say it. Curiously, “confounding”/”confounders” is not on the causal language taboo list, even though it implies causation by definition.
Some statistical methods and data scenarios strongly imply causality. In many cases, this is simply because the methods attempt to eliminate alternative explanations, such as when authors control for dozens of “confounding” covariates. Some methods are developed specifically to estimate causal effects, and have limited application outside of causal inference.
This one is unfortunately in the statistical/causal inference experts only zone, since it requires a fairly deep understanding of what the statistics actually do and assume to tease out implications of causality.
Intent vs. implication
It is important to understand that the study authors making these implications aren’t generally bad people. In many cases, they may genuinely not have intended to imply causality where it was inappropriate. In other cases, they may have been led to certain uses of language by reviewers, editors, co-authors, or media writers. Alternatively, the most misleading articles may simply be the ones most likely to be published and written about, and therefore most likely to be seen.
However, as always, some of the blame and responsibility lies with us, the researchers. We should be careful about generating studies where causation is implied, regardless of what the technical dogma tells us is right and wrong. We should learn to be more honest about what we are studying, embrace the limitations of science and statistics, and fight to create systems that allow us to do so.
At the time of this writing, five unique media articles, four unique press releases, and a few copies in different outlets have been published about the CLAIMS study. We have them all listed here and at the bottom of this page, which we will be updating on a continuous basis.
So, how (in our opinion) did the media do?
TL;DR: The handful of media outlets which covered CLAIMS did a pretty decent job.
1) Coverage was limited, and mostly from small outlets
No huge surprise there. CLAIMS is a bit of a niche study, albeit one designed to be the foundation of studies which are not-so-niche. It involves academia, media, and social media, but without providing a clear narrative of what we are supposed to do about it. The study caught on a bit in smaller outlets, but none of the giant mainstream ones, which is roughly what we expected to see, noting that very few research articles receive any coverage at all. The largest media outlet that covered our article is probably HealthNewsReview.org, which mostly covers and critiques news media coverage of health studies. Limited exposure is almost certainly for the best given the slightly complicated nature of our results, but it definitely limits what we can infer about how the article was covered. That being said…
2) Most outlets had a (very) slight preference for a particular narrative
As above, CLAIMS doesn’t, and can’t, say that any particular party – like academia, researchers, journals, news editors, journalists, or social media sharers – is more to “blame” for our results than any other. However, most of the articles had a bit of a focus on one particular party over the others. Some focused a bit more on the media side, and others a bit more on the academic side. These were typically fairly small leanings, and probably not a big deal. We did not observe anything close to extreme skew, like claiming that our study finds that academia, media, or social media are “broken” or similar.
3) Most (but not all) journalists contacted the team for quotes and pre-publication clarifications
It’s nice when we can be involved with the way that our articles are being communicated, particularly when we go through the efforts that we do to explain our study here on metacausal.com. Our approval is not required by any means, and we respect journalistic independence, but sometimes it helps. Science is complicated and easy to mistranslate. Most authors reached out to us for quotes, and of those, most gave us a copy of their article before they published it to check if we had corrections. That probably helped with accuracy.
So, all in all, pretty good job. Some more minor notes below:
4) Some sites reported (wrongly) that only RCTs can produce strong causal inference
This was a bit of an odd one, and in one case we wrote specifically to the media authors in an effort to correct this mistake (to no avail). RCTs certainly make causal inference much easier, but they aren’t the only way to get strong causal inference. This mistake appears in articles from outlets that are more critical of news media and health research, which is slightly ironic. Sometimes simplifications are necessary, but in my opinion, this one can only do harm.
5) Don’t read the comments on news articles
It’s the internet. As most have learned by now, comments on news articles are terrible, and these are not particularly exceptional. The comments on these articles vacillate between reasonable discussion and absolute nonsense. I looked so you don’t have to. You’re welcome.
| Title | Outlet | Type | Date | Notes |
| --- | --- | --- | --- | --- |
| Findings in science, health reporting often overstated on social media | Harvard Gazette / Harvard TH Chan School of Public Health | Press release | June 5, 2018 | Study authors worked with press office for this press release |
| Can’t say we didn’t warn you: Study finds popular health news stories overstate the evidence | HealthNewsReview.org | News / blog article | June 13, 2018 | Article author interviewed Noah Haber before publication |
| Health misinformation in the news: Where does it start? | KevinMD.com | News / blog article | June 20, 2018 | Nearly identical to HealthNewsReview.org article |
| Study examines the state of health research as seen in social media | University of North Carolina at Chapel Hill Gillings School of Global Public Health | Press release | June 19, 2018 | Study authors worked with press office for this press release |
| Overdrijven de media gezondheidsnieuws? (“Do the media exaggerate health news?”) | EIS Wetenschap | News / blog article | June 20, 2018 | |
| Karra publishes study in PLOS One | Boston University | Press release | June 4, 2018 | |
| UNC Study Examines the State of Health Research As Seen in Social Media | Association of Schools & Programs of Public Health | Press release | June 28, 2018 | Appears to be a direct copy of UNC press release |
| Redes sociales han alterado la forma en que se presentan las noticias de salud (“Social media have altered the way health news is presented”) | FNPI | News / blog article | Unknown, published at least before June 28 | |
| Echoing the network | Nieman Lab | News / blog article | August 6, 2018 | |
| Health News In Crisis? | European Journalism Observatory | News / blog article | July 18, 2018 | |
| ‘A large grain of salt’: Why journalists should avoid reporting on most food studies | CBC News | News / blog article | September 6, 2018 | Article author interviewed Noah Haber before publication |
| Il giornalismo sanitario è in crisi? (“Is health journalism in crisis?”) | European Journalism Observatory | News / blog article | September 21, 2018 | Appears to be a reposting of a previous EJO article |
There is a recent trend among health statisticians to discourage the use of p-values, commonly used to define a threshold at which something is “statistically significant.” Statistical significance is often viewed as a proxy for “proof” of something, which in turn is used as a proxy for success. The thought goes that the obsession with statistical significance encourages poorly designed (i.e. weak and misleading causal inference), highly sensationalized studies that have significant, but meaningless, findings. As a result, many have called for reducing the use of p-values, if not outright banishing them, as a way to improve health science.
However, as usual, the issue is complicated in important ways, touching on issues of technical statistics, popular preference, psychology, culture, research funding, causal inference, and virtually everything else. To understand the issue and why it’s important (but maybe misguided) takes a bit of explaining.
We’re doing a three-part set of posts, explaining 1) what p-values do (and do not) mean, 2) why they are controversial, and 3) an opinion on why the controversy may be misguided. Here we go!
What does a p-value tell us?
A p-value is a measure that helps researchers distinguish their estimates from random noise. P-values are probably easiest to understand with coin flips. Let’s say your buddy hands you a coin that you know nothing about, and bets you $10 that it’ll flip heads more than half the time. You accept, flip it 100 times, and it’s heads 57 times and tails 43. You want to know if you should punch your buddy for cheating by giving you a weighted coin.
To start answering that question, we usually need to have a hypothesis to check. For the coin case, our hypothesis might be “the coin is weighted towards heads,” which is another way of saying that the coin will flip heads more often than our baseline guess of an unweighted 50/50-flipping coin. So now we have a starting place (a theoretical unweighted coin, or our “null hypothesis”), some real data to test with (our actual coin flips), and something to compare against (the weighted-towards-heads idea, or our “alternative hypothesis”).
One way we could compare these hypotheses is if we had a coin which we knew was perfectly balanced and we 1) flipped it 100 times, 2) recorded how many were heads, 3) repeated 1 and 2 an absurd number of times (each repetition being one trial), and 4) counted how many of our trials had 57 or more heads and divided that count by the total number of trials. Boom, there is an estimated p-value. Alternatively, if you know some code, you could simulate that whole process in a program without knowing any probability theory at all.
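For the curious, here is a minimal sketch of that simulation in Python (ours, not from the original analysis; the number of simulated experiments is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n_experiments = 100_000  # number of simulated 100-flip experiments

# Each row is one experiment: 100 flips of a perfectly fair coin (1 = heads, 0 = tails)
flips = rng.integers(0, 2, size=(n_experiments, 100))
heads = flips.sum(axis=1)

# Estimated p-value: the share of fair-coin experiments with 57 or more heads
print((heads >= 57).mean())  # comes out near 0.097
```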
Even better, we can calculate that number directly without having to flip a bajillion coins, using what we know about probability theory. In our case, p=0.0968. That means that, had you done this 100-flip test infinite times with a perfectly balanced coin, 9.68% of those 100-flip tests would have had 57+ heads.
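If you prefer to let software handle the probability theory, a one-liner with scipy (again, an illustrative sketch of ours) gives roughly the same answer; any tiny discrepancy from 9.68% comes down to whether an exact binomial or a normal approximation is used:

```python
from scipy.stats import binom

# P(57 or more heads in 100 flips of a fair coin) = 1 - P(56 or fewer heads)
p_value = binom.sf(56, n=100, p=0.5)
print(p_value)  # roughly 0.097
```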
To interpret a p-value, you might say “If the coin wasn’t weighted, the expected probability that we would have gotten 57 or more heads out of 100 had we repeated the experiment is 9.68%,” or “9.68% of trials with 100 flips would have resulted in the same or greater number of heads than we found in our experiment.”
Should you punch your buddy? That depends on whether that number above is meaningful enough for you to make a punching decision. Ideally, you would set a decision threshold a priori, such that if you were sufficiently sure the game was sufficiently rigged (note the two “sufficiently”s), you would punch your buddy.
The default threshold for most research is p≤0.05, or 95% “confidence,” to indicate what is “statistically significant.” With our p=0.0968, it isn’t “significant” at that level, but it is significant if we make our threshold p≤0.10, or 90% confidence.
If you wanted to interpret your p-value in terms of the threshold, you might say something like “We did not find sufficient evidence to rule out the possibility that the coin was unweighted at a statistical significance threshold of 95%.”
A p-value is a nice way of indirectly helping answer the question “how sure can I be that these two statistical measures are different?” by comparing what we observed with an estimate of what we would have expected to happen if they weren’t. That’s it.
Things get a bit more complicated when you step outside of coin flips into multivariable regression and other models, but the same intuition applies. Within the specific statistical model in which you are working, assuming it is correctly built to answer the question of interest, what is the probability that you would have gotten an estimated value at least as big as you actually did if the process of producing that data was null?
What DOESN’T a p-value tell us?
Did you notice that I snuck something in there? Here it is again:
“Within the specific statistical model in which you are working, assuming it is correctly built to answer the question of interest…”
A p-value doesn’t tell you much about the model itself, except that you are testing some kind of difference. We can test if something is sufficiently large that we wouldn’t expect it to be due to random chance, but that doesn’t tell you much about what that something is. For example, suppose we find that chocolate consumption is statistically significantly associated with longevity. What that practically means is that people who ate more chocolate lived longer (usually conditional on / adjusted for a bunch of other stuff). What that does NOT mean is that eating more chocolate makes you live longer, no matter how small your p-value is, because the model itself is generally not able to inform that question.
The choice of threshold is also arbitrary. P-values of 0.049 and 0.051 are practically the same, but one is often declared to be “statistically significant” and the other not. There isn’t anything magical about the p-values above and below our choice of threshold unless that threshold has a meaningful binary decision associated with it, such as “How sure should I be before I do this surgery?” or “How much evidence do I need to rule out NOT cheating before I punch my buddy?” 95% confidence is used by default for consistency across research, but it isn’t meaningful by itself.
P-values also don’t tell us whether our two statistical measures are actually different from each other in the real world. In our coin example, the coin is either weighted (the probability that it is weighted is 100%) or it isn’t (the probability is 0%). Our estimate of the probability of getting heads gets more precise the more data we have. If the coin were weighted towards heads, our p-values would get closer to 0 if the weighting were larger (i.e. more likely to flip heads), but ALSO if we used the same weighted coin and just flipped it more. The weight didn’t change, but our p-value did.
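To make that concrete, here is a small sketch of ours (not part of the original post): keep the coin’s weighting fixed at 55% heads, let each experiment come out exactly at that rate, and just increase the number of flips. The p-value against a fair coin collapses even though the coin itself never changes:

```python
from scipy.stats import binom

# Same weighted coin (55% heads), observed at exactly its expected rate;
# the only thing that changes between experiments is the number of flips.
for n in (100, 1_000, 10_000):
    heads = int(0.55 * n)
    p = binom.sf(heads - 1, n=n, p=0.5)  # P(at least this many heads | fair coin)
    print(n, p)  # p shrinks from ~0.18 to essentially zero
```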
Here’s another way to think about it: the coin in the example was almost certainly weighted at least a little, if only because it’s near impossible to make a perfectly unweighted coin. Choosing whether or not to punch your buddy depends both on how much weighting you believe is acceptable AND on how sure you want to be. A p-value indirectly informs the latter, but not the former. In almost all cases in health statistics, the “true” values that you are comparing are at least a little different, and you’ll see SOME difference with a large enough sample size. That doesn’t make it meaningful.
P-values can be really useful for testing a lot of things, but in a very limited way, and in a way that is prone to misuse and misinterpretation. Stay tuned for Part II, where we discuss why many stats folks think the p-value has got to go.
The following is the opinion of the author, and does not necessarily reflect scientific findings or theory.
Yesterday, a large group of national science funders across Europe announced that they were making open access mandatory for their funding recipients. That effectively bans nearly a continent’s worth of researchers and their co-authors from publication in traditional paywalled journals, and rapidly hastens movement towards open access models of scientific publishing.
Open access is simply the idea that all people, regardless of who you are, should be able to easily access scientific publications without having to pay fees or jump through hoops. Giving everyone access to scientific publications has the potential to vastly increase collaborative efforts, spread scientific findings, and improve science education. Open access is also inevitable given the power of communications technology. Arguably, we’ve already had open access for years, albeit through a questionably legal science equivalent of Napster. That doesn’t in any way take away from the impact of this announcement, which in many ways forces others to hasten their moves to open access.
Before I move on, I need to be absolutely, all-caps-in-bold, clear about one thing: I AM FIRMLY IN FAVOR OF FULL OPEN ACCESS TO SCIENTIFIC PUBLICATIONS AND DATA, with some generally agreed-on ethical and logistical constraints. However, open access also comes with a few caveats. While some would point to how open access impacts publication funding incentives, the biggest issues may be institutional and cultural. They may even be serious enough to do harm if we in the scientific, media, and popular communities don’t adapt and embrace this change. To understand why, we need to dive a bit into a (slightly fictitious) model of how publication works.
Back in the day, publication was very limited. Scientists scienced, and publishers published. Publications were on physical paper, and almost entirely read and debated within the scientific community. That information would make its way into professional organizations and scientific societies, where it was debated and rehashed, and eventually consensus was synthesized and passed to practitioners. A layperson would almost never come in direct contact with research.
While slow and tedious, this old (and, again, slightly fictitious) model had one feature often taken for granted: consensus was built slowly among a community of experts. That, by no means, made those scientific communities immune to popular whim and often deeply flawed conclusions, but it did provide some insulation, which in turn provided some breathing room for debate and consensus-building. A study isn’t the absolute truth; it’s an argument with data, one which can be overturned, backed up, revised, or rejected. It’s made explicitly to be read by “peers,” by which we mean other scientists in the same field, who are more likely to treat studies with skeptical debate. The traditional “peer review” is really only the first step. The real peer review happens through other people doing their own studies and debating, comparing, rejecting, and sharing them.
Jump cut to today: if someone publishes an article about the “link” between chocolate and Alzheimer’s, that goes almost straight to Twitter, where all opinions are roughly equal, for everyone to see. While I, and hopefully readers of this blog, understand why chocolate studies rarely if ever have any bearing on our lives, most people aren’t privileged to be equipped with the kind of time and education it takes to understand these issues. Science involves complicated theory with conflicting data, and jargon that’s hardly understandable or means something totally different to people on the outside. Science is hard, and it’s a privilege to have the resources and time to understand it. Most do not have that privilege.
Scientific research is increasingly discussed, consumed, used, and abused outside of gated scientific communities, but our institutions and culture were made for a time when it wasn’t. That comes with some danger if we fail to adapt. There have always been paths by which popular preference has impacted science, both positively and negatively, and modern communication tends to both catalyze these processes and bypass some of the checks and balances.
When all publications are public speech, all scientists are public speakers. Public discussion is, and should be, a major part of scientists’ jobs, and one which we should embrace. That means adapting scientific culture as well as institutions to meet these needs, and avoiding some of its pitfalls. We run real risks if we cannot adapt to this environment. Research which is poorly communicated, easily mistranslated, or otherwise misleading can cause real harm, both directly and indirectly. The results of the CLAIMS study are at least partially a result of this new open scientific environment.
If it sounds like I am being vague about what that means in practical terms, it’s because I don’t know. We have ideas on what new models might look like for performing and communicating science, but we won’t know what does and doesn’t work unless we try. And with announcements like this one, it looks like we need to try harder, faster.
Meet Paul the Octopus. Paul is famous. When Paul lived in an aquarium in Germany in 2008, his handlers decided to play a game. They would give him two boxes, each with some food in it, one for each team in Germany’s next European Championship match. Paul (mostly) predicted the outcomes of Germany’s European Championship matches in 2008 by eating from the winning team’s box first, just happening to choose Germany each time. But some doubted Paul. They said he was lucky. So Paul stepped up his game, buckled down, studied up, and waited for his chance in the 2010 World Cup. He correctly predicted every single match the German team played, and then went on to predict the finals between the Netherlands and Spain. Don’t believe me? It’s all on Wikipedia.
Unfortunately, Paul has sadly shuffled off this mortal tentacoil, so we can’t do a “real” test of his skills. But we CAN review a few theories on just how Paul, like his namesake, was so prescient.
Theory #1: Paul is really good at predicting football matches
This probably isn’t the blog for you.
Theory #2: Paul (or his handler) loves Germany
In the 2008 matches, Paul chose the box representing his home, Germany, each time, and was mostly right, choosing correctly 4 times and incorrectly 2. In 2010, Paul changed things up. He chose Germany only 5/7 times, and was correct in each instance. So Paul chose Germany 11 times out of 13 (let’s ignore the 2010 finals match, which Germany didn’t play in, for a moment). Maybe Paul’s handlers (who were German, remember) put tastier food in the Germany box. Or maybe Paul prefers the black, red, and gold stripes of the German flag. Who knows? More importantly, who cares, when we have a far simpler explanation?
Theory #3: Paul got lucky
Paul got it right 12/14 times (counting the 2010 finals). Let’s assume for a moment that Paul’s prediction is basically a coin flip, and that he just got lucky. How lucky does Paul have to be for this to work? We can compute the probability of getting 12/14 coin flips right using a simple binomial distribution. Assuming that these 14 trials are all independent (we’ll get to that), the probability that Paul would have gotten exactly 12/14 matches right is roughly 0.6%. That’s not great odds, but it’s not terrible either. An easier way to think about that probability is by its inverse: if you wanted one octopus to get 12/14 boxes right by random chance, you would need about 180 octopodes. So it’s plausible that Paul got lucky.
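For anyone who wants to check the arithmetic, here is a quick sketch of that binomial calculation (ours, purely illustrative):

```python
from math import comb

# Probability of getting exactly 12 of 14 fifty-fifty guesses right
p_12_of_14 = comb(14, 12) * 0.5**14
print(p_12_of_14)      # about 0.0056, i.e. roughly 0.6%
print(1 / p_12_of_14)  # about 180 octopuses needed for one to manage it by luck
```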
Theory #4: Paul got lucky, and we’re bad at understanding uncertainty
#3 works if we had done this before Paul predicted matches. The problem is, we (mostly) didn’t. We know about Paul because he got a little lucky, retroactively. But we don’t know about all the other octopi whose handlers did the same thing, but failed. Most importantly, you don’t know about them because they failed. If you have enough octopuses that know nothing about football, one of them is going to just happen to get it right by chance. The coin that happened to flip the right combination isn’t special just because it happened to flip the right combination. Enough octopuses with typewriters will eventually write an exact copy of 20,000 Leagues under the Sea.
Now, of course, things aren’t really quite that simple, and we’re glossing over some important details. Paul didn’t really have 14 independent trials; he had two sets of trials (or three, depending on whether you count the 2010 final separately). He started becoming famous after the 2008 trials. But he didn’t get REALLY famous until 2010.
We can give Paul the benefit of the doubt and say he was learning the rules of the game in 2008 and look only at 2010, in which case the chance of predicting all 8 matches in 2010 correctly is 1/256, slightly more remarkable than our 1/180 above. On the other hand, maybe he truly did have a German flag preference, or the handler helped a little by offering better food in higher-chance boxes, which would make Paul more likely to be correct through non-prescient influence.
To get all 13 of the Germany matches right, you would need 8,192 octopuses (2^13), or 16,384 (2^14) to have one get all 14 matches right if we include the 2010 final. If you had a few other advantages (like color / food preference) that number is lower. Let’s call it 10,000.
Of course, there weren’t 10,000 octopodes picking matches. After all, there are only a few hundred aquariums in the world. But the comparison isn’t just other octopuses. There are thousands of other “low probability” events happening all the time, from other animals and other sports to anything else. We only know about things that happen, and not those that don’t, and tend to think the things that happen are more remarkable than they are.
Even if you repeat the experiment a bunch of times, streaks are random too. If you’ve ever heard of the Sports Illustrated cover jinx, that’s an extension of this problem. You get on the cover because you (randomly) had an anomalously good streak. You have much better chances of getting that streak if you are a better player, but it’s unlikely that you’ll repeat it a second time. You tend to need a LOT of experiments to tease out what is luck (noise) and what is skill (signal), and even then there is a chance you randomly get misleading results.
Most of the time, this problem is relatively harmless, like our probably not-so-prescient friend Paul here. Sometimes it is deeply harmful.
You’ve seen this before: Person has a deadly metastatic cancer, and is told they have 6 months to live. They take some supplement, and boom, cancer magically cured. Or someone tells you to punch sharks in the nose to avoid being eaten during an attack. Let’s just assume for a moment that all of that is literally, actually true. The problem is simple: you never hear about all the people who took supplements and punched sharks but died anyway. Some people just get lucky.
Most importantly, this happens EVERYWHERE. It’s the main reason why most of those studies that find near miraculous sounding cures for diseases don’t pan out, why anecdotes make bad evidence, and why you shouldn’t pick your stocks based on who made the most money last year. Statisticians aren’t people who manufacture certainty for decisions; we’re people who spend a lot of time understanding and dealing with UNcertainty.
Updates: Corrected 2008 being the European Championship, not the World Cup. Credit to Matthew Rogers for finding this error. Corrected English because I am bad at copy-editing, credit to Dan Larremore.
The following is the opinion of the author, and does not necessarily reflect scientific findings or theory.
A few weeks ago, Alex and I broke down why a study on coffee and its related media articles were misleading. While it might seem obvious that bad studies are bad for our health, the real damage that studies like this do is much deeper, though harder to see and measure. To understand why, we need to start from the obvious.
Direct impact: weak and misleading medical science leads to bad medical decisions
In 2015, two studies came out claiming to have found that statins, drugs typically used to lower cholesterol, cause deadly side effects. The papers were both severely misleading, later resulting in retractions of statements in both papers, but before that happened the media ran with their claims. A 2016 study led by Anthony Matthews looked at statin prescription and refill rates in the UK, and found compelling evidence that these two studies and their media coverage caused huge disruptions in statin refills and prescriptions, resulting in over 200,000 people ceasing to take their statins for a few months. I have plenty of nits to pick with this study, but my biggest is that they probably underestimated the total impact.
It is remarkably difficult to find the causal effect of weak and misleading causal evidence, but occasionally we get some hints. The example of statins is a particularly dramatic story for which we have the rare privilege of having strong evidence, and you can imagine that this sort of thing happens all the time and goes unmeasured.
Which brings us back to the coffee study in question. You would be right in thinking that coffee studies probably do little to no direct harm or help. It’s just coffee. However, you would be wrong to think the problem stops there.
Weak and misleading articles crowd out rigorous ones
That headline space is precious. In principle, every one of those articles could have been about better studies that could be more useful to decision makers. Even better, those media articles could have been written about topics on which there is scientific consensus. Similarly, the time and funding those researchers spent on this misleading coffee article probably could have been put to better scientific use, although it is worth noting that many of the proposed mechanisms for more intense control of scientific studies would probably do more harm than good.
However, headline space, scientific progress, funding, and consumer exposure are not really zero sum games. Taking away one headline does not automatically mean that it will be replaced with a better one, or replaced with anything at all. Further, while this may be a particularly expensive coffee study due to the genetics aspect, most are dirt cheap. If I had to guess, putting the time and money spent on those to other studies would probably not result in a huge net gain for public health.
The most important and impactful reasons why these studies and their media coverage are damaging are far more subtle, and far more insidious.
Weak and misleading science erodes public trust and discourse in science
As usual, a comedian is the one that described it best: Lewis Black’s late 90’s rant on scientific studies flip-flopping on eggs.
As we showed in CLAIMS, the majority of what people see of health science is weak, misleading, and/or inaccurate. These headlines make up nearly 100% of almost everyone’s exposure to health science. While that represents only a fraction of health science, extremely few are privileged with getting to see the big picture, and most of those people are not writing for mainstream news. If the near entirety of what people see of studies looks like scientists flip-flopping on eggs, it shouldn’t be surprising that trust in scientific institutions is cracking. If people only see the least reliable health science, distrust is a reasonable response.
Unfortunately, many of us were indeed caught by surprise over the last few years as we watched severe backlash against scientific thought and institutions coming from news outlets and political rhetoric. When it is difficult for people to distinguish scientific strength, and people are used to weak science, it allows anyone with sufficient lack of knowledge and/or willingness to take advantage of the situation to more easily reject scientific consensus without cause.
We own this, and we need to fix it
A study like the one we saw a few weeks ago should never have entered into the public sphere, and maybe should not have been done at all. It adds little to nothing of note to our scientific knowledge, misleads health decisions, and continues the erosion of public trust in our institutions. Other studies, such as the statins example, have more immediate consequences.
We have a responsibility as scientists to educate, collaborate with, listen to, and intervene in public discussion. We also have a responsibility to encourage our best science, and reject our worst. Sometimes, that means trying things that are uncomfortable and risky.
The family of projects starting with CLAIMS are explicitly intended to be used to intervene and change the interaction of scientific institutions, media, and social media. Many of them are based in critique. Critical scientific review is an unusual thing to be doing at a time when trust in scientific institutions is low, and it makes for some strange (and severely mistaken) bedfellows. It’s a risk which we hope produces net positives for scientific progress and its impact on human lives.
Trying to determine if a study shows causal effects is difficult and time consuming. Most of us don’t have that kind of time or training (yes, that includes almost all medical professionals too). I have a proposed idea for a potential test that anyone can do for any article linking some X to some health Y, and I want to hear your thoughts: count the covariates.
TL;DR: You may be able to get a decent idea of whether or not the study you just saw on social media linking some X to some Y shows a causal relationship by counting the number of covariates needed for the main analysis. The fewer variables controlled for, the more likely the study is to be interpretable as having strong causal inference. The more covariates, the more likely it is to be misleading.
A few important caveats: 1) THIS IS CURRENTLY UNTESTED, but we are currently working on formally testing a pilot of the idea; 2) It will certainly be imperfect, but it might be a good guideline; 3) This probably only works for studies shared on social media; and 4) This is an idea intended for people who don’t have graduate degrees in epidemiology, econometrics, biostats, etc., but the more you know, the better.
Why it could work:
The key intuition here is twofold. A study that is “controlling” for a lot of variables 1) is usually trying to isolate a causal effect, regardless of the language used; but 2) can’t.
Let’s see why this might work, using that coffee study from last week as an example.
Controlling for a lot of variables implies estimating causal effects
The logic comes down to what it means to “control” for something. Take smoking, for example. The reason the authors control for smoking is because smoking messes with their estimation of the effect of coffee on mortality. People who drink more coffee are also more likely to smoke. Smoking is bad for you. One reason, then, that people who drink more coffee might have different life expectancies is because they are likely to die earlier from smoking. So it makes sense to “control” for smoking then, right?
It does make sense, if you are trying to isolate the effect of drinking coffee on mortality. If you don’t care about that causal effect, and have some other reason to want to know this association, you generally don’t need or want to control for other variables. The more variables you control for, the less plausible it is that you are doing anything other than estimating a causal effect.
Controlling for a lot of variables implies inadequate methods to estimate a causal effect
Some research strategies get you great causal effect estimation without having to control for much of anything at all, such as randomized controlled trials, “natural experiments,” and many kinds of observational data analysis methods in the right scenarios. You can’t always do this successfully. Sometimes, you have to control or adjust for alternative explanations.
The problem is when you have to control for a LOT of alternative explanations. That generally means that there was no “cleaner” way to go about the study that didn’t require controlling for so many variables. It also means that there are probably a thousand other variables that they didn’t control for, or didn’t even have data on to start with. It only takes one uncontrolled-for factor to ruin the effect analysis, and there are too many to count. There are also some slightly weirder statistical issues when you imperfectly control for something, and that’s more likely to happen when you are controlling for a lot of stuff.
In that coffee study, the authors controlled for the kitchen sink. However, coffee is related to basically everything we do. People from different cultural backgrounds have different coffee drinking habits. People with different kinds of jobs drink coffee differently. Fitness. Geographic region. Genetics. Social attitudes. You name it, and it is related to coffee. That’s not a problem by itself. What IS a problem is that all of those things ALSO impact how long you are going to live. If you have to control for everything, you can’t.
Count the covariates
To review: controlling for a lot of variables implies that you are looking for a causal effect, but ALSO implies that there is more that needed to be controlled for to actually have estimated a causal effect. See the catch-22?
We can also take a look at causal language here as well. Studies are often considered acceptable in scientific circles (i.e. peer review in journals) as long as they use “technically correct” language with regard to causality. We think that is seriously misleading, but that doesn’t stop those studies from hitting our newsfeeds.
The most likely scenario for most people seeing a study that uses strong causal language and controls for very little is that it’s one of those studies that actually can estimate causality, such as most randomized controlled trials. On the other hand, a study that uses weak causal language and controls for very little probably isn’t actually trying to estimate a causal effect, and our proposed rule doesn’t really say much about whether or not these studies are misleading.
We can also look at the language used, where studies may use stronger (effect/impact/cause) or weaker (association/correlation/link) causal language. It’s also worth considering how the authors state their evidence can be used, as that can also imply that their results are causal. The kinds of studies that control for a lot of variables and still claim causal effects outright are a strange bunch and unlikely to be seen in your social media news feed. This rule just doesn’t work as well for them, but most people are unlikely to see them anyway, so the rule is still mostly ok. A rough sketch of how the pieces might fit together is below.
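To tie the pieces together, here is a purely illustrative, untested sketch of how the proposed rule of thumb might be encoded. The function name, the “strong”/“weak” labels, and the cutoff for “controls for very little” are all invented for the example:

```python
def count_the_covariates(n_covariates, causal_language, few_threshold=3):
    """Untested, illustrative sketch of the proposed rule of thumb.

    n_covariates: covariates controlled for in the main analysis
    causal_language: "strong" (effect/impact/cause) or "weak" (association/link)
    few_threshold: invented cutoff for "controls for very little"
    """
    if n_covariates <= few_threshold:
        if causal_language == "strong":
            return ("Likely a design that can support causal claims "
                    "(e.g. an RCT or natural experiment).")
        return ("Probably not trying to estimate a causal effect; "
                "this rule says little either way.")
    return ("Many covariates: implies a causal question AND implies the methods "
            "probably can't answer it; treat causal-sounding conclusions with suspicion.")
```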
Important considerations and discussion
Multiple specifications can make this hard to deal with. In the phrase “number of covariates required for the main analysis,” there are two tricky words: required and main. Most studies have several ways of going at the same problem, and it’s difficult to determine what the “main” one is. It is common for a study to have both a “controlled” and an “uncontrolled” version, which may or may not produce very different numbers. If the numbers don’t change much between those two versions (or, even better, you have the background to know what is required and what is not), controlling for those variables probably wasn’t “required,” so they may not need to be counted. It is notable that the coffee study we keep talking about doesn’t do anything of the kind. All plausible main analyses are heavily controlled, and as such would fail any version and interpretation of this test.
There is probably a paradox that occurs here (credit to Alex Breskin for pointing this out). In the case of multiple studies on the same topic using roughly the same methods, observational studies controlling for more covariates probably do better with regard to causal inference. But because we are not selecting among studies in that way, and we are intending this as a guideline for ALL studies on social media, the opposite may be true.
It is also worth noting that this may end up being mostly indistinguishable from RCT vs. everything else, which is not the intent.
There are also some sets of methods which do require moderate numbers of covariates to work, and occasionally these articles appear in our news feeds. One example from Ellen Moscoe is difference-in-difference studies for causal effects of policies. These typically need controls for time and place, which is at minimum two covariates.
We also just don’t know if this idea actually works. But it might, and we can test it.
Any thoughts on why this might fail? Alternative proposed tests? Let us know in the comments or get in touch!
This week, yet another article about the association between coffee and mortality plastered our social media feeds. This trope is so common that we used it as an example in our post on LSE’s Impact Blog, which happened to be released the very same day this study was published. We helped comment on the study and reporting for a post in Health News Review, which focused on how the media misinterpreted this study. Most news media made unjustifiable claims that suggested that drinking more coffee would increase life expectancy. The media side, however, is only half of the story. The other half is what went wrong on the academic side.
In order to have estimated a causal effect, the researchers would have needed to find a way to account for all possible reasons that people who drink more coffee might have higher/lower mortality that aren’t the direct result of coffee. For example, maybe people who drink a lot of coffee do so because they have to wake up early for work. Since people with jobs tend to be healthier than people without them, people who drink coffee may be living longer because they are healthy enough to work. However, this study can’t control for everything, so what they find is an association, but not an association that is useful for people wondering whether they should drink more or less coffee.
The study is very careful to use language which does not technically mean that they found that drinking more coffee causes longer life. That makes them technically correct, because their study is simply incapable of rigorously estimating a causal effect, and they don’t technically claim they do. Unfortunately, in the specific case of this study, hiding behind technically correct language is at least mildly disingenuous. Here is why:
1) The authors implied causation in their methodological approach
The analytic strategy provides key clues that this study was designed to answer a causal question. Remember above where we talked about controlling for alternative explanations? If you are only interested in association (and there might be some reasons why you might want this, albeit a bit contrived), you don’t need to control for alternative explanations. As soon as you start trying to eliminate/control for alternative explanations, you are, by definition, trying to isolate the one effect of interest. This study controls for a lot of variables, and by doing so, tries to rule out alternative explanations for the association they found. There is no reason to eliminate “alternatives” unless you are interested in a specific effect.
2) The authors implied causality in their language, even without technically saying so
The authors propose several mechanistic theories for why the association was found, including “reduced inflammation, improved insulin sensitivity, and effects on liver enzyme levels and endothelial function.” Each of those theories implies a causal effect. When interpreting their results, they state that “coffee drinking can be a part of a healthy diet.” Again, that is a conclusion which is only relevant if they were estimating the causal effect of coffee on health, which they cannot do. How can you say coffee is ok to drink if you didn’t tell me anything about the effect of drinking coffee?
3) Alternative purposes of this study are implausible or meaningless
Effect modification by genetics
The stated purpose of the study and its contribution to the literature is about the role of genetics in regulating the impact of coffee on mortality. The problem here, again, is that in order to determine the impact of genetics on regulating the effect of coffee on mortality, you first have to have isolated the effect of coffee on mortality. You can not have “effect modification” without first having an “effect.” That’s a shame, because it is totally plausible that there was some neat genetics science in this study that we aren’t qualified to talk about.
Contribution to a greater literature
In general, we should ignore individual studies, and look at the consensus of evidence that is built up by many studies. However, there are literally hundreds of studies about coffee and mortality, almost all of which commit the exact same errors with regard to causation. One more study that is wrong for the same reason that all the other studies are wrong gives a net contribution of nearly nothing. They may be contributing to the genetics literature, but this study does not add any meaningful evidence to the question of whether or not I should have another coffee.
Studying whether coffee is linked to mortality is inherently a causal question. To pretend otherwise is like a batter missing a swing, and then claiming they didn’t want to hit the ball anyway. Just by conducting this study, a causal effect is implied, but as we already noted this kind of study is not useful for causal inference. This specific issue is unfortunately common for studies in our media feeds, and was one of the reasons we did the CLAIMS study in the first place. We contend that researchers need to be upfront about the fact that they want to estimate causal effects, and to then consider whether or not it is reasonable to do so for the exposures and outcomes they are considering.
We also can not stress enough a more general point: the authors of this study and the peer review process made a lot of mistakes, but this study does not represent all of academic research. It is a shame that studies like these are what makes the top headlines time and time again instead of the excellent work done elsewhere.
Can’t we please just accept coffee (and wine and chocolate) for what it is: delicious?
The CLAIMS study took a team of 22 highly skilled people across multiple institutions about 800 person-hours carefully reading, reviewing, rating, and debating papers, not to mention the countless hours spent designing the review tool and protocol, managing the study, doing the analysis, and writing the manuscript. That is a difficult operation in the best of circumstances, but note what appears near the top of the published article:
“Funding: The author(s) received no specific funding for this work.”
The entire CLAIMS study was done without any financial support, with all effort and data being donated for free. In part, the lack of funding helps keep us away from possible conflicts of interest, real or imagined, as we criticize ourselves and our peers. However, doing reviews at this scale without funding is a trick we could probably pull off only once.
We designed CLAIMS to stand on its own, but also to act as a launching pad for a series of much larger projects. Our next steps – including measuring how and where scientific information is being distorted and designing better tools to do that kind of review at scale – require funding. If we are lucky, CLAIMS and this site will help generate interest in the topic through academic and social media. One constant feature of social media is people talking about what is wrong with social media. Social media drives press coverage. Press coverage of scientific studies may improve the chances of funding future studies. Our study explores the state of health science at the point of social media consumption. At the very least, that is no coincidence.
Put another way, CLAIMS is a health science study which critically examines health science in social media, while also designed to itself be consumed in social media to help fund further studies through the same mechanisms causing problems in the first place. Exploring and embracing that irony is one reason why we have this blog. We are experimenting in the intersection of social media and science using our own study, and documenting the process as transparently as possible.
If you are interested in what we are doing, and want to help out in any way, get in touch! We are looking for all kinds of people, whether you are a journalist, a scientist, a social media mogul, or a potential funder.