Monday, January 2, 2023

Lost in translation, part 2

In October, I had a post on miscommunication of research results.  After Andrew Gelman discussed it in his blog, it went on to become my most-viewed post (by a 2:1 margin).  This post gives a few more thoughts on the issue.

First, a point raised by some comments on Andrew's post.  I quoted a paper that said "the prevalence of psychotic symptoms among mass murderers is much higher than that in the general population (11% v. approximately 0.3-1%)" and added "That is, people with psychotic symptoms were between 10 and 30 times more likely to commit mass murder than people without psychotic symptoms."   How did I get  "between 10 and 30 times more likely"?  The calculation was:  11 is about 10, 10 divided by 1 is 10, and 10 divided by .3 is 33.3, which is about 30.   So I should have said something like "roughly between 10 and 30".  To do the calculation properly, the formula is (p*(1-q))/(q*(1-p)), where p is the prevalence of psychotic symptoms among mass murderers and q is the prevalence in the general public, which gives 12.2 if q is 1% and 41.1 if it's 0.3%.  I didn't bother doing the exact calculation because by the standards of things I usually study, even 10 times more likely is an extremely large difference:  my question was how that came to be described as "dispel[ling] the myth that having a severe psychiatric illness is predictive of who will perpetrate mass murder."
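For anyone who wants to check the arithmetic, here is a small sketch of that calculation in Python (the function name relative_likelihood is just a label I've chosen for illustration, not something from the paper):

    # p = prevalence of psychotic symptoms among mass murderers
    # q = prevalence of psychotic symptoms in the general population
    # The ratio (p*(1-q)) / (q*(1-p)) compares the odds of committing mass
    # murder for people with psychotic symptoms versus people without them.
    def relative_likelihood(p, q):
        return (p * (1 - q)) / (q * (1 - p))

    p = 0.11                    # 11% of mass murderers
    for q in (0.01, 0.003):     # 1% and 0.3% of the general population
        print(f"q = {q:.1%}: {relative_likelihood(p, q):.1f} times more likely")

Running this prints 12.2 for q = 1% and 41.1 for q = 0.3%, the figures given above.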

On to the main point:  I suggested that "sometimes a focus on making sure that people don't draw the wrong conclusions comes at the expense of explaining what the research actually found," and that this seemed to be particularly common in medicine and public health.  I mentioned two other cases in which the conclusions reported in news stories were very different from the actual research findings.  That leads to another question:  why were these studies being discussed in the media?  The one on mass shootings was about an issue that had been in the news, but the other two were just about papers that had been published recently.  Reporters generally write about events--unusual or unexpected things.   With science, this means discoveries or breakthroughs, like the recent fusion experiment.  But in the social sciences, discoveries or breakthroughs are rare (or maybe nonexistent)--if there is progress, it's a slow increase in understanding.  Journalists seem to have figured this out, so although economists, political scientists, or sociologists are sometimes quoted, there are very few stories that try to summarize a specific piece of research in economics, political science, or sociology.  But in medicine and health, there are sometimes discoveries or breakthroughs, so journalists have the sense that a particular study might be news.  Reporters don't have the time or expertise to read journals and go to conferences and decide for themselves, so universities or journals publicize research in the hope that it will attract media attention.  But that often involves exaggerating the contribution, or making the conclusions sound more startling, or emphasizing something that wasn't the primary focus but which people are interested in.

My conclusion is that journalists should make distinctions between the parts of health and medicine in which discoveries might occur--e.g., a new vaccine--and those which are more like social science.  For example, there have been enough cases of mass murder so that we know something about what kinds of people are more likely to commit it, and no new study is likely to show that everything we think we know is wrong or to reveal that beneath the complex surface there is really one underlying cause.  On a topic like this, journalists should ask researchers to give commentary, but shouldn't be looking for dramatic new discoveries.
