In 2019, I wrote about an article in the New York Times "1619 Project," which drew on a paper published in the Proceedings of the National Academy of Sciences in 2016: "Racial Bias in Pain Assessment and Treatment Recommendations, and False Beliefs about Biological Differences between Blacks and Whites." The paper was based on a survey of medical students and residents that gave them hypothetical cases and asked them to rate how much pain they thought the patient would be feeling and what treatment they would recommend for the pain (narcotics vs. something weaker).* According to the Times, "when asked to imagine how much pain white or black patients experienced in hypothetical situations, the medical students and residents insisted that black people felt less pain." Actually, the ratings were almost identical--the mean for black cases was 7.622 on a scale of 0-10, and the mean for whites was 7.626--so the description in the Times story was completely wrong. Basically what the study found was that the number of false beliefs was associated with racial bias, but at the average level of false beliefs there was no bias in either direction.
Last year I saw another Times story that referred to the paper as "an often-cited study," and I checked and found it had about 1,400 citations, according to Google Scholar. After the Supreme Court decision on affirmative action, I ran across another article that mentioned it (I forget where that appeared), which led me to check the citation count again. It was up to almost 2,000 then, and is now over 2,000. That's a lot: it ranks 6th out of the couple of thousand papers published in PNAS in 2016. Presumably the 1619 Project story helped to bring attention to it, but it was already doing well before then: it had 25 citations in 2016, then 54, 103, and 152 in 2017-2019.
I was interested in seeing how well the academic literature did in describing the findings of the paper. Google Scholar lists citing articles roughly in order of how many citations those articles have themselves received, so I started from the top and picked the first 20 with over 100 citations (a couple of books were listed, but I limited myself to journal articles).
One of the citations could be called incidental: “Contemporary, ‘mainstream’ epidemiology’s technocratic focus on individual-level biological and behavioral risk factors”
Four of them were accurate, in my judgment: “a recent study showed that half of medical students and residents in their sample held biased beliefs such as ‘Black people’s skin is thicker than White people’s skin,’ assessed Black mock patients’ pain as lower than White mock patients, and subsequently made less accurate treatment recommendations for Black compared to White mock patients.”
“medical students who endorsed the false beliefs that Black patients had longer nerve endings and thicker skin than White patients also rated Black patients as feeling less pain and offered less accurate treatment recommendations in mock medical cases.”
“in a 2016 study to assess racial attitudes, half of White medical students and residents held unfounded beliefs about intrinsic biologic differences between Black people and White people. These false beliefs were associated with assessments of Black patients’ pain as being less severe than that of White patients and with less appropriate treatment decisions for Black patients.”
“and a substantial number of medical students and trainees hold false beliefs about racial differences.”
Four were partly accurate: “Implicit bias among clinicians and other healthcare workers can . . . contribute to . . . lower quality of care received . . . .”
“document false beliefs among medical students and residents regarding race-based biological differences in pain tolerance that resulted in racial differences in treatment.”
“minorities . . . are less likely to have their pain appropriately diagnosed and effectively treated due to structural constraints, racialized stereotypes, and false beliefs regarding genetic differences on the part of health care providers.”
“contemporary examples of anti-Black racism in healthcare in North America include racial bias in pain assessment and treatment recommendations between White and Black patients based on false beliefs about biological differences"
These correctly say that the study found evidence that views about biological differences were associated with differences in pain assessment and treatment, but fall short because they imply either that the study involved treatment of real cases or that it found differences in average levels of pain assessment.
And one cited this study while describing a completely different one: “Black mothers in the wealthiest neighborhoods in Brooklyn, New York have worse outcomes than white, Hispanic, and Asian mothers in the poorest ones, …. likely due to societal bias that impacts Black women.”
I don't know what's typical, but more than 50% inaccurate citations is disturbing. Another striking thing was that I didn't find any efforts to replicate the study. It had an obvious limitation: the sample was just students and residents at one medical school. So it would be natural to try to replicate it at other medical schools, or among practicing physicians. You could also go beyond straight replication, and do things like consider other hypothetical cases (e.g., ones that were more ambiguous), or the possibility of interactions between race and other factors like gender. The data were from an online survey, so replicating it would be cheap and easy--I could see giving it as a project for a master's student or even an undergraduate. Of course, I can't say that there are no published replications, but I made enough effort to be confident that there aren't many. Is that because of a lack of attempts, or because attempts haven't found anything, so they haven't been published?**
*There was also a survey of Mechanical Turk participants, but that doesn't get much attention.
**The evidence in the original study was weak--there's a good chance that it's just a combination of random variation and what Andrew Gelman calls "researcher degrees of freedom."
Nice work. There are many cases, I am sure, of misinterpreted results becoming canonical. I would bet that a common pattern is for work that is noticed and interpreted in the media, particularly the New York Times, to become installed as “truth” even for many scholars.
Thank you for discussing this. I teach this article in a sociology class, and it's very discouraging to see how often its results are inaccurately described. We even examine and discuss the raw data, but on exams a nontrivial proportion of students recall the study substantiating clinical outcomes, which it does not. In addition to the issues you note, I'll add that those on the "low" end of the inaccurate-beliefs index showed race bias in the opposite direction--something few comment on--and that the magnitude of the difference is fairly small regardless. It's an interesting study, worthy of further investigation and, of course, real-world applications. I think some of that is going on, and I've seen some work appearing to replicate and extend the results, but the main point remains that the original study is rather frequently misrepresented as showing things it doesn't show. A bigger problem than forking paths and researcher degrees of freedom, I think, is that the article (as with a lot of junky social science) unnecessarily standardizes variables and presents only conditional means, even though the samples were small enough to clearly and accurately represent relationships with the raw data. The raw data communicate the results more clearly and honestly.
You note that this is accurate:
"in a 2016 study to assess racial attitudes, half of White medical students and residents held unfounded beliefs..."
But really it’s not accurate. Here’s what the paper states:
"About 50% reported that at least one of the false belief items was possibly, probably, or definitely true"
The survey has fifteen items. Eleven of the fifteen are “false belief” items. So the standard here is that one of eleven false belief items was reported as *at least possibly* true. Or, phrasing this differently, it's accurate to say:
“The false belief items were identified as possibly, probably, or definitely true at least 4.5% of the time”
HUH???? Five percent???? What happened to half the students having “unfounded beliefs”????
Phrased that way it’s hard to imagine anyone thinking there is rampant racism among these students!
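The commenter's 4.5% figure is a simple lower bound, which can be sketched as follows (the 50% and eleven-item figures are taken from the quotes above; everything else is just arithmetic): if half of respondents endorsed at least one of the eleven false-belief items, the item-level endorsement rate is smallest when each such respondent endorsed exactly one item.

```python
# Lower-bound arithmetic behind the "4.5%" rephrasing.
# Figures assumed from the paper as quoted above:
#   ~50% of respondents rated at least one false-belief item as
#   possibly/probably/definitely true; there are 11 such items.
n_false_items = 11
share_endorsing_any = 0.50

# Minimum possible fraction of all false-belief responses that were
# endorsements: each endorsing respondent endorses exactly one item,
# so endorsements >= 0.50 * N out of 11 * N total responses.
min_item_level_rate = share_endorsing_any / n_false_items
print(round(min_item_level_rate * 100, 1))  # prints 4.5 (percent)
```

The actual item-level rate could of course be higher if respondents endorsed multiple items; the point is only that the headline "half held unfounded beliefs" is compatible with a much lower per-item rate.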
Continued from above:
But what’s up with the four “true belief” items? The implicit assumption in the portrayal of this study is that there are no physiological differences between blacks and whites, but there actually are physiological differences between the two groups. So the mere fact that someone thinks there are differences is not evidence of racism. Moreover, while certain “false belief items” seem implicitly designed to represent false stereotypes of blacks, there’s no physiological reason why they should not be true, given what actually is true.
For example,
False: “Black people's blood coagulates more quickly than whites'”
True: “Blacks have denser, stronger bones than whites”
Without being taught both of these things specifically, given that the second statement is true, it’s not obvious why the first statement would be false, nor why it would be deemed “racist” to at least consider it a possibility. Without a comparison to how people scored on the “true belief” statements, the score on the false belief statements doesn’t necessarily mean anything about racism; it’s just as likely to be an assessment of knowledge. For these two statements, among the first-year students, 29% indicated the false statement was possibly, probably, or definitely true, while 25% indicated the true statement was possibly, probably, or definitely true. So if we’re claiming some stereotype that black people are more brutish than whites, students’ beliefs about these two statements don’t seem to support that claim. They appear indifferent.
These are students ranging over four years of medical school. Did they learn anything? Judging by the two questions above, they did! Among the THIRD year students, only 3% (vs 29% of first year students) indicated the false statement was possibly, probably, or definitely true; while 40% (vs 25% of first year students) indicated the true statement was possibly, probably, or definitely true. (I ignored the residents’ responses because it’s a small group: only 28 individuals, vs 60+ for the other three groups). That is the trend on every question: the students do better in third year than in first year by quite a bit!
I would not replicate this study. It has too many flaws. At the very least, I would attempt to establish the degree to which the respondents simply lacked knowledge by making the number of accurate statements and inaccurate statements equal and assessing the results. I would also expand to multiple racial groups.
As you observe, even apart from the weakness of the evidence for anti-black discrimination, there are some points that suggest alternative interpretations. When I characterized citations as accurate, I just meant that they weren't factually incorrect, not that I necessarily thought they were good characterizations of the evidence.