Thursday, October 31, 2024

Significant at the 32% level

Nature Human Behaviour recently published an article with the title "Underrepresented minority faculty in the USA face a double standard in promotion and tenure decisions."  They also published a comment on it, which said the authors "find double standards negatively applied to scholars of colour, and especially women of colour, even after accounting for scholarly productivity."  Science published a piece on the article, titled "Racial bias can taint the academic tenure process—at one particular point."  So the study must have found strong evidence, right?

Their key findings involved college-level (e.g., Arts & Sciences, Business, Fine Arts) tenure and promotion committees, where Black or Hispanic candidates got more negative votes than White or Asian candidates.  I'll look at the probability of getting a unanimous vote of support, where they found the strongest evidence of difference.  In a binary logistic regression with several control variables (university, discipline, number of external grants, time in position, and whether the decision involves tenure or promotion to full professor), the estimated effect of URM status is -.581 with a standard error of .246.  That gives a t-ratio of -2.36 and a p-value of .018.  If you prefer odds ratios, it's an estimate of .56 with a 95% confidence interval of .35 to .91.  That's reasonably strong evidence by conventional standards.
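As a check on these figures, the t-ratio, p-value, and odds-ratio confidence interval can all be recovered from the coefficient and standard error alone.  This is a minimal standard-library sketch, not the authors' code:

```python
import math

def z_test(est, se):
    """Two-sided p-value and 95% CI (on the odds-ratio scale) for a logit coefficient."""
    z = est / se
    # Normal CDF computed from the error function (standard library only)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    lo, hi = est - 1.96 * se, est + 1.96 * se
    return z, p, math.exp(est), math.exp(lo), math.exp(hi)

# Coefficient and standard error for URM status reported above
z, p, odds, lo, hi = z_test(-0.581, 0.246)
print(f"z = {z:.2f}, p = {p:.3f}, OR = {odds:.2f} [{lo:.2f}, {hi:.2f}]")
```

Running this reproduces the numbers in the text: z of about -2.36, p of about .018, and an odds ratio of .56 with a confidence interval of roughly .35 to .91.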

What about scholarly productivity?  They calculated the "h-index," which is based on citation counts,* and standardized it to have a mean of zero and standard deviation of one.  If you add it as a variable:

            est      se    t-ratio      P
H          .375    .129     2.91     .004
URM       -.301    .299    -1.01     .315

Now the estimated effect of URM status is nowhere near statistical significance.  It is still large (the odds ratio is about .75, and the 95% confidence interval goes as low as .41), so the conclusion shouldn't be that there is little or no discrimination--there's just not enough information in the sample to say much either way.**

But the authors didn't report the previous analysis--I calculated it from the replication data.  They gave an analysis including the H-index, URM status, and the interaction (product) of those variables:

            est      se    t-ratio      P
H          .330    .128     2.59     .010
URM       -.041    .337    -0.12     .903
H*URM     1.414    .492     2.87     .004

That means that H has a different effect for URM and White or Asian (WA) candidates:  for WA, it's .33, and for URM it's .33+1.41=1.74.  The URM coefficient gives the estimated effect of URM status at an H-index of zero.  At other values of the H-index, the estimated effect of URM status is -.041+1.41*H.  For example, the 10th percentile of the H-index is about -1, so the estimated effect is about -1.46.  The 90th percentile of the H-index is about 1, so the estimated effect of URM status is about 1.37.  That is, URM faculty with below-average H-indexes have a lower chance of getting unanimous support than WA candidates with the same H-index, but URM faculty with above-average H-indexes have a higher chance.  This is a "double standard" in the sense of URM and WA faculty being treated differently, but not in the sense of URM faculty consistently being treated worse.
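The implied effect of URM status at any given level of productivity follows directly from the coefficients in the interaction table.  A small sketch, using the table's point estimates (and ignoring, for simplicity, the considerable uncertainty around them):

```python
import math

# Coefficients from the interaction model in the table above
b_urm, b_int = -0.041, 1.414

def urm_effect(h):
    """Estimated log-odds effect of URM status at standardized h-index h."""
    return b_urm + b_int * h

# Roughly the 10th, 50th, and 90th percentiles of the standardized h-index
for h in (-1, 0, 1):
    eff = urm_effect(h)
    print(f"H = {h:+d}: effect = {eff:+.2f}, odds ratio = {math.exp(eff):.2f}")
```

The sign of the estimated effect flips as the h-index crosses its mean, which is the crossover pattern described in the text.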

The authors describe this as "differential treatment in outcomes is present for URM faculty with a below-average H-index but not for those with an average or above-average H-index."  They suggest an "intriguing question for future research:  do URM faculty with an above average h-index perform better than non-URM faculty with the same h-index?"  But the interaction model is symmetrical--what justifies treating the estimate for low h-indexes as a finding and the estimate for high h-indexes as just an "intriguing question"?  You could fit a model in which URM faculty are disadvantaged at lower levels of productivity but there is no difference at moderate or high levels of productivity.  I've done this, and it fits worse than the standard interaction model, although there's not enough information to make a definitive choice between them.

The result about the interaction between URM status and h-index is interesting, but doesn't support the claim of a general bias against URM faculty.  So why is this study being hyped as strong evidence of bias?  One obvious factor is that many people believe or want to believe that there is a lot of bias in universities, so they'll seize on anything that seems to support this claim.  A second is that people get confused by interaction effects.  But I think there's a third contributing factor:  Nature and Science come from a tradition of scientific reports:  just give your "findings," not a lot of discussion of how you got there and things you tried along the way.  Journals in sociology, political science, and economics come from a more literary tradition--leisurely essays in which people developed their ideas or reviewed previous work.  This tradition continued even after quantitative research appeared:  articles are longer and have more descriptive statistics and more discussion of alternative models.  If this paper had been submitted to a sociology journal, I'm pretty sure some reviewer would have insisted that if you're going to have an interaction model, you also have to show the model with only the main effects of H-index and URM.  That would have made it clear that the data doesn't provide strong evidence of discrimination.  It also might have led to noticing that there's a lot of missing data for the H-index (about 30% of the cases), which is another source of uncertainty.  



*The largest number H for which you have H articles with H or more citations in Google Scholar.  

**This is not the authors' fault--it's hard to collect this kind of data, and as far as I know they are the first to do so. 


Wednesday, October 23, 2024

Why so close?

Most recent polls show the presidential race as pretty much neck-and-neck.  Even if the polls have some systematic error, it seems safe to say that the race will be close by historical standards.  How is Donald Trump staying close, despite all his baggage?  I think that the main reason is that most voters remember things as pretty good during the first three years of his administration.  Of course, things changed in the last year, but Trump mostly stepped aside and let state and local governments deal with Covid and the protests after the murder of George Floyd--he offered opinions about what should be done, but didn't make much effort to implement them.  As a result, people give him a pass--if they didn't like what happened, they blame their governor or mayor.  What about his attempt to overturn the results of the 2020 election, culminating in January 6?  Here, the key thing is that Republican elites have stuck with him, saying that there were problems with the election, that Democrats did similar things in the past, that they are treating him unfairly now, or that they are planning to do worse things in the future.  These kinds of arguments don't have to persuade to be effective; they just have to muddy the water, so that voters think of the whole issue as "just politics"--a confusing mass of charges and countercharges.

However, I think that Trump also has a distinct positive--the perception that he says whatever is on his mind.  Since 1988, questions of the form "Do you think that ______ says what he believes most of the time, or does he say what he thinks people want to hear?"* have been asked about presidents and presidential candidates.  The table shows the average percent saying "says what they believe" minus the percent saying "thinks people want to hear," and the number of surveys that asked about each figure:

             Avg      N
Trump      19.50      2
McCain      6.29      7
Bradley     5.50      2
GWB         2.00     21
Perot       2.00      2
Obama       0.67     12

Giuliani   -7.00      1
Dukakis   -11.00      1
Dole      -18.00      1
Hillary   -20.40      5
Romney    -21.80      5
Gore      -23.50      8
Kerry     -26.08     12
Bill      -26.63      8
GHWB      -30.00      3

There are substantial differences among them.  Trump has the highest average, and although only two surveys asked about him, the evidence is still pretty strong.  Also, the figures don't just track general popularity--Bill Clinton is the second lowest, even though he was a popular president.  Unfortunately, the question hasn't been asked since 2016, so it's possible that perceptions of Trump have changed (as they did for George W. Bush, who improved over time), but I doubt it.  

This doesn't mean that people believe Trump tells the truth--he has always said a lot of things that are false or ridiculous.  However, most people think that they are pretty good at detecting lies, exaggeration, or just letting off steam.  As a result, they may prefer someone who is unfiltered and frequently untruthful to someone who seems calculating.  As Ezra Klein says, "What makes Trump Trump . . . [is] the manic charisma born of his disinhibition."

[Data from the Roper Center for Public Opinion Research]





*Of course, with "she" rather than "he" for women.  The earliest (1988) versions had slightly different wording.  

Wednesday, October 16, 2024

A new climate?

 In the years before Trump, most Republican politicians tried to avoid taking a clear position on climate change:  they generally said it was complicated and uncertain, and that more research was needed.  Trump, however, has a clear and consistent position:  it's a hoax.  According to the Trump Twitter Archive, his first mention of climate change on Twitter was in 2012:  "Why is @BarackObama wasting over $70 Billion on 'climate change activities?' Will he ever learn?"  One of his most recent (May 2024):  "So bad that FoxNews puts RFK Jr., considered the dumbest member of the Kennedy Clan, on their fairly conservative platform so much. ... He’s a Radical Left Lunatic whose crazy Climate Change views make the Democrat’s Green New Scam look Conservative."  Has this shift affected public opinion?  In 2019, I had a post on two questions that had been asked since the late 1990s, both of which showed a trend towards thinking that climate change was a serious problem.  One of those, "Do you think that global warming will pose a threat to you or your way of life in your lifetime?", has been asked a few times since then.  The updated results: 


The change may have slowed down, but it doesn't seem to have stopped, and definitely hasn't reversed.

I found two additional questions:  one that has been asked in Pew surveys, "I'd like your opinion about some possible international concerns for the United States. Do you think that each of the following is a major threat, a minor threat, or not a threat to the United States?...Global climate change," and a similar one by Gallup, "Next, I am going to read you a list of possible threats to the vital interests of the United States in the next 10 years. For each one, please tell me if you see this as a critical threat, an important but not critical threat, or not an important threat at all.)...Global warming or climate change."  The averages (higher numbers mean greater perceived threat):


Belief that climate change was a threat continued to rise during the Trump administration, but has fallen under Biden.  Why the difference between these questions and "threat to you or your way of life"?  Possibly it's because these refer to international threats and appear in the context of questions about other potential threats.  Greater international turmoil during the Biden administration may have displaced concern about climate change as a threat to American interests (people aren't limited in the number of things that they can call critical threats, but I think there's some tendency to feel like you should make distinctions).

There are other questions on climate change, which I may look at in another post.  But at least through 2020, concern with climate change continued to increase.  This may be because despite Trump's strong feelings on the issue, he didn't highlight it to the extent that he did with immigration, tariffs, and charges of election fraud.


[Data from the Roper Center for Public Opinion Research]

Thursday, October 10, 2024

Bonus

About a month ago, I discovered that I am listed as the editor of the "EON International Journal of Arts, Humanities & Social Sciences."  In fact, I had never even heard of this journal, so I sent them an e-mail asking them to remove my name.  I never heard back and am still listed as editor, so I decided to take further action.  They give their mailing address as:

 2055, Limestone Rd Ste 200C, Zip Code 19808 Wilmington,
Delaware, USA

I suspect that they are actually based outside the United States, since they clearly aren't familiar with American conventions for writing addresses, but Google Maps shows a building at 2055 Limestone Rd, so I wrote to them.  The concluding sentences in my letter are:

"Falsely claiming that I am the editor of a predatory journal is defamatory.  If you do not remove my name from the listing by October 15, I will consult with my attorney about the possibility of legal action."

Let's see if that has any effect.



Focused on the future, part 3

My last two posts were about answers to a question on confidence that votes "will be accurately cast and counted" in elections, which has been asked a number of times since 2004.  As far as I know, there were no comparable questions before then.  However, a question on "dishonesty in the voting or counting of votes in your district" was asked in 1959 and 1964, and since 2004 there have been several "accurately cast and counted" questions that specified "at the facility where you vote."  I showed the overall results in a previous post, and will look at party differences in this one.  There's a general tendency for people to be more positive about things that are closer to them, but my question is whether partisan differences in views on local elections track partisan differences in views on national elections.  Here is average confidence in "the facility where you vote" by party:


It has declined for all groups, although the decline seems smaller for Democrats.  Independents are the least confident, which is probably because they tend to be more suspicious of politics in general.  Comparing confidence in national and local elections for each partisan group (red is local, blue is national):






The changes aren't parallel:  for Democrats and Independents, the gap between confidence in the national and the local vote has become smaller; for Republicans, it has become bigger.  The results for Republicans aren't surprising, since their claims of fraud have focused on heavily Democratic places, like Philadelphia, Detroit, and Atlanta.  The general tendency seems to be for confidence in local voting to vary less than confidence in national voting.

[Data from the Roper Center for Public Opinion Research]

Monday, October 7, 2024

Focused on the future, part 2

In 2004, Gallup asked "How confident are you that, across the country, the votes for president will be accurately cast and counted in this year’s election – very confident, somewhat confident, not too confident or not at all confident?"  They have repeated the question a number of times, most recently just two weeks ago.  Their report says that the overall level of confidence has stayed about the same since 2008, but with a growing partisan division--Democrats becoming more confident and Republicans less confident.  The report merged "very confident" and "somewhat confident," which is a potentially important distinction, so I calculated the average (scoring the responses from 4 for "very confident" down to 1 for "not at all confident"), which is shown below:


The red dots indicate midterm elections (of course, those questions omitted the words "for president").  There was a substantial decline between 2004 and 2008--there were two surveys in 2004, with an average of about 3.0, two in 2006, with an average of about 2.85, one in 2007, also at 2.85, and two in October 2008, which averaged about 2.65 (about the same as the average in September 2024).   Why would this have happened?  I would have figured that confidence among Democrats would be low in 2004 because of  memories of 2000, and would rise as more time went by (especially after Democratic success in the 2006 midterms).  On the Republican side, it didn't seem like there was anything that should cause a dramatic change.  That would suggest an increase in overall confidence, not a decline.  
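For concreteness, an average like the ones above is just the category codes weighted by their response shares, assuming the conventional 4-to-1 coding.  The shares in this sketch are invented for illustration, not Gallup's actual numbers:

```python
# Hypothetical illustration of the 1-4 average: weight each response category
# by its share; these shares are made up, not taken from any Gallup release.
codes  = {"very": 4, "somewhat": 3, "not too": 2, "not at all": 1}
shares = {"very": 0.35, "somewhat": 0.45, "not too": 0.12, "not at all": 0.08}
avg = sum(shares[k] * codes[k] for k in codes)
print(round(avg, 2))
```

One advantage of the average over the merged "very or somewhat confident" percentage is that a shift from "very" to "somewhat" moves the average while leaving the merged figure unchanged.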

Breaking it down by party:



Relatively little change from 2004 to 2007, and then a large decline in Republican confidence between December 2007 and October 2008.  I could only get complete data for two surveys after 2008 but they showed further declines among Republicans.  The next figure shows the gap between Democrats and Republicans:


What might have caused the change in 2007-8?  Thinking back, I remembered that there were news stories about fraud in ACORN voter registration drives.  Also, in December 2007 Hillary Clinton was the frontrunner for the Democratic nomination, so it's possible that the decline among Republicans was a reaction to Obama--maybe his race, or his roots in Chicago politics.   The decline in confidence among Republicans meant that confidence was about the same in both parties.  Unfortunately, there don't seem to be any comparable questions before 2000, so we can't say if the lack of partisan difference was a return to normal.    

[Data from the Roper Center for Public Opinion Research]







Wednesday, October 2, 2024

Focused on the future

During the 2016 election campaign, Donald Trump refused to give a definite answer when asked whether he would accept the results if he lost:  as I recall, his usual response was something like "we'll see what happens."   A Fox News survey from late October of that year asked "If your candidate loses the presidential election in November, will you accept that his or her opponent won fair and square and will be the legitimate leader of the country?"  87% of the people who intended to vote for Hillary Clinton (or were leaning towards Clinton) said that they would; only 56% of those who intended to vote for Trump or were leaning towards Trump said that they would (34% said they would not and 10% weren't sure).  But it may be easier to say that you would be a good loser when you don't expect to lose.  The same survey asked "who do you think will win in November":  64% said Clinton, 26% Trump, and 10% weren't sure.  What if we adjust for expectations?  

Nearly all Clinton supporters expected her to win (93%), so it doesn't make much difference on that side:  for what it's worth, 88% of those who expected her to win and 79% of those who weren't sure or thought she would lose said they would accept Trump as the legitimate leader.  Among Trump supporters, 34% expected Clinton to win, 12% weren't sure, and 55% expected Trump to win.  64% of those who expected Clinton to win, 58% of those who weren't sure, and 51% of those who expected Trump to win said they would accept Clinton as the legitimate leader if she won.  That is, the gap in willingness to accept the other candidate as the legitimate leader is even larger when you adjust for expectations by comparing Clinton supporters who expected to win with Trump supporters who expected to win.  

Of course, the "fair and square . . . legitimate leader" question is open to interpretation:  someone might believe that a candidate had really gotten the votes, but had used unfair tactics.  Since 2004, Gallup has asked about confidence that votes "will be accurately cast and counted in this year’s election."  I'll look at that question in my next post.  

[Data from the Roper Center for Public Opinion Research]