Friday, November 15, 2024

All together now?

In the 2020 presidential election, Joe Biden got 51.3% of the vote and Donald Trump got 46.8%, for a Democratic lead of 4.5%; in 2024, Kamala Harris got 48.1% and Trump got 50.1%, a Democratic "lead" of -2.0%.  So on average, Democrats lost ground among voters, but that doesn't mean that they lost across the board:  in certain groups, their vote might have held up better or even increased.  My last post looked at demographic groups; this post will look at states.


The figure shows the Democratic lead (%Democratic-%Republican) in 2024 compared to 2020.*  The line represents a uniform shift against the Democrats.  The correlation between the leads in 2020 and 2024 is .994 (.993 if you omit DC).  But geographical patterns persist over time, so in order to decide if that's a large correlation, you need a standard of comparison:  is it bigger or smaller than the usual correlation between successive elections?  So I looked at presidential elections since 1972.  The 2020-24 correlation is the largest in that period; the previous record was .993, between the 2016 and 2020 leads.  In fact, the correlation between the 2016 and 2024 leads is .984, which is larger than the correlation between any previous successive pair.  That is, the geographical pattern has been very stable in the three elections where Trump has been a candidate:  the Democrats made almost uniform gains in 2020 and suffered almost uniform losses in 2024.
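
For readers who want to reproduce this kind of comparison, here is a minimal sketch in Python.  The file name and column names are hypothetical; it assumes a table with one row per state and the Democratic lead (%Democratic minus %Republican) in each election.

    # Sketch of the state-level comparison.  Assumes a hypothetical file
    # "state_leads.csv" with columns: state, lead_2020, lead_2024.
    import pandas as pd

    leads = pd.read_csv("state_leads.csv")

    # Correlation between the 2020 and 2024 Democratic leads across states
    r_all = leads["lead_2020"].corr(leads["lead_2024"])

    # The same correlation with the District of Columbia omitted
    no_dc = leads[leads["state"] != "District of Columbia"]
    r_no_dc = no_dc["lead_2020"].corr(no_dc["lead_2024"])

    # The reference line in the figure is a uniform shift: every state's
    # 2024 lead equals its 2020 lead minus the national swing.
    national_swing = (leads["lead_2020"] - leads["lead_2024"]).mean()
    print(r_all, r_no_dc, national_swing)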

The degree of stability has increased over time:  1984-88 was the first successive pair to break .9,  1996-2000 was over .95, and 2008-2012 reached .982.  The 2012-16 correlation was lower, at .952, but that was still high by historical standards.  So the basic story in recent elections is one of stability.  Some observers have said that in 2024 the Democrats lost more ground in "blue states" or that their vote held up better in swing states, but I don't see evidence for either of these claims.  There is a statistically significant correlation between state population and 2020-24 change--the Democrats lost more in the bigger states--but this is largely driven by the four biggest states (California, Texas, Florida, and New York), so I'm not sure if there's really anything there.
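
The population question can be checked the same way.  This sketch (again with hypothetical file and column names, adding a population column) simply recomputes the relationship after dropping the four largest states:

    # Is the 2020-24 change in the Democratic lead related to state
    # population, and does it survive dropping the four biggest states?
    import pandas as pd

    leads = pd.read_csv("state_leads.csv")      # hypothetical file, as above
    leads["change"] = leads["lead_2024"] - leads["lead_2020"]

    r_all = leads["population"].corr(leads["change"])

    big_four = ["California", "Texas", "Florida", "New York"]
    rest = leads[~leads["state"].isin(big_four)]
    r_rest = rest["population"].corr(rest["change"])

    print(f"all states: r = {r_all:.3f}; without the big four: r = {r_rest:.3f}")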

In the course of doing this analysis, I noticed something unusual.  For thirteen of the fourteen elections considered, the highest correlation was with the immediately preceding or following election.  The exception was 1972--its correlation with the pattern in 1976 was only .42, which was lower than its correlation with any of the other elections.  Its highest correlation was with 1988 (.87), and it has a substantial correlation (.70) with the pattern in 2024.  So in a sense, 1972 seems to have anticipated future elections in terms of the geographic pattern.  It also anticipated the future in another way:  it was the first election in which college education was associated with Democratic rather than Republican voting.
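
The full set of comparisons behind that observation is just a correlation matrix of the state-level leads.  A sketch, assuming a hypothetical wide file with one column of leads per election year:

    # Correlations among state-level Democratic leads, 1972-2024.  Assumes a
    # hypothetical file "state_leads_wide.csv" with a "state" column and one
    # column of leads per election year ("1972", "1976", ..., "2024").
    import pandas as pd

    wide = pd.read_csv("state_leads_wide.csv", index_col="state")
    corr = wide.corr()   # election-by-election correlation matrix

    # For each election, find the other election whose geographic pattern
    # it most resembles (excluding its correlation with itself).
    for year in corr.columns:
        others = corr[year].drop(year)
        best = others.idxmax()
        print(f"{year}: closest to {best} (r = {others[best]:.3f})")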






*The District of Columbia is not shown because the Democratic lead was so large in both elections.

Saturday, November 9, 2024

Group differences in voting, 2016-2024

 After the 2020 election, I had a post showing how different groups voted in 2016 and 2020, according to Edison exit polls.  This post updates that with information from the 2024 polls as reported by CNN:

                          % For Trump

                       2016      2020      2024

Men                     52%       49%       55%
Women                   41%       43%       45%

White                   57%       57%       57%
Black                    8%       12%       13%
Latino/a                28%       32%       46%
Asian-Am.               27%       31%       39%

White Men               62%       58%       60%
White Women             52%       55%       53%
Black Men               13%       18%       21%
Black Women              4%        8%        7%
Latino                  32%       36%       55%
Latina                  25%       28%       38%

Age 18-29               36%       35%       43%
Age 30-44               52%       55%       48%
Age 45-65               52%       49%       54%
Age 65+                 52%       51%       49%

Urban                   34%       37%       38%
Suburban                49%       48%       51%
Rural                   61%       54%       64%

White coll.             48%       49%       45%
White non-coll.         66%       64%       66%
Non-white coll.         22%       27%       32%
Non-white non-coll.     20%       26%       34%

Under $50,000           41%       42%       50%
$50,000-$99,999         49%       43%       51%
$100,000+               47%       54%       46%

LGBT                    14%       28%       13%
Not LGBT                47%       48%       53%

Veterans                60%       52%       65%
Non-vets                44%       46%       48%

White Evangelical       80%       76%       82%
All others              34%       37%       40%

Although the exit polls have large samples, the way that they are constructed means that the group estimates can still have fairly large errors, so I focus on the ones that showed a trend over the three elections.  The ones for which I see a trend are in boldface.  Some people talked about Trump's gains among "minority" voters in 2020.  I was skeptical then, but now I have to agree that there is something going on.  With black voters, the share is still small enough that there's room for doubt, but Trump definitely made gains among Latin and Asian voters.  They may be following the general path of assimilation previously followed by white ethnic groups like Irish Catholics.

The estimated gender gap among whites was 10% in 2016, 3% in 2020, and 7% in 2024:  that is, despite the Dobbs decision, it didn't change much.  But it did increase among Latins and maybe blacks. Trump has made solid gains among Latinos and black men, smaller gains among Latinas, and smaller and possibly no gains among black women.

Trump made gains with higher income people in 2020, and in 2024 lost ground with them while gaining with low-income people.  That is interesting if it holds up, but overall the differences are small.  Educational differences are large in all elections, and didn't change much.  

 My general assessment is that Trump made gains across the board between 2020 and 2024:  there was not much realignment.  Of course, there's always a good deal of continuity, but I think that the changes between 2016 and 2024 are less striking than the differences between 2012 and 2016.


Friday, November 8, 2024

Inflation and incumbency

 Many people say that inflation was a major cause of the Democratic loss on Tuesday.  But inflation hasn't stopped governing parties from being re-elected in the past.  The figure shows the incumbent party's margin in the popular vote and average inflation over the previous term.


Under Biden, inflation averaged 5.0% a year, and the Democrats trail in the popular vote by 3.0 (47.7% to 50.7%), as I write this (it will probably get a little closer as more votes are counted in California).  In Richard Nixon's first term, inflation averaged 5.0%, and he won in a landslide.  The correlation between average inflation and margin is just -.22.  

What if people consider the trend--is inflation higher or lower than it was in the past?  I tried average inflation in the administration minus average inflation in the previous administration, and found a stronger relationship:  a regression coefficient of -2.25 with a standard error of 1.04.  After a little experimentation, I found an even stronger relationship with inflation in the past year minus the average in the previous administration*:




The correlation is -.65 and the regression coefficient is -2.8 with a standard error of 2.8.  There are two big outliers:  1964 and 1972.  Apart from those, all of the elections are close to the predicted values.  Of course, inflation isn't the only important economic condition.  I added per-capita GDP growth in the first three quarters of the election year (from Ray Fair) and a dummy variable for an incumbent president running.  The estimates (standard errors in parentheses):

Constant              -1.65    (2.92)
Relative inflation    -1.93    (0.79)
GDP growth             1.35    (0.65)
Incumbent              5.07    (3.35)
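
As a rough illustration of how this model could be put together, here is a sketch in Python.  The file and column names are hypothetical: it assumes one row per election with the incumbent party's popular-vote margin, inflation in the election year, average inflation in the current and previous administrations, election-year per-capita GDP growth, and a 0/1 indicator for an incumbent president running.

    # Sketch of the inflation-and-incumbency regression; all names are
    # hypothetical placeholders for the data described in the text.
    import pandas as pd
    import statsmodels.api as sm

    d = pd.read_csv("elections.csv")

    # "Relative inflation": inflation in the past year minus the average
    # over the previous administration.
    d["rel_infl"] = d["infl_last_year"] - d["infl_prev_admin"]

    X = sm.add_constant(d[["rel_infl", "gdp_growth", "incumbent_running"]])
    fit = sm.OLS(d["margin"], X).fit()
    print(fit.params, fit.bse)

    # Counterfactual discussed below: relative inflation is about .92 in
    # 2024, so setting it to zero shifts the predicted margin by roughly
    # -1.93 * 0.92, or about 1.8 points.
    print(fit.params["rel_infl"] * 0.92)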

The current value of relative inflation is .92.  The model suggests that if it were zero (i.e., if inflation in the past year had been the same as average inflation under Trump, which was about 2%), Trump would still lead, but the gap would be only about half as large.

The estimated effect of incumbency is large, and the predicted value would favor the Democrats if Joe Biden had been the candidate.  The standard error is large, meaning that there's a lot of uncertainty about the size of the incumbency advantage (or even whether there is any advantage).  But even if it were smaller, I don't think it should be interpreted as meaning that Biden would have done better than Harris.  A large part of the incumbency advantage comes from the ability to go on TV and speak to the nation--sometimes to try to get support for his policies, but sometimes speaking as leader of the nation rather than leader of a party (e.g., after a natural disaster).  Some presidents have been better at that than others, but they all could do it effectively at least some of the time.  Trump was an exception--it's just not the way he operates.  Biden was also an exception:  he couldn't do it very well, and for much of his term he didn't even try (or if he did, I've forgotten).  Part of that was age, but even in his prime it was a weak point, maybe because he was from a small state where personal relations mattered more than media skills.  There have also been general changes that probably reduce the advantage--the fragmentation of the media means a presidential address doesn't reach as many people, and increased partisanship means it's harder to win them over.


*This is similar to Robert Gordon's "inflation acceleration" and "excess inflation." 

Tuesday, November 5, 2024

Last post before the election

 The polls suggest a very close election, but there is often some systematic polling error--in 2016 and 2020, Republican support was underestimated.  One potentially relevant factor is that there was unusually high turnout in 2020--the rate rose from 60.1% in 2016 to 66% in 2020.  Infrequent voters tend to be less educated, which today means that they tend to vote Republican.  That may have contributed to the underestimation of the Republican vote in 2020.  It seems likely that turnout will be lower this time, which reduces the chance that the polls will underestimate the Republican vote; to the extent that pollsters have tried to correct for past errors by giving more weight to less educated respondents, it also makes them more likely to overestimate the Republican vote.  So if I had to guess, I'd say the error is likely to be in the other direction this time--that Harris will run ahead of the polls.

  The figure shows turnout in 2020 and 2016 by state (from this source):


Turnout increased in every state--the reference line shows a uniform increase.  There are several states that had a relatively large increase, but only one of them is a swing state--Arizona.  

A couple of bonuses: first, an update on questions about confidence in "the wisdom of the American people" in making political decisions or election choices:




Second, an outfit called EON Journals falsely lists me as the editor of one of their journals.  I asked them by e-mail to remove my name, and got no response.  I followed up by sending a letter to the address they listed.  I got it back today, with a notice saying "Return to sender/Attempted--Not known/Unable to forward."  So their mailing address is fake too.

[Some data from the Roper Center for Public Opinion Research]

Thursday, October 31, 2024

Significant at the 32% level

 Nature Human Behaviour recently published an article with the title "Underrepresented minority faculty in the USA face a double standard in promotion and tenure decisions."  They also published a comment on it, which said the authors "find double standards negatively applied to scholars of colour, and especially women of colour, even after accounting for scholarly productivity."  Science published a piece on the article, titled "Racial bias can taint the academic tenure process—at one particular point."  So the study must have found strong evidence, right?

Their key findings involved college-level (e.g., Arts & Sciences, Business, Fine Arts) tenure and promotion committees, where black or Hispanic candidates got more negative votes than White or Asian candidates.  I'll look at the probability of getting a unanimous vote of support, where they found the strongest evidence of a difference.  In a binary logistic regression with several control variables (university, discipline, number of external grants, time in position, whether the decision involves tenure or promotion to full professor), the estimated effect of URM status is -.581 with a standard error of .246.  That gives a t-ratio of -2.36 and a p-value of .018.  If you prefer odds ratios, it's an estimate of .56 and a 95% confidence interval of .35 to .91.  That's reasonably strong evidence by conventional standards.
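
The odds ratio and confidence interval quoted above follow directly from the logit coefficient and its standard error; a quick check:

    # Convert the reported logit coefficient for URM status into an odds
    # ratio and 95% confidence interval (numbers taken from the text).
    import math

    b, se = -0.581, 0.246
    t = b / se                       # about -2.36
    odds_ratio = math.exp(b)         # about .56
    ci = (math.exp(b - 1.96 * se),   # about .35
          math.exp(b + 1.96 * se))   # about .91
    print(t, odds_ratio, ci)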

What about scholarly productivity?  They calculated the "h-index," which is based on citation counts,* and standardized it to have a mean of zero and standard deviation of one.  If you add it as a variable:

            est       se     t-ratio       P
H          .375     .129      2.91       .004
URM       -.301     .299      1.01       .315

Now the estimated effect of being a member of a URM is nowhere near statistical significance.  It is still large (the odds ratio is about .75 and the 95% confidence interval goes as low as .41), so the conclusion shouldn't be that there is little or no discrimination--there's just not enough information in the sample to say much either way.**  

But the authors didn't report the previous analysis--I calculated it from the replication data.  They gave an analysis including the H-index, URM status, and the interaction (product) of those variables:

            est       se     t-ratio       P
H          .330     .128      2.59       .010
URM       -.041     .337      0.12       .903
H*URM     1.414     .492      2.87       .004

That means that H has a different effect for URM and White or Asian (WA) candidates:  for WA it's .33, and for URM it's .33+1.41=1.74.  The URM coefficient gives the estimated effect of URM status at an H-index of zero.  At other values of the H-index, the estimated effect of URM status is -.041+1.41*H.  For example, the 10th percentile of the H-index is about -1, so the estimated effect is about -1.45.  The 90th percentile of the H-index is about 1, so the estimated effect of URM status is about 1.37.  That is, URM faculty with below-average H-indexes have a lower chance of getting unanimous support than WA candidates with the same H-index, but URM faculty with above-average H-indexes have a higher chance.  This is a "double standard" in the sense of URM and WA faculty being treated differently, but not in the sense of URM faculty consistently being treated worse.
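
The numbers in that calculation are just the main effect plus the interaction times the (standardized) H-index; in code:

    # Estimated effect (on the logit scale) of URM status at different
    # values of the standardized H-index, using the coefficients in the
    # interaction model above.
    b_urm, b_interaction = -0.041, 1.414

    def urm_effect(h):
        return b_urm + b_interaction * h

    for h in (-1.0, 0.0, 1.0):   # roughly the 10th, 50th, and 90th percentiles
        print(f"H = {h:+.1f}: estimated effect of URM status = {urm_effect(h):+.2f}")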

The authors describe this as "differential treatment in outcomes is present for URM faculty with a below-average H-index but not for those with an average or above-average H-index."  They suggest an "intriguing question for future research:  do URM faculty with an above average h-index perform better than non-URM faculty with the same h-index?" But the interaction model is symmetrical--what justifies treating the estimate for low h-indexes as a finding and the estimate for higher h-indexes as just an "intriguing possibility"?   You could fit a model in which URM faculty are disadvantaged at lower levels of productivity but there is no difference at moderate or high levels of productivity.  I've done this, and it fits worse than the standard interaction model, although there's not enough information to make a definitive choice between them.  
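
One way to set up that alternative model--this is my own construction, and not necessarily the specification used for the comparison described above--is to let URM status enter only through the below-average part of the H-index, so the URM term is zero for candidates at or above the mean:

    # A possible "disadvantage only at low productivity" specification,
    # with hypothetical column names (urm coded 0/1, h_index standardized,
    # unanimous = 1 if the committee vote was unanimous).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    d = pd.read_csv("tenure_cases.csv")          # hypothetical data file
    d["h_below"] = np.minimum(d["h_index"], 0)   # below-average part of H

    standard = smf.logit("unanimous ~ h_index * urm", data=d).fit()
    kinked = smf.logit("unanimous ~ h_index + urm:h_below", data=d).fit()

    # The two models are not nested, so the comparison is informal
    # (log-likelihood, AIC).
    print(standard.llf, standard.aic)
    print(kinked.llf, kinked.aic)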

The result about the interaction between URM status and h-index is interesting, but doesn't support the claim of a general bias against URM faculty.  So why is this study being hyped as strong evidence of bias?  One obvious factor is that many people believe or want to believe that there is a lot of bias in universities, so they'll seize on anything that seems to support this claim.  A second is that people get confused by interaction effects.  But I think there's a third contributing factor:  Nature and Science come from a tradition of scientific reports:  just give your "findings," not a lot of discussion of how you got there and things you tried along the way.  Journals in sociology, political science, and economics come from a more literary tradition--leisurely essays in which people developed their ideas or reviewed previous work.  This tradition continued even after quantitative research appeared:  articles are longer and have more descriptive statistics and more discussion of alternative models.  If this paper had been submitted to a sociology journal, I'm pretty sure some reviewer would have insisted that if you're going to have an interaction model, you also have to show the model with only the main effects of H-index and URM.  That would have made it clear that the data doesn't provide strong evidence of discrimination.  It also might have led to noticing that there's a lot of missing data for the H-index (about 30% of the cases), which is another source of uncertainty.  



*The largest number H for which you have H articles with H or more citations in Google Scholar.  

**This is not the authors' fault--it's hard to collect this kind of data, and as far as I know they are the first to do so. 


Wednesday, October 23, 2024

Why so close?

 Most recent polls show the presidential race as pretty much neck-and-neck.  Even if the polls have some systematic error, it seems safe to say that the race will be close by historical standards.  How is Donald Trump staying close, despite all his baggage?  I think that the main reason is that most voters remember things as pretty good during the first three years of his administration.  Of course, things changed in the last year, but Trump mostly stepped aside and let state and local governments deal with Covid and the protests after the murder of George Floyd--he offered opinions about what should be done, but didn't make much effort to implement them.  As a result, people give him a pass--if they didn't like what happened, they blame their governor or mayor.  What about his attempt to overturn the results of the 2020 election, culminating in January 6?  Here, the key thing is that Republican elites have stuck with him, saying that there were problems with the election, that Democrats did similar things in the past, or are treating him unfairly now, or are planning to do worse things in the future.  These kinds of arguments don't have to persuade to be effective; they just have to muddy the water, so that voters think of the whole issue as "just politics"--a confusing mass of charges and countercharges.

However, I think that Trump also has a distinct positive--the perception that he says whatever is on his mind.  Since 1988, questions of the form "Do you think that ______ says what he believes most of the time, or does he say what he thinks people want to hear?"* have been asked about presidents and presidential candidates.  The table shows the average percent "says what they believe" minus "thinks people want to hear" and the number of surveys that asked about different figures:

               Avg       N
Trump         19.50      2
McCain         6.29      7
Bradley        5.50      2
GWB            2.00     21
Perot          2.00      2
Obama          0.67     12

Giuliani      -7.00      1
Dukakis      -11.00      1
Dole         -18.00      1
Hillary      -20.40      5
Romney       -21.80      5
Gore         -23.50      8
Kerry        -26.08     12
Bill         -26.63      8
GHWB         -30.00      3

There are substantial differences among them.  Trump has the highest average, and although only two surveys asked about him, the evidence is still pretty strong.  Also, the figures don't just track general popularity--Bill Clinton is the second lowest, even though he was a popular president.  Unfortunately, the question hasn't been asked since 2016, so it's possible that perceptions of Trump have changed (as they did for George W. Bush, who improved over time), but I doubt it.  

This doesn't mean that people believe Trump tells the truth--he has always said a lot of things that are false or ridiculous.  However, most people think that they are pretty good at detecting lies, exaggeration, or just letting off steam.  As a result, they may prefer someone who is unfiltered and frequently untruthful to someone who seems calculated.  As Ezra Klein says, "What makes Trump Trump . . . [is] the manic charisma born of his disinhibition."

[Data from the Roper Center for Public Opinion Research]





*Of course, with "she" rather than "he" for women.  The earliest (1988) versions had slightly different wording.  

Wednesday, October 16, 2024

A new climate?

 In the years before Trump, most Republican politicians tried to avoid taking a clear position on climate change:  they generally said it was complicated and uncertain, and that more research was needed.  Trump, however, has a clear and consistent position:  it's a hoax.  According to the Trump Twitter Archive, his first mention of climate change on Twitter was in 2012:  "Why is @BarackObama wasting over $70 Billion on 'climate change activities?' Will he ever learn?"  One of his most recent (May 2024):  "So bad that FoxNews puts RFK Jr., considered the dumbest member of the Kennedy Clan, on their fairly conservative platform so much. ... He’s a Radical Left Lunatic whose crazy Climate Change views make the Democrat’s Green New Scam look Conservative."  Has this shift affected public opinion?  In 2019, I had a post on two questions that had been asked since the late 1990s, both of which showed a trend towards thinking that climate change was a serious problem.  One of those, "Do you think that global warming will pose a threat to you or your way of life in your lifetime?", has been asked a few times since then.  The updated results: 


The change may have slowed down, but it doesn't seem to have stopped, and definitely hasn't reversed.

I found two additional questions:  one that has been asked in Pew surveys, "I'd like your opinion about some possible international concerns for the United States. Do you think that each of the following is a major threat, a minor threat, or not a threat to the United States?...Global climate change," and a similar one asked by Gallup, "Next, I am going to read you a list of possible threats to the vital interests of the United States in the next 10 years. For each one, please tell me if you see this as a critical threat, an important but not critical threat, or not an important threat at all....Global warming or climate change."  The averages (higher numbers mean greater threat):


Belief that climate change was a threat continued to rise in the Trump administration, but has fallen under Biden.  Why the difference between these questions and "threat to you or your way of life"?  Possibly it's because these refer to international threats and appear in the context of questions about other potential threats.  Greater international turmoil in the Biden administration may have displaced concern about climate change as a threat to American interests (people aren't limited in the number of things that they can call critical threats, but I think there's some tendency to feel like you should make distinctions).

There are other questions on climate change, which I may look at in another post.  But at least through 2020, concern with climate change continued to increase.  This may be because despite Trump's strong feelings on the issue, he didn't highlight it to the extent that he did with immigration, tariffs, and charges of election fraud.


[Data from the Roper Center for Public Opinion Research]