Sunday, December 22, 2024

A tale of two columns, part 1

 In his final column for the New York Times, Paul Krugman talks about a change from when he started writing it in January 2000:    "What strikes me, looking back, is how optimistic many people . . .  were back then and the extent to which that optimism has been replaced by anger and resentment."  He observes that this isn't just dissatisfaction with politics:  "It’s astonishing to look back and see how much more favorably banks were viewed before the financial crisis."  This statement includes a link to Gallup data on confidence in institutions; I've written about them before, but time has passed and more data have accumulated, so this seems like a good occasion to revisit them.

Has there been a general decline in confidence over the 21st century?  The figure shows the average, adjusted for changes in the list of institutions Gallup asked about*:

Opinions in 2000 were more favorable than opinions today, but less positive than they'd been in the 1970s.  On the other hand, they were more favorable than they'd been in the early 1990s, so someone looking back in 2000 might have said that there had been a decline, but we've turned things around and are on the way up.  Now it looks like the increase in the 1990s was a temporary interruption in a long decline.  

What could account for the changes?  One possibility is changes in general outlook:  people may have become less "deferential"--less likely to give institutions the benefit of the doubt and assume that their leaders are competent and well-meaning.   You would expect this to be a gradual change depending mostly on generational replacement:  it couldn't account for the upward movement in the 1990s or the rapid decline in the last few years.  What relevant factors might change over a shorter time period?  One possibility is the performance of institutions:  for example, in a column  that appeared a few days after Krugman's, Bret Stephens said "So many things in American life feel broken. Our public schools, which keep getting more money even as they produce worse outcomes."   Another is partisan politics:  political figures influence public opinion because they get a lot of media coverage and because people know whether they are generally aligned with their point of view.  So if prominent politicians criticize an institution, opinion will become more negative.  In principle, there could be equal and opposite effects:  if Republican politicians say negative things about an institution, Republicans will become more negative but Democrats will become more positive.  However, I think the negative effects will generally be stronger:  people who aren't that interested in politics (that is, most people) will just notice that there's a lot of criticism and figure that where there's smoke, there's fire.  

Of course, all of these factors are hard to measure, so it's hard to judge their relative influence, but a look at confidence in particular institutions may provide some hints.  Here is confidence in "big business."  



I'd regard partisan division as pretty much a constant for big business:  Democrats are always more critical and Republicans more favorable, and it's consistently a leading issue.  So the trend can be taken to represent the gradual shift in general outlook.  The ups and downs seem to correspond pretty well to economic conditions (or at least to perceived economic conditions).  

Next, confidence in the public schools and higher education.  For the public schools, there's a steady decline, with little short-term variation.  Also, it's a stronger downward trend than for big business (-.011 vs. -.006).  Is that because the performance of schools has been declining steadily?  National Assessment of Educational Progress scores improved from the 1970s to 2012--they've declined since then (Stephens links to a report of decline between 2019 and 2023), but are still above the 1970s level.  What about partisan politics?  Although I don't have a measure, I think that partisan divisions have increased.  Schools used to be financed and run primarily at the local level, and both parties were generally favorable to public education.  Over time, Republicans have become more favorable to school choice, local controversies over curricula and libraries have gotten national attention, and the role of federal funding (and regulation) has increased.  So I would attribute the larger decline for schools, compared to big business, to an increased role for partisan politics.  There are only a few years of data for higher education, but the rate of decline is even larger than for the public schools.  Universities are very slow-moving institutions, so it's safe to say that their performance didn't change much between 2015 and 2024.  But I think that partisan controversy over universities has definitely increased.
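The trend numbers quoted above are just slopes from regressing average confidence on calendar year, institution by institution.  A minimal sketch in Python, assuming a long-format file with one row per institution-year (the file and column names are my assumptions, not Gallup's format):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per institution-year, with the
# average confidence score for that institution in that year.
df = pd.read_csv("gallup_confidence.csv")  # assumed columns: year, institution, confidence

# The slope on year is the average change in confidence per year.
for inst in ["Big business", "The public schools"]:
    sub = df[df["institution"] == inst]
    fit = smf.ols("confidence ~ year", data=sub).fit()
    print(inst, round(fit.params["year"], 3))
```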


Gallup asks about a lot of institutions, so I won't consider them all individually.  But there are some exceptions to the general pattern of decline--I'll turn to them in my next post.  

*That is, the year effects from a model in which confidence in a particular institution in a particular year is the sum of a year effect and an institution effect.
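A sketch of how those year effects can be recovered, again assuming a long-format file with hypothetical column names; the year coefficients are the adjusted averages plotted in the figure (up to the choice of baseline year):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gallup_confidence.csv")  # assumed columns: year, institution, confidence

# Two-way additive model: confidence = year effect + institution effect.
# Treating both as categorical adjusts the year effects for changes in
# which institutions were asked about in a given year.
fit = smf.ols("confidence ~ C(year) + C(institution)", data=df).fit()

# Year effects, relative to the baseline (first) year.
print(fit.params.filter(like="C(year)"))
```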

Friday, December 13, 2024

Mood indigo

It's now pretty widely agreed that schools were too slow to return to in-person instruction during the Covid epidemic: "remote learning" usually meant less learning and students suffered from the loss of normal social interaction.   So why didn't the schools go back faster? Some observers hold that cautious  policies were imposed by what Nate Silver calls the "Indigo Blob":  "the merger between formerly nonpartisan institutions like the media, academia and public health . . . and instruments of the Democratic party and progressive advocacy groups."  

There are a couple of problems with this analysis.  One is that general public opinion was not in favor of faster reopening.  In April 2021 an NBC News poll asked people who had children in school "do you believe that your child's school system has been too slow in re-opening, too fast in re-opening, or struck the right balance?"  14% said too slow, 14% too fast, and 70% struck the right balance.   That's an impressively high level of public agreement with policy, which may be because policies responded to local opinion or because people generally have a positive view of their local schools and trusted them to do the right thing.  The second is that opinions on the issue were not closely related to education.  In January 2022, a Fox News survey asked "thinking about the winter school term, do you think your local public schools should reopen fully in-person as usual, open in-person with social distancing and masks, combine in-person and remote learning, or be fully remote":  Compared to white college graduates, white people who didn't have a college degree were more likely to favor full in-person reopening (38% vs. 24%), but also more likely to favor fully remote education (12% to 10%).   So education was a factor, but the differences weren't large compared to race (32% of whites favored fully reopening,  11% favored fully remote; only 6% of blacks favored fully reopening in-person and 30% favored fully remote, and Hispanics were about midway in between).  Age also made a substantial difference:  among people under 35, 18% favored reopening as usual and 21% favored going completely online; among people over 65, 34% favored reopening as usual and only 6% completely online.  Two factors that might have been expected to make a difference but didn't were parent/non-parent status and gender.  

Returning to the question of why schools didn't go back to in-person instruction more quickly, I'd say that it was because decision-makers were generally aligned with public opinion--the idea that children need special protection has a lot of intuitive appeal, so in the presence of uncertainty they were inclined to play it safe.  Of course, there were also large partisan differences (see this post), but I don't think that these appeared because Democrats followed the "Indigo Blob"--it was because they reacted against Trump.

[Data from the Roper Center for Public Opinion Research]


Thursday, December 5, 2024

Now what's the matter with Kansas?

 A few weeks ago, I had a post on the geographical pattern of party support in the 2020 and 2024 elections at the state level.  By historical standards, it was very similar:  that is, the Republicans gained by about the same amount in all states.  But it wasn't exactly the same, and there have been some reports of large shifts at the county level, so in this post I'll take a closer look.*  In 2020, Joe Biden got 51.3% of the vote and Donald Trump got 46.8%, for a Democratic lead of 4.5; in 2024, Kamala Harris got 48.2% and Trump got 49.8%, for a "lead" of -1.6.  That means that the difference in leads is 4.5+1.6=6.1. (You could also call that a swing of 3.05%, but I'll talk about the difference in leads).  At the county level, the mean Republican gain was 3.5.  Since that is smaller than the Republican gain in total votes, that means that Republicans gained more in larger counties.  The twenty counties with the largest Republican gains included three with populations over 1,000,000 (Miami-Dade, The Bronx, and Queens).  In contrast, only one of the twenty counties with the largest Democratic gains had a population of over 250,000, and three of them were under 1,000 (one of them was Loving County, Texas, where the Democratic vote surged from 4 out of 66 in 2020 to 10 out of 97 in 2024).  In a regression of Republican gain on the log of population, the estimated coefficient is .27 with a standard error of .039.  

When controls for the shares of the population that are Latin, Black, and Asian are added, the estimate for log population drops to .02 with a standard error of .04.  The estimates for the Latin and Black shares are positive, with t-ratios of over 10; the estimate for the Asian share is also positive, with a t-ratio of about 2.5.  Finally, if you add indicator variables for the states, the estimate for log of population is -.2 with a standard error of .04; the estimates for the Latin, Black, and Asian shares all remain positive, although the t-ratio for the Asian share drops to 1.8.
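A sketch of the three regressions, assuming a county-level file built from the repository in the footnote and merged with demographic shares; all column names here are my own invention:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: rep_gain (change in the Republican lead, 2020 to
# 2024), population, latin/black/asian (population shares), state.
df = pd.read_csv("county_gains.csv")
df["logpop"] = np.log(df["population"])

m1 = smf.ols("rep_gain ~ logpop", data=df).fit()                           # bivariate
m2 = smf.ols("rep_gain ~ logpop + latin + black + asian", data=df).fit()  # + demographic shares
m3 = smf.ols("rep_gain ~ logpop + latin + black + asian + C(state)",      # + state indicators
             data=df).fit()

# The C(state) coefficients in m3 are the state effects plotted in the
# figure below, with Alabama (alphabetically first) as the zero point.
for m in (m1, m2, m3):
    print(round(m.params["logpop"], 2), round(m.bse["logpop"], 3))
```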

The figure shows the average county-level Republican gain by state without controls on the horizontal axis, and with the controls on the vertical axis (the zero point is Alabama, just because it's alphabetically the first state):

There does seem to be a pattern in the relative shifts:  Massachusetts and New Jersey had two of the largest pro-Republican shifts, and New York and Rhode Island were also pretty large (the data aren't quite complete, and Connecticut is missing).  On the other side, Utah, Colorado, Kansas, and Oklahoma all had relatively small pro-Republican shifts. I'm not sure whether there's anything those states have in common apart from being in the same general part of the country, but it's worth thinking about.  I wonder if some of it is a reversal of the 2012-16 shifts:  that is, a return to the pre-Trump pattern?

*Data are from   https://github.com/tonmcg/US_County_Level_Election_Results_08-24

Monday, December 2, 2024

The three I's

In my view, the 2024 election was primarily a judgment on the Biden administration's record.  There always is a retrospective element in elections, and it was particularly strong this time since Kamala Harris didn't have much time to establish a distinct identity.  The biggest negatives included inflation, international affairs, and immigration.  The impact of inflation is often exaggerated--for example, some observers claim that people were upset because prices hadn't returned to pre-inflation levels--but it was certainly a factor (see this post for an estimate).  On international affairs, it wasn't primarily disagreement with administration policy, but just the fact that the Ukraine and Gaza conflicts were happening under Biden and nothing comparable had happened under Trump.  On immigration, I have some data:  survey questions asking which party will do a better job dealing with immigration.  There are some variations in question wording--for example, some explicitly offer a "no difference" option--but they don't seem to have much effect on the results.  The figure shows the percent saying the Democrats are better minus the percent saying the Republicans are better.

A few of the questions asked about "illegal immigration"--they are the red dots.  Republicans generally do better on the illegal immigration questions, but there aren't enough to say much about the trend.  For the general immigration question, Republicans had a big advantage the first two times it was asked (1996 and early 2002), but the Democrats generally had an advantage from 2005 on.  Their average was +3 under Bush, +1 under Obama, and +8 under Trump.  The general immigration question was asked twelve times during the Trump administration, and the Democrats led every time.  That changed under Biden:  Republicans led all four times the question was asked, and in 2022 and 2023 the Republican lead was about as big as it had been in 1996 and 2002.  Back in 2017, I said that a majority was in favor of letting unauthorized immigrants who were currently here stay and possibly obtain citizenship, but also in favor of stronger efforts to prevent further illegal immigration.  That may help to explain the change under Biden:  a "path to citizenship" had been a major issue under Obama and Bush, but disappeared under Biden since it was clear that Republicans wouldn't cooperate.  That meant that attention was focused on the border, where the Republicans had the advantage.  But that's not the whole story:  in 2022 and 2023, the Democratic disadvantage on "immigration" was bigger than their disadvantage on "illegal immigration" had been ten years earlier.

[Data from the Roper Center for Public Opinion Research]

Sunday, November 24, 2024

Make Trump less terrible again?

There have been a number of questions of the form "what kind of president X will be [has been]:  great, good, average, poor, or terrible."  I calculated a net score for Donald Trump, with great = +2, good = +1, average = 0, poor = -1, and terrible = -2.
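The net score is just a weighted average of the response percentages; a minimal illustration (the percentages here are invented, not from any of the surveys):

```python
# Scoring: great = +2, good = +1, average = 0, poor = -1, terrible = -2.
weights = {"great": 2, "good": 1, "average": 0, "poor": -1, "terrible": -2}

# Invented response distribution (percent) for illustration only.
pcts = {"great": 10, "good": 20, "average": 25, "poor": 15, "terrible": 30}

net = sum(weights[k] * pcts[k] for k in weights) / 100
print(net)  # (20 + 20 + 0 - 15 - 60) / 100 = -0.35
```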


The first time it was asked was in April 2011, when there was some talk of him running for president; the second was in January 2016, when he was a contender but not yet the clear favorite.   It was asked again in March, August, and October 2016, when he was the favorite and then the Republican nominee.   Then there was one in December 2016, after he was elected but before he took office, another in June 2020, and two in early 2021 (Jan/Feb and March).  Finally, there were two during the 2024 campaign.  Basically, there seems to have been a drop as people got to know him during the 2016 campaign, but a pretty steady rise after that point.  January 6 doesn't seem to have hurt him--his score in late January/early February 2021 was a little higher than it had been in June 2020.  But then there is the exceptionally favorable assessment in December 2016--still more negative than positive, but only slightly.  What explains it?

Here is the percent saying that he will be (or was) a great president:


That's a pretty steady increase as he solidified his position among Republicans (although it's worth noting that his highest figure is still below the 23% who rated Barack Obama as great in December 2016).  


Here, the December 2016 survey stands out, with an unusually low number saying that he would be terrible.  Apart from that, it didn't vary much between 2016 and 2024.  

I think that the drop in "terrible" ratings in December 2016 reflects "diffuse support" for the political process:  people were willing to put their doubts aside and give the newly elected president a chance.  Unfortunately no parallel question was asked for Biden immediately after his election in 2020.  Hopefully it will be asked again for Trump in the next month or so.  I would guess that the effect has weakened or disappeared, partly because people know him better, and partly because general support for the process has become weaker:  that is, the share who expect him to be terrible would be about the same as during the campaign.

[Data from the Roper Center for Public Opinion Research]



Thursday, November 21, 2024

Public opinion on immigration: the Trump and Biden years

 Andrew Gelman recently reposted something I wrote on immigration in 2016.  That reminded me that I should update another old post on immigration, which summarized answers to a question on whether immigration should be "kept at its present level, increased, or decreased."  Opinion had shifted towards "decreased" until the mid-1990s, but steadily moved towards "increased" after that.  As of 2016, "decreased" was still more common than "increased," but it was getting close.  Here is the updated figure:



In 2020, the balance was on the side of increased for the first time (34% increased, 36% present level, 28% decreased), but that was followed by four years of shifts against immigration, so opinions are now about where they were in 2002-4.  

On some issues, opinions shift with party control of the presidency.  Sometimes there is a general "thermostatic" movement against the administration's policy, or what people perceive as its policy:  when a Democrat is in office, people shift away from support for spending on social programs; when a Republican is in office, they shift towards more support.  There's no sign of that here.  Sometimes there are shifts that differ by party:  people see conditions as worse when the other party is in power.  That is, a Republican might think that levels of immigration were OK under Trump but out of control under Biden, and shift from "present level" to "decreased."  I got the breakdowns by party for some of the recent surveys:


People of all parties have generally moved in the same direction at the same time.  There was a substantial increase in the gap between parties in 2014-19, but only a slight increase since then.  To put things in another way, during Biden's term both Democrats and Republicans moved towards saying that immigration should be decreased, and the movement was only slightly larger among Republicans.  That leads to a question of what they were reacting to.  Most people don't have much personal experience that would help them in judging levels of immigration; but to the extent that views reflect news coverage, you'd expect growing divergence based on differences in coverage between mainstream and conservative media.  

[Data from the Roper Center for Public Opinion Research]

 


Friday, November 15, 2024

All together now?

In the 2020 presidential election, Joe Biden got 51.3% of the vote and Donald Trump got 46.8%, for a Democratic lead of 4.5%; in 2024, Kamala Harris got 48.1% and Trump got 50.1%, a Democratic "lead" of -2.  So on the average, Democrats lost ground among voters, but that doesn't mean that they lost across the board:  in certain groups, their vote might have held up better or even increased.  My last post looked at demographic groups; this post will look at states.    


The figure shows the Democratic lead (%Democratic-%Republican) in 2024 compared to 2020.*  The line represents a uniform shift against the Democrats.   The correlation between the leads in 2020 and 2024 is .994 (.993 if you omit DC).  But geographical patterns persist over time, so in order to decide if that's a large correlation, you need a standard of comparison:  is it bigger or smaller than the usual correlation between successive elections?  So I looked at presidential elections since 1972.  The 2020-24 correlation is the largest in that period; the previous record was .993, between the 2016 and 2020 leads.  In fact, the correlation between the 2016 and 2024 leads is .984, which is larger than the correlation between any previous successive pair.   That is, the geographical pattern has been very stable in the three elections where Trump has been a candidate:  Democrats made almost uniform gains in 2020 and suffered almost uniform losses in 2024.

The degree of stability has increased over time:  1984-88 was the first successive pair to break .9,  1996-2000 was over .95, and 2008-2012 reached .982.  The 2012-16 correlation was lower, at .952, but that was still high by historical standards.  So the basic story in recent elections is one of stability.  Some observers have said that in 2024 the Democrats lost more ground in "blue states" or that their vote held up better in swing states, but I don't see evidence for either of these claims.  There is a statistically significant correlation between state population and 2020-24 change--the Democrats lost more in the bigger states--but this is largely driven by the four biggest states (California, Texas, Florida, and New York), so I'm not sure if there's really anything there.
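A sketch of the correlation comparisons, assuming a wide file of Democratic leads with one row per state and one column per election year (my own layout and names, not the original data's):

```python
import pandas as pd

# Hypothetical wide file: rows are states (plus DC), columns are election
# years, cells are the Democratic lead (%D - %R) in that state.
leads = pd.read_csv("state_leads.csv", index_col="state")

# Correlation between each successive pair of elections since 1972.
years = sorted(int(c) for c in leads.columns)
for y0, y1 in zip(years, years[1:]):
    print(f"{y0}-{y1}: r = {leads[str(y0)].corr(leads[str(y1)]):.3f}")

# The 2020-24 correlation with DC dropped.
no_dc = leads.drop("District of Columbia", errors="ignore")
print(no_dc["2020"].corr(no_dc["2024"]))
```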

In the course of doing this analysis, I noticed something unusual.  For thirteen of the fourteen elections considered, the highest correlation was with the immediately preceding or following election.  The exception was 1972--its correlation with the pattern in 1976 was only .42, which was lower than its correlation with any of the other elections.   Its highest correlation was with 1988 (.87) and it has a substantial correlation (.70) with the pattern in 2024.  So in a sense, 1972 seems to have anticipated future elections in terms of the geographic pattern.   It also anticipated the future in another way:  it was the first election in which college education was associated with Democratic rather than Republican voting.






*The District of Columbia is not shown because the Democratic lead was so large in both elections.

Saturday, November 9, 2024

Group differences in voting, 2016-2024

After the 2020 election, I had a post showing how different groups voted in 2016 and 2020, according to Edison exit polls.  This post updates that with information from the 2024 polls as reported by CNN:

                     % For Trump

                   2016    2020    2024

Men                 52%     49%     55%
Women               41%     43%     45%

White               57%     57%     57%
Black                8%     12%     13%
Latino/a            28%     32%     46%
Asian-Am.           27%     31%     39%

White men           62%     58%     60%
White women         52%     55%     53%
Black men           13%     18%     21%
Black women          4%      8%      7%
Latino              32%     36%     55%
Latina              25%     28%     38%

Age 18-29           36%     35%     43%
    30-44           52%     55%     48%
    45-64           52%     49%     54%
    65+             52%     51%     49%

Urban               34%     37%     38%
Suburban            49%     48%     51%
Rural               61%     54%     64%

White coll.         48%     49%     45%
White non-coll.     66%     64%     66%
Non-W coll.         22%     27%     32%
Non-W non-coll.     20%     26%     34%

Under $50,000       41%     42%     50%
$50,000-99,999      49%     43%     51%
$100,000+           47%     54%     46%

LGBT                14%     28%     13%
Not LGBT            47%     48%     53%

Veterans            60%     52%     65%
Non-vets            44%     46%     48%

W. Evangelical      80%     76%     82%
All others          34%     37%     40%

Although the exit polls have large samples, the way that they are constructed means that the group estimates can still have fairly large errors, so I focus on the ones that showed a trend over the three elections.   The ones for which I see a trend are in boldface.  Some people talked about Trump's gains among "minority" voters in 2020.  I was skeptical then, but now I have to agree that there is something going on.  With black voters, the share is still small enough that there's room for doubt, but Trump definitely made gains among Latin and Asian voters.  They may be following the general path of assimilation previously followed by white ethnic groups like Irish Catholics.

The estimated gender gap among whites was 10% in 2016, 3% in 2020, and 7% in 2024:  that is, despite the Dobbs decision, it didn't change much.  But it did increase among Latins and maybe blacks. Trump has made solid gains among Latinos and black men, smaller gains among Latinas, and smaller and possibly no gains among black women.

Trump made gains with higher income people in 2020, and in 2024 lost ground with them while gaining with low-income people.  That is interesting if it holds up, but overall the differences are small.  Educational differences are large in all elections, and didn't change much.  

 My general assessment is that Trump made gains across the board between 2020 and 2024:  there was not much realignment.  Of course, there's always a good deal of continuity, but I think that the changes between 2016 and 2024 are less striking than the differences between 2012 and 2016.


Friday, November 8, 2024

Inflation and incumbency

 Many people say that inflation was a major cause of the Democratic loss on Tuesday.  But inflation hasn't stopped governing parties from being re-elected in the past.  The figure shows margin in the popular vote and inflation in the previous term.


Under Biden, inflation averaged 5.0% a year, and the Democrats trail in the popular vote by 3.0 (47.7% to 50.7%), as I write this (it will probably get a little closer as more votes are counted in California).  In Richard Nixon's first term, inflation averaged 5.0%, and he won in a landslide.  The correlation between average inflation and margin is just -.22.  

What if people consider the trend--is inflation higher or lower than it was in the past?  I tried average inflation in the administration minus average inflation in the previous administration, and found a stronger relationship:  a regression coefficient of -2.25 with a standard error of 1.04.  After a little experimentation I found an even stronger relationship with inflation in the past year minus average in the previous administration*:  




The correlation is -.65 and the regression coefficient is -2.8 with a standard error of 1.0.  There are two big outliers:  1964 and 1972.  Apart from that, all of the elections are close to the predicted values.  Of course, inflation isn't the only important economic condition.  I added per-capita GDP growth in the first three quarters of the election year (from Ray Fair) and a dummy variable for an incumbent president running.  The estimates:

Constant         -1.65    (2.92)
Relative inf.    -1.93    (0.79)
GDP               1.35    (0.65)
Incumbent         5.07    (3.35)

The current value of relative inflation is .92.  The model suggests that if it were zero (i.e., the same as inflation under Trump, which averaged about 2%), Trump would still lead, but the gap would be only about half as large.
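For the record, the arithmetic behind that counterfactual, using the coefficient quoted above (a back-of-the-envelope sketch, not a re-fit of the model):

```python
# Coefficient on relative inflation from the table above.
b_relinf = -1.93

relinf_now = 0.92       # inflation in the past year minus the Trump-era average
observed_margin = -3.0  # Democratic popular-vote margin as of this writing

# Setting relative inflation to zero removes its estimated contribution,
# shifting the predicted margin by -b_relinf * 0.92, about +1.8 points.
counterfactual = observed_margin - b_relinf * relinf_now
print(round(counterfactual, 1))  # about -1.2: still a Republican lead, roughly half as large
```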

The estimated effect of incumbency is large and the predicted value would favor the Democrats if Joe Biden had been the candidate.  The standard error is large, meaning that there's a lot of uncertainty about the size of the incumbency advantage (or even if there is any advantage).  But even if it were smaller, I don't think it should be interpreted as meaning that Biden would have done better than Harris.  A large part of the incumbency advantage comes from the ability to go on TV and speak to the nation--sometimes to try to get support for their policies, but sometimes speaking as leader of the nation rather than leader of a party (e. g., after a natural disaster).  Some presidents have been better at that than others, but they all could do it effectively at least some of the time.  Trump was an exception--it's just not the way he operates.  Biden was also an exception:  he couldn't do it very well, and for much of his term he didn't even try (or if he did, I've forgotten).  Part of that was age, but even in his prime it was a weak point, maybe because he was from a small state where personal relations mattered more than media skills. There have also been general changes that probably reduce the advantage--the fragmentation of the media means a presidential address doesn't reach as many people, and increased partisanship means it's harder to win them over.   


*This is similar to Robert Gordon's "inflation acceleration" and "excess inflation." 

Tuesday, November 5, 2024

Last post before the election

The polls suggest a very close election, but there is often some systematic polling error--in 2016 and 2020 Republican support was underestimated.  One potentially relevant factor is that there was unusually high turnout in 2020--the rate rose from 60.1% in 2016 to 66% in 2020.  Infrequent voters tend to be less educated, which today means that they tend to vote Republican.  That may have contributed to the underestimation of Republican vote in 2020.  It seems likely that turnout will be lower this time, which reduces the chance that the polls will underestimate the Republican vote and, to the extent that they have tried to correct by giving more weight to less educated respondents, makes them more likely to overestimate it.  So if I had to guess, I'd say the error is likely to be in the other direction this time--that Harris will run ahead of the polls.

  The figure shows turnout in 2020 and 2016 by state (from this source):


Turnout increased in every state--the reference line shows a uniform increase.  There are several states that had a relatively large increase, but only one of them is a swing state--Arizona.  

A couple of bonuses: first, an update on questions about confidence in "the wisdom of the American people" in making political decisions or election choices:




Second, an outfit called EON Journals falsely lists me as the editor of one of their journals.  I asked them by e-mail to remove my name, and got no response.  I followed up by sending a letter to the address they listed.  I got it back today, with a notice saying "Return to sender/Attempted--Not known/Unable to forward."  So their mailing address is fake too.

[Some data from the Roper Center for Public Opinion Research]

Thursday, October 31, 2024

Significant at the 32% level

Nature Human Behaviour recently published an article with the title "Underrepresented minority faculty in the USA face a double standard in promotion and tenure decisions."  They also published a comment on it, which said the authors "find double standards negatively applied to scholars of colour, and especially women of colour, even after accounting for scholarly productivity."  Science published a piece on the article, titled "Racial bias can taint the academic tenure process—at one particular point."  So the study must have found strong evidence, right?

Their key findings involved college-level (e. g., Arts & Sciences, Business, Fine Arts) tenure and promotion committees, where black or Hispanic candidates got more negative votes than White or Asian candidates.  I'll look at the probability of getting a unanimous vote of support, where they found the strongest evidence of difference.  In a binary logistic regression with several control variables (university, discipline, number of external grants, time in position, whether the decision involves tenure or promotion to full professor), the estimated effect of URM status is -.581 with a standard error of .246.  That gives a t-ratio of -2.36 and a p-value of .018.  If you prefer odds ratios, it's an estimate of .56 and a 95% confidence interval of .35 to .91.  That's reasonably strong evidence by conventional standards.  
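The odds-ratio figures follow directly from the logit estimate and its standard error; a quick check:

```python
import numpy as np

est, se = -0.581, 0.246  # logit coefficient for URM status, from the paper

print(est / se)                                    # t-ratio, about -2.36
print(np.exp(est))                                 # odds ratio, about 0.56
print(np.exp([est - 1.96 * se, est + 1.96 * se]))  # 95% CI, about (0.35, 0.91)
```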

What about scholarly productivity?  They calculated the "h-index," which is based on citation counts,* and standardized it to have a mean of zero and standard deviation of one.  If you add it as a variable:

         est      se     t-ratio      P
H       .375    .129      2.91     .004
URM    -.301    .299     -1.01     .315

Now the estimated effect of being a member of a URM is nowhere near statistical significance.  It is still large (the odds ratio is about .75 and the 95% confidence interval goes as low as .41), so the conclusion shouldn't be that there is little or no discrimination--there's just not enough information in the sample to say much either way.**  

But the authors didn't report the previous analysis--I calculated it from the replication data.  They gave an analysis including the H-index, URM status, and the interaction (product) of those variables:

           est      se     t-ratio      P
H         .330    .128      2.59     .010
URM      -.041    .337     -0.12     .903
H*URM    1.414    .492      2.87     .004

That means that H has a different effect for URM and White or Asian (WA) candidates:  for WA, it's .33 and for URM it's .33+1.41=1.74.  The URM coefficient gives the estimated effect of URM status at an H-index of zero.  At other values of the H-index, the estimated effect of URM status is -.041+1.414*H.  For example, the 10th percentile of the H-index is about -1, so the estimated effect is about -1.46.  The 90th percentile of the H-index is about 1, so the estimated effect of URM status is about 1.37.  That is, URM faculty with below-average H-indexes have a lower chance of getting unanimous support compared to WA candidates with the same H-index, but URM faculty with above-average H-indexes have a higher chance.  This is a "double standard" in the sense of URM and WA faculty being treated differently, but not in the sense of URM faculty consistently being treated worse.
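A quick sketch of that calculation from the table's point estimates (standard errors for these combinations would need the coefficient covariances, which aren't shown here):

```python
# From the interaction model: the URM coefficient is the effect at H = 0,
# and the H*URM term shifts it by 1.414 per unit of (standardized) H.
b_urm, b_inter = -0.041, 1.414

for h in (-1.0, 0.0, 1.0):  # roughly the 10th, 50th, and 90th percentiles
    print(f"H = {h:+.0f}: estimated URM effect = {b_urm + b_inter * h:+.2f} (log-odds)")
```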

The authors describe this as "differential treatment in outcomes is present for URM faculty with a below-average H-index but not for those with an average or above-average H-index."  They suggest an "intriguing question for future research:  do URM faculty with an above average h-index perform better than non-URM faculty with the same h-index?" But the interaction model is symmetrical--what justifies treating the estimate for low h-indexes as a finding and the estimate for higher h-indexes as just an "intriguing possibility"?   You could fit a model in which URM faculty are disadvantaged at lower levels of productivity but there is no difference at moderate or high levels of productivity.  I've done this, and it fits worse than the standard interaction model, although there's not enough information to make a definitive choice between them.  

The result about the interaction between URM status and h-index is interesting, but doesn't support the claim of a general bias against URM faculty.  So why is this study being hyped as strong evidence of bias?  One obvious factor is that many people believe or want to believe that there is a lot of bias in universities, so they'll seize on anything that seems to support this claim.  A second is that people get confused by interaction effects.  But I think there's a third contributing factor:  Nature and Science come from a tradition of scientific reports:  just give your "findings," not a lot of discussion of how you got there and things you tried along the way.  Journals in sociology, political science, and economics come from a more literary tradition--leisurely essays in which people developed their ideas or reviewed previous work.  This tradition continued even after quantitative research appeared:  articles are longer and have more descriptive statistics and more discussion of alternative models.  If this paper had been submitted to a sociology journal, I'm pretty sure some reviewer would have insisted that if you're going to have an interaction model, you also have to show the model with only the main effects of H-index and URM.  That would have made it clear that the data doesn't provide strong evidence of discrimination.  It also might have led to noticing that there's a lot of missing data for the H-index (about 30% of the cases), which is another source of uncertainty.  



*The largest number H for which you have H articles with H or more citations in Google Scholar.  

**This is not the authors' fault--it's hard to collect this kind of data, and as far as I know they are the first to do so. 


Wednesday, October 23, 2024

Why so close?

Most recent polls show the presidential race as pretty much neck-and-neck.  Even if the polls have some systematic error, it seems safe to say that the race will be close by historical standards.  How is Donald Trump staying close, despite all his baggage?  I think that the main reason is that most voters remember things as pretty good during the first three years of his administration.  Of course, things changed in the last year, but Trump mostly stepped aside and let state and local governments deal with Covid and the protests after the murder of George Floyd--he offered opinions about what should be done, but didn't make much effort to implement them.  As a result, people give him a pass--if they didn't like what happened, they blame their governor or mayor.  What about his attempt to overturn the results of the 2020 election, culminating in January 6?  Here, the key thing is that Republican elites have stuck with him, saying that there were problems with the election, that Democrats did similar things in the past, or are treating him unfairly now, or are planning to do worse things in the future.  These kinds of arguments don't have to persuade to be effective, they just have to muddy the water, so that voters think of the whole issue as "just politics"--a confusing mass of charges and countercharges.

However, I think that Trump also has a distinct positive--the perception that he says whatever is on his mind.  Since 1988, questions of the form "Do you think that ______ says what he believes most of the time, or does he say what he thinks people want to hear?"* have been asked about presidents and presidential candidates.  The table shows the average percent "says what they believe" minus "thinks people want to hear" and the number of surveys that asked about different figures:

            Avg       N
Trump      19.50      2
McCain      6.29      7
Bradley     5.50      2
GWB         2.00     21
Perot       2.00      2
Obama       0.67     12

Giuliani   -7.00      1
Dukakis   -11.00      1
Dole      -18.00      1
Hillary   -20.40      5
Romney    -21.80      5
Gore      -23.50      8
Kerry     -26.08     12
Bill      -26.63      8
GHWB      -30.00      3

There are substantial differences among them.  Trump has the highest average, and although only two surveys asked about him, the evidence is still pretty strong.  Also, the figures don't just track general popularity--Bill Clinton is the second lowest, even though he was a popular president.  Unfortunately, the question hasn't been asked since 2016, so it's possible that perceptions of Trump have changed (as they did for George W. Bush, who improved over time), but I doubt it.  

This doesn't mean that people believe Trump tells the truth--he has always said a lot of things that are false or ridiculous.  However, most people think that they are pretty good at detecting lies, exaggeration, or just letting off steam.  As a result, they may prefer someone who is unfiltered and frequently untruthful to someone who seems calculated.  As Ezra Klein says, "What makes Trump Trump . . . [is] the manic charisma born of his disinhibition."

[Data from the Roper Center for Public Opinion Research]





*Of course, with "she" rather than "he" for women.  The earliest (1988) versions had slightly different wording.  

Wednesday, October 16, 2024

A new climate?

 In the years before Trump, most Republican politicians tried to avoid taking a clear position on climate change:  they generally said it was complicated and uncertain, and that more research was needed.  Trump, however, has a clear and consistent position:  it's a hoax.  According to the Trump Twitter Archive, his first mention of climate change on Twitter was in 2012:  "Why is @BarackObama wasting over $70 Billion on 'climate change activities?' Will he ever learn?"  One of his most recent (May 2024):  "So bad that FoxNews puts RFK Jr., considered the dumbest member of the Kennedy Clan, on their fairly conservative platform so much. ... He’s a Radical Left Lunatic whose crazy Climate Change views make the Democrat’s Green New Scam look Conservative."  Has this shift affected public opinion?  In 2019, I had a post on two questions that had been asked since the late 1990s, both of which showed a trend towards thinking that climate change was a serious problem.  One of those, "Do you think that global warming will pose a threat to you or your way of life in your lifetime?", has been asked a few times since then.  The updated results: 


The change may have slowed down, but it doesn't seem to have stopped, and definitely hasn't reversed.

I found two additional questions:  one that has been asked in Pew surveys, "I'd like your opinion about some possible international concerns for the United States. Do you think that each of the following is a major threat, a minor threat, or not a threat to the United States?...Global climate change," and a similar one by Gallup "Next, I am going to read you a list of possible threats to the vital interests of the United States in the next 10 years. For each one, please tell me if you see this as a critical threat, an important but not critical threat, or not an important threat at all.)...Global warming or climate change."  The averages (higher numbers mean greater threat):


Belief that climate change was a threat continued to rise in the Trump administration, but has fallen under Biden.  Why the difference between these questions and "threat to you or your way of life"?  Possibly it's because these refer to international threats and appear in the context of questions about other potential threats.  Greater international turmoil in the Biden administration may have displaced concern about climate change as a threat to American interests (people aren't limited in the number of things that they can call critical threats, but I think there's some tendency to feel like you should make distinctions).

There are other questions on climate change, which I may look at in another post.  But at least through 2020, concern with climate change continued to increase.  This may be because despite Trump's strong feelings on the issue, he didn't highlight it to the extent that he did with immigration, tariffs, and charges of election fraud.


[Data from the Roper Center for Public Opinion Research]

Thursday, October 10, 2024

Bonus

About a month ago, I discovered that I am listed as the editor of the "EON International Journal of Arts, Humanities & Social Sciences."  In fact, I had never even heard of this journal, so I sent an e-mail telling them to remove my name.  I never heard back and am still listed as editor, so I decided to take further action.  They give their mailing address as:

 2055, Limestone Rd Ste 200C, Zip Code 19808 Wilmington,
Delaware, USA

I suspect that they are actually based outside the United States, since they clearly aren't familiar with American conventions for writing addresses, but Google Maps shows a building at 2055 Limestone Rd, so I wrote to them.  The concluding sentences in my letter are:

"Falsely claiming that I am the editor of a predatory journal is defamatory.  If you do not remove my name from the listing by October 15, I will consult with my attorney about the possibility of legal action."

Let's see if that has any effect.



Focused on the future, part 3

My last two posts were about answers to a question on confidence that "votes will be accurately cast and counted" in elections, which has been asked a number of times since 2004.  As far as I know, there were no comparable questions before then.  However, a question on "dishonesty in the voting or counting of votes in your district" was asked in 1959 and 1964, and since 2004 there have been several "accurately cast and counted" questions that specified "at the facility where you vote."  I showed the overall results in a previous post, and will look at party differences in this one.  There's a general tendency for people to be more positive about things that are closer to them, but my question is whether partisan differences in views on local elections might track partisan differences in views on national elections.  Here is average confidence in "the facility where you vote" by party:


It has declined for all groups, although the decline seems smaller for Democrats.  Independents are the least confident, which is probably because they tend to be more suspicious of politics in general.  Comparing confidence in national and local elections for each partisan group (red is local, blue is national):






The changes aren't parallel:  for Democrats and Independents, the gap between confidence in the national and local vote has become smaller; for Republicans, it's become bigger.  The results for Republicans aren't surprising, since their claims of fraud have focused on heavily Democratic places, like Philadelphia, Detroit, and Atlanta.  The general tendency seems to be for confidence in local voting to vary less than confidence in national voting.

[Data from the Roper Center for Public Opinion Research]

Monday, October 7, 2024

Focused on the future, part 2

In 2004, Gallup asked "How confident are you that, across the country, the votes for president will be accurately cast and counted in this year’s election – very confident, somewhat confident, not too confident or not at all confident?"  They have repeated the question a number of times, most recently just two weeks ago.  Their report says that the overall level of confidence has stayed about the same since 2008, but with a growing partisan division--Democrats becoming more confident and Republicans less confident.  The report merged "very confident" and "somewhat confident," which obscures a potentially important distinction, so I calculated the average, which is shown below:


The red dots indicate midterm elections (of course, those questions omitted the words "for president").  There was a substantial decline between 2004 and 2008--there were two surveys in 2004, with an average of about 3.0, two in 2006, with an average of about 2.85, one in 2007, also at 2.85, and two in October 2008, which averaged about 2.65 (about the same as the average in September 2024).   Why would this have happened?  I would have figured that confidence among Democrats would be low in 2004 because of  memories of 2000, and would rise as more time went by (especially after Democratic success in the 2006 midterms).  On the Republican side, it didn't seem like there was anything that should cause a dramatic change.  That would suggest an increase in overall confidence, not a decline.  
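A note on the calculation: I take the average to be computed by coding the four responses from 4 (very confident) down to 1 (not at all confident); the coding and the illustrative percentages below are my assumptions:

```python
coding = {"very": 4, "somewhat": 3, "not too": 2, "not at all": 1}

# Invented response distribution (percent) for illustration only.
pcts = {"very": 30, "somewhat": 45, "not too": 15, "not at all": 10}

avg = sum(coding[k] * pcts[k] for k in coding) / 100
print(avg)  # (120 + 135 + 30 + 10) / 100 = 2.95
```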

Breaking it down by party:



Relatively little change from 2004 to 2007, and then a large decline in Republican confidence between December 2007 and October 2008.  I could only get complete data for two surveys after 2008 but they showed further declines among Republicans.  The next figure shows the gap between Democrats and Republicans:


What might have caused the change in 2007-8?  Thinking back, I remembered that there were news stories about fraud in ACORN voter registration drives.  Also, in December 2007 Hillary Clinton was the frontrunner for the Democratic nomination, so it's possible that the decline among Republicans was a reaction to Obama--maybe his race, or his roots in Chicago politics.   The decline in confidence among Republicans meant that confidence was about the same in both parties.  Unfortunately, there don't seem to be any comparable questions before 2000, so we can't say if the lack of partisan difference was a return to normal.    

[Data from the Roper Center for Public Opinion Research]







Wednesday, October 2, 2024

Focused on the future

During the 2016 election campaign, Donald Trump refused to give a definite answer when asked whether he would accept the results if he lost:  as I recall, his usual response was something like "we'll see what happens."   A Fox News survey from late October of that year asked "If your candidate loses the presidential election in November, will you accept that his or her opponent won fair and square and will be the legitimate leader of the country?"  87% of the people who intended to vote for Hillary Clinton (or were leaning towards Clinton) said that they would; only 56% of those who intended to vote for Trump or were leaning towards Trump said that they would (34% said they would not and 10% weren't sure).  But it may be easier to say that you would be a good loser when you don't expect to lose.  The same survey asked "who do you think will win in November":  64% said Clinton, 26% Trump, and 10% weren't sure.  What if we adjust for expectations?  

Nearly all Clinton supporters expected her to win (93%), so it doesn't make much difference on that side:  for what it's worth, 88% of those who expected her to win and 79% of those who weren't sure or thought she would lose said they would accept Trump as the legitimate leader.  Among Trump supporters, 34% expected Clinton to win, 12% weren't sure, and 55% expected Trump to win.  64% of those who expected Clinton to win, 58% of those who weren't sure, and 51% of those who expected Trump to win said they would accept Clinton as the legitimate leader if she won.  That is, the gap in willingness to accept the other candidate as the legitimate leader is even larger when you adjust for expectations by comparing Clinton supporters who expected to win with Trump supporters who expected to win.  

Of course, the "fair and square . . . legitimate leader" question is open to interpretation:  someone might believe that a candidate had really gotten the votes, but had used unfair tactics.  Since 2004, Gallup has asked about confidence that votes "will be accurately cast and counted in this year’s election."  I'll look at that question in my next post.  

[Data from the Roper Center for Public Opinion Research]

Monday, September 23, 2024

Back to normal, part 2

My last post suggested that the central result of a paper published in the American Economic Review was sensitive to the specification of the model:  specifically, that the evidence was weaker (and would just scrape in at "significant at the 10% level") with a negative binomial model rather than the models they fit:  a least-squares regression on the log of a ratio and a Poisson regression.  The negative binomial fits substantially better than the Poisson; although they can't be compared directly, there are several reasons to prefer the negative binomial over the least-squares regression (I won't go into them here).  The AER has a rigorous review process and the acknowledgments thank sixteen people by name, plus "other participants at numerous seminars for many constructive comments"--why didn't someone suggest (or insist) that they try a negative binomial regression?  My ideas:

1.  A tendency to put too much faith in a combination of robust standard errors and "large" sample sizes at the expense of trying to find the right model, or something close to the right model.

2.   Taking the number of cases at face value.  The analysis includes about 35,000 municipalities, but many of them are very small:  80% are under 1,000.  On the average, there is about one collaborator per 1,000 people, so small villages (that is, most of them) generally don't provide much information.  Moreover, the analysis included a control for a larger geographical unit, department.  There were 95 of those, but in about half of them, every (or almost every) municipality had the same assignment in terms of service under  Pétain.  Those departments provide no information on the central question.  So you could regard the data as a (roughly) 50 by two table:  about 50 departments where troops from some municipalities served under  Pétain and others didn't.   You would lose something by analyzing it that way--the ability to adjust for other qualities of the municipalities.  But you would also gain something:  it would be easier to notice outliers or influential cases, and perhaps some unanticipated geographical patterns.

Tuesday, September 10, 2024

Back to normal

 This is a return to my usual kind of subject, although I may give an update on my adventures with predatory publishing in a future post.  

A few weeks ago, Andrew Gelman posted about a paper by Julia Cagé, Anna Dagorret, Pauline Grosjean, and Saumitra Jha that was published in the American Economic Review last year.  The paper argued that the experience of fighting in the battle of Verdun under Marshal Pétain created a sense of attachment, so that when Pétain turned to the extreme right and later headed the Vichy France regime, the municipalities that had supplied his troops (people from the same place generally served in the same unit) produced more collaborators.  Some critics had raised objections involving data quality, especially the list of collaborators, but I'll leave that aside and take the data as it is.

Elite leadership is important and frequently overlooked as an influence on public opinion, the authors seemed to have put a lot of effort into compiling and checking the data, the general method of analysis was appropriate, and there were a variety of robustness checks, so I was inclined to accept their conclusions.  But there were a few things that I wondered about.  They had two analyses, one a least squares regression with the log of collaborators per capita as the dependent variable, and the other a Poisson regression with the number of collaborators as the dependent variable (and including the log of the population as an independent variable).  In the first, the estimate for service with Pétain was .067 with a standard error of .018; in the second, the estimate was .190 with a standard error of .109.  They treated the first one as primary and described the second as showing that their "results were robust to Poisson estimation," but they didn't seem all that robust to me.  The Poisson estimate was almost three times as big, but the standard error was six times as big, so the 95% confidence interval went from -.024 to .404, or about -2.5% to +50%.   Also, the Poisson distribution applies when you count the number of events across a large number of independent cases, each with a small probability of experiencing the event.  But people in a town generally know and influence other people in the town, so one collaborator may recruit others, making the counts likely to be "overdispersed" relative to what the Poisson distribution allows.  In this situation, the negative binomial distribution is appropriate, so I wanted to try it--maybe it would produce results more like those of the least squares regression.  I downloaded the replication data and reproduced their results and then fit a negative binomial regression.  The estimates for service with Pétain:

          LS      Poisson    Negbin
est      .067      .190       .089
(se)    (.015)    (.014)     (.053)

The negative binomial regression fit much better than the Poisson regression.  The estimate was similar to that from the least squares regression, but the standard error was much bigger, and the 95% confidence interval is -.015 to .203.  Also, I show the ordinary standard errors--the robust, clustered standard errors that Cagé et al. used would be larger.  So there is only weak evidence, at best, that service under Pétain increased the number of collaborators.* 
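For anyone who wants to replicate this kind of comparison, a sketch with statsmodels; the file and column names are mine, not the replication package's, and the paper's full set of controls and clustered standard errors are omitted:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: collaborators (count), population, petain
# (indicator for troops serving under Pétain at Verdun).
df = pd.read_csv("verdun_municipalities.csv")
df["logpop"] = np.log(df["population"])

# Least squares on log collaborators per 1,000; adding 1 to the count
# avoids log(0) (the paper's exact transformation may differ).
df["log_rate"] = np.log((df["collaborators"] + 1) / df["population"] * 1000)
ls = smf.ols("log_rate ~ petain + logpop", data=df).fit()

# Poisson regression on the raw count, with log population as a control.
pois = smf.poisson("collaborators ~ petain + logpop", data=df).fit()

# Negative binomial adds a dispersion parameter, allowing the variance
# to exceed the mean, as it will if collaborators cluster within towns.
negbin = smf.negativebinomial("collaborators ~ petain + logpop", data=df).fit()

for name, m in [("LS", ls), ("Poisson", pois), ("Negbin", negbin)]:
    print(name, round(m.params["petain"], 3), round(m.bse["petain"], 3))
```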

In my next post, I'll discuss the more general implications of this analysis.  

*They also had results suggesting that service with Pétain affected electoral support for extreme right parties in the 1930s, and the points I've raised here don't apply to that analysis.

Friday, September 6, 2024

It ain't me

There is a journal called the EON International Journal of Arts, Humanities & Social Sciences.  I recently discovered that I am listed as the Editor.  I am not the editor--I had never even heard of this journal before, and would have declined if they had asked me to be involved, since it looks pretty sketchy.  I have written to the publisher telling them to remove my name from their site but also wanted to announce it publicly just in case anyone has noticed.


Wednesday, September 4, 2024

Those were different times

From the New York Times:  "[Danzy] Senna, 53, was born in Boston, the daughter of a white, patrician mother . . .  and an African American father. Her parents . . .  were in the first cohort of interracial couples who could legally marry in the United States."  Hold on a minute--in 1967, the Supreme Court ruled that state laws prohibiting interracial marriage violated the Constitution, but only a minority of states (all Southern or border states) had such laws.  Some states had laws against interracial marriage until the 1950s and 1960s, and in those it would be reasonable to speak of the "first cohort" of interracial couples, but Massachusetts had repealed its prohibition on interracial marriage in 1843.  It wasn't the first in that respect--five of the thirteen original states (New York, New Jersey, Pennsylvania, Connecticut, and New Hampshire) never had laws against interracial marriage.  So although interracial marriages were rare, they've been around since the beginning of the United States.  The Times wasn't the only one to get this wrong--Senna's Wikipedia biography says that her parents "married in 1968, the year after interracial marriage became legal," and cites a Canadian Broadcasting Corporation article, which says her parents "wed a year after interracial marriage became legal."  Why would multiple sources make this mistake?  It's not hard to find the information on differences in state laws (the Wikipedia article on interracial marriage in the United States has it).

I would guess that it involves a change in the way of seeing racial discrimination--in the 1950s and 1960s, the prevailing view was that it was mostly a regional issue--the problem was to get the South to catch up with the rest of America.  Since that time, there has been a reaction against this view, which has sometimes overshot the mark.  You could say that we've gone from a realization that racism is present even in Boston to an assumption that Boston was and is no different from anywhere else.  

Of course, at the time her parents were married there was a lot of opposition to interracial marriage, even where it was legal.  In 1968, a Gallup poll asked "do you approve or disapprove of interracial marriage?"--20% approved and 73% disapproved.  A NORC survey asked whites "Do you think there should be laws against marriages between negroes and whites?"  53% said yes and 43% said no.  There were some regional differences, but they weren't as large as I expected--there was 53% agreement in New England and 37% in the Middle Atlantic states. So on this issue, law generally ran ahead of public opinion.   Educational differences were much bigger--about 75% of people with a grade school education and only 12% of college graduates said yes.  

[Data from the Roper Center for Public Opinion Research]

Friday, August 30, 2024

Now and then

Opinion surveys began in the 1930s, when the state of the economy was obviously a major issue.  However, questions on "the economy" didn't appear until much later--the earliest ones I have found are from 1976.  Before then, questions focused on specific aspects of the economy--there were a few on "business conditions," but more on changes in your own situation.   The first of those was in June 1941:  "Financially, are you better off, or worse off than last year?"  31% said better off, 18% worse off, and 51% about the same.  The figure shows the net sentiment (better-worse) every time this question was asked (with some variation in form) from 1941 until the mid-1970s.


Most of the questions asked about the previous year, but some asked about the "last few" or "last two or three" years.  It looks like assessments of the last few years were more positive than assessments of the last year.  After the mid-1970s, the questions get more numerous.   Here are results of the "last year" question from 1976-95.  



Here are results of the "last few years" question, which has been included in the GSS since 1972:


There is clearly a difference:  the balance on the "last few years" question is almost always positive--the only exceptions are in 2010 and 2012--while the balance on the "last year" question is often negative.  I'm not sure why this would be the case, but it means that you need to have different standards for evaluating the two questions.   Common sense suggests that they will rise and fall together to some extent, but how close is the connection?  I'll look at that in a future post.

[Data from the Roper Center for Public Opinion Research]