Friday, December 13, 2024
Mood indigo
Thursday, December 5, 2024
Now what's the matter with Kansas?
A few weeks ago, I had a post on the geographical pattern of party support in the 2020 and 2024 elections at the state level. By historical standards, it was very similar: that is, the Republicans gained by about the same amount in all states. But it wasn't exactly the same, and there have been some reports of large shifts at the county level, so in this post I'll take a closer look.*

In 2020, Joe Biden got 51.3% of the vote and Donald Trump got 46.8%, for a Democratic lead of 4.5; in 2024, Kamala Harris got 48.2% and Trump got 49.8%, for a "lead" of -1.6. That means that the difference in leads is 4.5+1.6=6.1. (You could also call that a swing of 3.05%, but I'll talk about the difference in leads).

At the county level, the mean Republican gain was 3.5. Since that is smaller than the Republican gain in total votes, that means that Republicans gained more in larger counties. The twenty counties with the largest Republican gains included three with populations over 1,000,000 (Miami-Dade, The Bronx, and Queens). In contrast, only one of the twenty counties with the largest Democratic gains had a population of over 250,000, and three of them were under 1,000 (one of them was Loving County, Texas, where the Democratic vote surged from 4 out of 66 in 2020 to 10 out of 97 in 2024). In a regression of Republican gain on the log of population, the estimated coefficient is .27 with a standard error of .039.
When controls for share of the population that is Latin, Black, and Asian are added, the estimate for log population drops to .02 with a standard error of .04. The estimates for shares of Latin and Black population are positive, with t-ratios of over 10; the estimate for Asian share is also positive, with a t-ratio of about 2.5. Finally, if you add indicator variables for the states, the estimate for log of population is -.2 with a standard error of .04; the estimates for share of Latin, Black, and Asian all remain positive, although the t-ratio for the share of Asian drops to 1.8.
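As a minimal sketch of the first of these regressions, here is a bivariate OLS on synthetic county data (the real county file is in the GitHub repository cited in the footnote; the slope here is built in, so the numbers only illustrate the mechanics, not the actual estimates):

```python
import math
import random

def simple_ols(x, y):
    """Slope, intercept, and classical standard error of the slope for y ~ x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)
    return slope, intercept, math.sqrt(s2 / sxx)

# Synthetic counties: Republican gain rises with log population, plus noise
random.seed(0)
log_pop = [random.uniform(6, 14) for _ in range(3000)]
gain = [0.27 * lp + random.gauss(0, 3) for lp in log_pop]
slope, intercept, se = simple_ols(log_pop, gain)
```

Adding the demographic controls and state indicators turns this into a multiple regression, which needs matrix algebra (or a package like statsmodels), but the logic is the same.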
The figure shows the average county-level Republican gain by state without controls on the horizontal axis, and with the controls on the vertical axis (the zero point is Alabama, just because it's alphabetically the first state):
There does seem to be a pattern in the relative shifts: Massachusetts and New Jersey had two of the largest pro-Republican shifts, and New York and Rhode Island were also pretty large (the data aren't quite complete, and Connecticut is missing). On the other side, Utah, Colorado, Kansas, and Oklahoma all had relatively small pro-Republican shifts. I'm not sure whether there's anything those states have in common apart from being in the same general part of the country, but it's worth thinking about. I wonder if some of it is a reversal of the 2012-16 shifts: that is, a return to the pre-Trump pattern?

*Data are from https://github.com/tonmcg/US_County_Level_Election_Results_08-24
Monday, December 2, 2024
The three I's
A few of the questions asked about "illegal immigration"--they are the red dots. Republicans generally do better on the illegal immigration questions, but there aren't enough to say much about the trend. For the general immigration question, Republicans had a big advantage the first two times it was asked (1996 and early 2002), but the Democrats generally had an advantage from 2005 on. Their average was +3 under Bush, +1 under Obama, and +8 under Trump. The general immigration question was asked twelve times during the Trump administration, and the Democrats led every time. That changed under Biden: Republicans led all four times the question was asked, and in 2022 and 2023 the Republican lead was about as big as it had been in 1996 and 2002. Back in 2017, I said that a majority was in favor of letting unauthorized immigrants who were currently here stay and possibly obtain citizenship, but also in favor of stronger efforts to prevent further illegal immigration. That may help to explain the change under Biden: a "path to citizenship" had been a major issue under Obama and Bush, but disappeared under Biden since it was clear that Republicans wouldn't cooperate. That meant that attention was focused on the border, where the Republicans had the advantage. But that's not the whole story: in 2022 and 2023, the Democratic disadvantage on "immigration" was bigger than their disadvantage on "illegal immigration" had been ten years earlier.
Sunday, November 24, 2024
Make Trump less terrible again?
There have been a number of questions of the form "what kind of president X will be [has been]: great, good, average, poor, or terrible." I calculated a net score for Donald Trump, with great +2, good +1, average 0, poor -1, and terrible -2.
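The net score is just a weighted sum of the answer percentages; a sketch (the percentages in the example are made up for illustration, and I'm assuming the score is expressed in percentage points):

```python
# Net score: great=+2, good=+1, average=0, poor=-1, terrible=-2,
# each weight applied to the percentage giving that answer.
def net_score(pcts):
    weights = {"great": 2, "good": 1, "average": 0, "poor": -1, "terrible": -2}
    return sum(weights[k] * v for k, v in pcts.items())

# Hypothetical distribution: 10% great, 20% good, 30% average, 20% poor, 20% terrible
example = net_score({"great": 10, "good": 20, "average": 30, "poor": 20, "terrible": 20})
```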
The first time it was asked was in April 2011, when there was some talk of him running for president; the second was in January 2016, when he was a contender but not yet the clear favorite. It was asked again in March, August, and October 2016, when he was the favorite and then the Republican nominee. Then there was one in December 2016, after he was elected but before he took office, another in June 2020, and two in early 2021 (Jan/Feb and March). Finally, there were two during the 2024 campaign. Basically, there seems to have been a drop as people got to know him during the 2016 campaign, but a pretty steady rise after that point. January 6 doesn't seem to have hurt him--his score in late January/early February 2021 was a little higher than it had been in June 2020. But then there is the exceptionally favorable assessment in December 2016--still more negative than positive, but only slightly. What explains it?
Here is the percent saying that he will be (or was) a great president:
That's a pretty steady increase as he solidified his position among Republicans (although it's worth noting that his highest figure is still below the 23% who rated Barack Obama as great in December 2016).
Here, the December 2016 survey stands out, with an unusually low number saying that he would be terrible. Apart from that, it didn't vary much between 2016 and 2024.
I think that the drop in "terrible" ratings in December 2016 reflects "diffuse support" for the political process: people were willing to put their doubts aside and give the newly elected president a chance. Unfortunately no parallel question was asked for Biden immediately after his election in 2020. Hopefully it will be asked again for Trump in the next month or so. I would guess that the effect has weakened or disappeared, partly because people know him better, and partly because general support for the process has become weaker: that is, the share who expect him to be terrible would be about the same as during the campaign.
[Data from the Roper Center for Public Opinion Research]
Thursday, November 21, 2024
Public opinion on immigration: the Trump and Biden years
Andrew Gelman recently reposted something I wrote on immigration in 2016. That reminded me that I should update another old post on immigration, which summarized answers to a question on whether immigration should be "kept at its present level, increased, or decreased." Opinion had shifted towards "decreased" until the mid-1990s, but steadily moved towards "increased" after that. As of 2016, "decreased" was still more common than "increased," but it was getting close. Here is the updated figure:
In 2020, the balance was on the side of increased for the first time (34% increased, 36% present level, 28% decreased), but that was followed by four years of shifts against immigration, so opinions are now about where they were in 2002-4.
On some issues, opinions shift with party control of the presidency. Sometimes there is a general "thermostatic" movement against the administration's policy, or what people perceive as its policy: when a Democrat is in office, people shift away from support for spending on social programs; when a Republican is in office, they shift towards more support. There's no sign of that here. Sometimes there are shifts that differ by party: people see conditions as worse when the other party is in power. That is, a Republican might think that levels of immigration were OK under Trump but out of control under Biden, and shift from "present level" to "decreased." I got the breakdowns by party for some of the recent surveys:
People of all parties have generally moved in the same direction at the same time. There was a substantial increase in the gap between parties in 2014-19, but only a slight increase since then. To put it another way, during Biden's term both Democrats and Republicans moved towards saying that immigration should be decreased, and the movement was only slightly larger among Republicans. That leads to a question of what they were reacting to. Most people don't have much personal experience that would help them in judging levels of immigration; but to the extent that views reflect news coverage, you'd expect growing divergence based on differences in coverage between mainstream and conservative media.
[Data from the Roper Center for Public Opinion Research]
Friday, November 15, 2024
All together now?
Saturday, November 9, 2024
Group differences in voting, 2016-2024
After the 2020 election, I had a post showing how different groups voted in 2016 and 2020, according to Edison exit polls. This post updates that with information from the 2024 polls as reported by CNN:
% For Trump

                 2016   2020   2024
Men               52%    49%    55%
Women             41%    43%    45%
White             57%    57%    57%
Black              8%    12%    13%
Latino/a          28%    32%    46%
Asian-Am.         27%    31%    39%
White Men         62%    58%    60%
White Women       52%    55%    53%
Black Men         13%    18%    21%
Black Women        4%     8%     7%
Latino            32%    36%    55%
Latina            25%    28%    38%
Age 18-29         36%    35%    43%
30-44             52%    55%    48%
45-65             52%    49%    54%
65+               52%    51%    49%
Urban             34%    37%    38%
Suburban          49%    48%    51%
Rural             61%    54%    64%
White Coll.       48%    49%    45%
White non-C       66%    64%    66%
Non-W coll        22%    27%    32%
Non-W non-coll    20%    26%    34%
Under $50,000     41%    42%    50%
$50,000-99,999    49%    43%    51%
$100,000+         47%    54%    46%
LGBT              14%    28%    13%
Not LGBT          47%    48%    53%
Veterans          60%    52%    65%
Non-Vets          44%    46%    48%
W. Evangelical    80%    76%    82%
All others        34%    37%    40%
Although the exit polls have large samples, the way that they are constructed means that the group estimates can still have fairly large errors, so I focus on the ones that showed a trend over the three elections. The ones for which I see a trend are in boldface. Some people talked about Trump's gains among "minority" voters in 2020. I was skeptical then, but now I have to agree that there is something going on. With black voters, the share is still small enough that there's room for doubt, but Trump definitely made gains among Latin and Asian voters. They may be following the general path of assimilation previously followed by white ethnic groups like Irish Catholics.
The estimated gender gap among whites was 10% in 2016, 3% in 2020, and 7% in 2024: that is, despite the Dobbs decision, it didn't change much. But it did increase among Latins and maybe blacks. Trump has made solid gains among Latinos and black men, smaller gains among Latinas, and smaller and possibly no gains among black women.
Trump made gains with higher income people in 2020, and in 2024 lost ground with them while gaining with low-income people. That is interesting if it holds up, but overall the differences are small. Educational differences are large in all elections, and didn't change much.
Friday, November 8, 2024
Inflation and incumbency
Many people say that inflation was a major cause of the Democratic loss on Tuesday. But inflation hasn't stopped governing parties from being re-elected in the past. The figure shows margin in the popular vote and inflation in the previous term.
Under Biden, inflation averaged 5.0% a year, and the Democrats trail in the popular vote by 3.0 (47.7% to 50.7%), as I write this (it will probably get a little closer as more votes are counted in California). In Richard Nixon's first term, inflation averaged 5.0%, and he won in a landslide. The correlation between average inflation and margin is just -.22.
What if people consider the trend--is inflation higher or lower than it was in the past? I tried average inflation in the administration minus average inflation in the previous administration, and found a stronger relationship: a regression coefficient of -2.25 with a standard error of 1.04. After a little experimentation I found an even stronger relationship with inflation in the past year minus average in the previous administration*:
The correlation is -.65 and the regression coefficient is -2.8 with a standard error of 1.0. There are two big outliers: 1964 and 1972. Apart from that, all of the elections are close to the predicted values. Of course, inflation isn't the only important economic condition. I added per-capita GDP growth in the first three quarters of the election year (from Ray Fair) and a dummy variable for an incumbent president running. The estimates:
Constant -1.65 (2.92)
relative inf -1.93 (0.79)
GDP 1.35 (0.65)
Incumbent 5.07 (3.35)
The current value of relative inflation is .92. The model suggests that if it were zero (ie, the same as inflation under Trump, which averaged about 2%), Trump would still lead, but the gap would be only about half as large.
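Plugging the estimates into the fitted equation shows where the "about half" comes from; this is just arithmetic on the coefficients above (the GDP and incumbency terms are the same in both scenarios, so they drop out of the comparison):

```python
# Predicted popular-vote margin from the estimated coefficients above
def predicted_margin(rel_inf, gdp, incumbent):
    return -1.65 - 1.93 * rel_inf + 1.35 * gdp + 5.07 * incumbent

# Moving relative inflation from .92 down to 0 shifts the predicted margin by:
shift = predicted_margin(0.0, 0, 0) - predicted_margin(0.92, 0, 0)  # about +1.8 points
```

Against a current Democratic deficit of about 3 points, a shift of roughly +1.8 would cut the gap to a bit over a point.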
The estimated effect of incumbency is large and the predicted value would favor the Democrats if Joe Biden had been the candidate. The standard error is large, meaning that there's a lot of uncertainty about the size of the incumbency advantage (or even if there is any advantage). But even if it were smaller, I don't think it should be interpreted as meaning that Biden would have done better than Harris. A large part of the incumbency advantage comes from the ability to go on TV and speak to the nation--sometimes to try to get support for their policies, but sometimes speaking as leader of the nation rather than leader of a party (e. g., after a natural disaster). Some presidents have been better at that than others, but they all could do it effectively at least some of the time. Trump was an exception--it's just not the way he operates. Biden was also an exception: he couldn't do it very well, and for much of his term he didn't even try (or if he did, I've forgotten). Part of that was age, but even in his prime it was a weak point, maybe because he was from a small state where personal relations mattered more than media skills. There have also been general changes that probably reduce the advantage--the fragmentation of the media means a presidential address doesn't reach as many people, and increased partisanship means it's harder to win them over.
*This is similar to Robert Gordon's "inflation acceleration" and "excess inflation."
Tuesday, November 5, 2024
Last post before the election
The polls suggest a very close election, but there is often some systematic polling error--in 2016 and 2020 Republican support was underestimated. One potentially relevant factor is that there was unusually high turnout in 2020--the rate rose from 60.1% in 2016 to 66% in 2020. Infrequent voters tend to be less educated, which today means that they tend to vote Republican. That may have contributed to the underestimation of the Republican vote in 2020. It seems likely that turnout will be lower this time, which reduces the chance that the polls will underestimate the Republican vote; and to the extent that pollsters have tried to correct for past errors by giving more weight to less educated respondents, it makes them more likely to overestimate the Republican vote. So if I had to guess, I'd say the error is likely to be in the other direction this time--that Harris will run ahead of the polls.
The figure shows turnout in 2020 and 2016 by state (from this source):
Turnout increased in every state--the reference line shows a uniform increase. There are several states that had a relatively large increase, but only one of them is a swing state--Arizona.
A couple of bonuses: first, an update on questions about confidence in "the wisdom of the American people" in making political decisions or election choices:
Second, an outfit called EON Journals falsely lists me as the editor of one of their journals. I asked them by e-mail to remove my name, and got no response. I followed up by sending a letter to the address they listed. I got it back today, with a notice saying "Return to sender/Attempted--Not known/Unable to forward." So their mailing address is fake too.
[Some data from the Roper Center for Public Opinion Research]
Thursday, October 31, 2024
Significant at the 32% level
Nature Human Behavior recently published an article with the title "Underrepresented minority faculty in the USA face a double standard in promotion and tenure decisions." They also published a comment on it, which said the authors "find double standards negatively applied to scholars of colour, and especially women of colour, even after accounting for scholarly productivity." Science published a piece on the article, titled "Racial bias can taint the academic tenure process—at one particular point." So the study must have found strong evidence, right?
Their key findings involved college-level (e. g., Arts & Sciences, Business, Fine Arts) tenure and promotion committees, where black or Hispanic candidates got more negative votes than White or Asian candidates. I'll look at the probability of getting a unanimous vote of support, where they found the strongest evidence of difference. In a binary logistic regression with several control variables (university, discipline, number of external grants, time in position, whether the decision involves tenure or promotion to full professor), the estimated effect of URM status is -.581 with a standard error of .246. That gives a t-ratio of -2.36 and a p-value of .018. If you prefer odds ratios, it's an estimate of .56 and a 95% confidence interval of .35 to .91. That's reasonably strong evidence by conventional standards.
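The odds-ratio translation is just exponentiation of the logit coefficient and of the endpoints of its 95% confidence interval:

```python
import math

b, se = -0.581, 0.246            # logit coefficient and standard error for URM status
odds_ratio = math.exp(b)                # about .56
ci_low = math.exp(b - 1.96 * se)        # about .35
ci_high = math.exp(b + 1.96 * se)       # about .91
```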
What about scholarly productivity? They calculated the "h-index," which is based on citation counts,* and standardized it to have a mean of zero and standard deviation of one. If you add it as a variable:
       est    se   t-ratio     P
H     .375  .129    2.91    .004
URM  -.301  .299    1.01    .315
Now the estimated effect of being a member of a URM is nowhere near statistical significance. It is still large (the odds ratio is about .75 and the 95% confidence interval goes as low as .41), so the conclusion shouldn't be that there is little or no discrimination--there's just not enough information in the sample to say much either way.**
But the authors didn't report the previous analysis--I calculated it from the replication data. They gave an analysis including the H-index, URM status, and the interaction (product) of those variables:
        est    se   t-ratio     P
H      .330  .128    2.59    .010
URM   -.041  .337    0.12    .903
H*URM 1.414  .492    2.87    .004
That means that H has a different effect for URM and White or Asian (WA) candidates: for WA it's .33, and for URM it's .33+1.41=1.74. The URM coefficient gives the estimated effect of URM status at an H-index of zero. At other values of the H-index, the estimated effect of URM status is -.041+1.41*H. For example, the 10th percentile of the H-index is about -1, so the estimated effect is about -1.45. The 90th percentile of the H-index is about 1, so the estimated effect of URM status is about 1.37. That is, URM faculty with below-average H-indexes have a lower chance of getting unanimous support compared to WA candidates with the same H-index, but URM faculty with above-average H-indexes have a higher chance. This is a "double standard" in the sense of URM and WA faculty being treated differently, but not in the sense of URM faculty consistently being treated worse.
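Using the coefficients from the interaction model in the table above, the estimated URM effect at any level of the standardized H-index is a one-liner:

```python
# Estimated effect of URM status on the log-odds of unanimous support,
# as a function of the standardized H-index (coefficients from the table above)
def urm_effect(h):
    return -0.041 + 1.414 * h

low = urm_effect(-1)    # roughly the 10th percentile of the H-index
high = urm_effect(1)    # roughly the 90th percentile
```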
The authors describe this as "differential treatment in outcomes is present for URM faculty with a below-average H-index but not for those with an average or above-average H-index." They suggest an "intriguing question for future research: do URM faculty with an above average h-index perform better than non-URM faculty with the same h-index?" But the interaction model is symmetrical--what justifies treating the estimate for low h-indexes as a finding and the estimate for higher h-indexes as just an "intriguing possibility"? You could fit a model in which URM faculty are disadvantaged at lower levels of productivity but there is no difference at moderate or high levels of productivity. I've done this, and it fits worse than the standard interaction model, although there's not enough information to make a definitive choice between them.
The result about the interaction between URM status and h-index is interesting, but doesn't support the claim of a general bias against URM faculty. So why is this study being hyped as strong evidence of bias? One obvious factor is that many people believe or want to believe that there is a lot of bias in universities, so they'll seize on anything that seems to support this claim. A second is that people get confused by interaction effects. But I think there's a third contributing factor: Nature and Science come from a tradition of scientific reports: just give your "findings," not a lot of discussion of how you got there and things you tried along the way. Journals in sociology, political science, and economics come from a more literary tradition--leisurely essays in which people developed their ideas or reviewed previous work. This tradition continued even after quantitative research appeared: articles are longer and have more descriptive statistics and more discussion of alternative models. If this paper had been submitted to a sociology journal, I'm pretty sure some reviewer would have insisted that if you're going to have an interaction model, you also have to show the model with only the main effects of H-index and URM. That would have made it clear that the data doesn't provide strong evidence of discrimination. It also might have led to noticing that there's a lot of missing data for the H-index (about 30% of the cases), which is another source of uncertainty.
*The largest number H for which you have H articles with H or more citations in Google Scholar.
**This is not the authors' fault--it's hard to collect this kind of data, and as far as I know they are the first to do so.
Wednesday, October 23, 2024
Why so close?
Most recent polls show the presidential race as pretty much neck-and-neck. Even if the polls have some systematic error, it seems safe to say that the race will be close by historical standards. How is Donald Trump staying close, despite all his baggage? I think that the main reason is that most voters remember things as pretty good during the first three years of his administration. Of course, things changed in the last year, but Trump mostly stepped aside and let state and local governments deal with Covid and the protests after the murder of George Floyd--he offered opinions about what should be done, but didn't make much effort to implement them. As a result, people give him a pass--if they didn't like what happened, they blame their governor or mayor. What about his attempt to overturn the results of the 2020 election, culminating in January 6? Here, the key thing is that Republican elites have stuck with him, saying that there were problems with the election, that Democrats did similar things in the past, or are treating him unfairly now, or are planning to do worse things in the future. These kinds of arguments don't have to persuade to be effective, they just have to muddy the water, so that voters think of the whole issue as "just politics"--a confusing mass of charges and countercharges.
However, I think that Trump also has a distinct positive--the perception that he says whatever is on his mind. Since 1988, questions of the form "Do you think that ______ says what he believes most of the time, or does he say what he thinks people want to hear?"* have been asked about presidents and presidential candidates. The table shows the average percent "says what they believe" minus "thinks people want to hear" and the number of surveys that asked about different figures:
Avg N
Trump 19.50 2
McCain 6.29 7
Bradley 5.50 2
GWB 2.00 21
Perot 2.00 2
Obama 0.67 12
Giuliani -7.00 1
Dukakis -11.00 1
Dole -18.00 1
Hillary -20.40 5
Romney -21.80 5
Gore -23.50 8
Kerry -26.08 12
Bill -26.63 8
GHWB -30.00 3
*Of course, with "she" rather than "he" for women. The earliest (1988) versions had slightly different wording.
Wednesday, October 16, 2024
A new climate?
In the years before Trump, most Republican politicians tried to avoid taking a clear position on climate change: they generally said it was complicated and uncertain, and that more research was needed. Trump, however, has a clear and consistent position: it's a hoax. According to the Trump Twitter Archive, his first mention of climate change on Twitter was in 2012: "Why is @BarackObama wasting over $70 Billion on 'climate change activities?' Will he ever learn?" One of his most recent (May 2024): "So bad that FoxNews puts RFK Jr., considered the dumbest member of the Kennedy Clan, on their fairly conservative platform so much. ... He’s a Radical Left Lunatic whose crazy Climate Change views make the Democrat’s Green New Scam look Conservative." Has this shift affected public opinion? In 2019, I had a post on two questions that had been asked since the late 1990s, both of which showed a trend towards thinking that climate change was a serious problem. One of those, "Do you think that global warming will pose a threat to you or your way of life in your lifetime?", has been asked a few times since then. The updated results:
The change may have slowed down, but it doesn't seem to have stopped, and definitely hasn't reversed.
I found two additional questions: one that has been asked in Pew surveys, "I'd like your opinion about some possible international concerns for the United States. Do you think that each of the following is a major threat, a minor threat, or not a threat to the United States?...Global climate change," and a similar one by Gallup "Next, I am going to read you a list of possible threats to the vital interests of the United States in the next 10 years. For each one, please tell me if you see this as a critical threat, an important but not critical threat, or not an important threat at all.)...Global warming or climate change." The averages (higher numbers mean greater threat).
Belief that climate change was a threat continued to rise in the Trump administration, but has fallen under Biden. Why the difference between these questions and "threat to you or your way of life"? Possibly it's because these refer to international threats and appear in the context of questions about other potential threats. Greater international turmoil in the Biden administration may have displaced concern about climate change as a threat to American interests (people aren't limited in the number of things that they can call critical threats, but I think there's some tendency to feel like you should make distinctions).
There are other questions on climate change, which I may look at in another post. But at least through 2020, concern with climate change continued to increase. This may be because despite Trump's strong feelings on the issue, he didn't highlight it to the extent that he did with immigration, tariffs, and charges of election fraud.
[Data from the Roper Center for Public Opinion Research]
Thursday, October 10, 2024
Bonus
About a month ago, I discovered that I am listed as the editor of the "EON International Journal of Arts, Humanities & Social Sciences." In fact, I had never even heard of this journal, so I sent an e-mail asking them to remove my name. I never heard back and am still listed as editor, so I decided to take further action. They give their mailing address as:
2055, Limestone Rd Ste 200C, Zip Code 19808 Wilmington,
Delaware, USA
I suspect that they are actually based outside the United States, since they clearly aren't familiar with American conventions for writing addresses, but Google Maps shows a building at 2055 Limestone Rd, so I wrote to them. The concluding sentences in my letter are:
"Falsely claiming that I am the editor of a predatory journal is defamatory. If you do not remove my name from the listing by October 15, I will consult with my attorney about the possibility of legal action."
Let's see if that has any effect.
Focused on the future, part 3
My last two posts were about answers to a question on confidence that votes will be "accurately cast and counted" in elections, which has been asked a number of times since 2004. As far as I know, there were no comparable questions before then. However, a question on "dishonesty in the voting or counting of votes in your district" was asked in 1959 and 1964, and since 2004 there have been several "accurately cast and counted" questions that specified "at the facility where you vote." I showed the overall results in a previous post, and will look at party differences in this one. There's a general tendency for people to be more positive about things that are closer to them, but my question is whether partisan differences in views on local elections might track partisan differences in views on national elections. Here is average confidence in "the facility where you vote" by party:
It has declined for all groups, although the decline seems smaller for Democrats. Independents are the least confident, which is probably because they tend to be more suspicious of politics in general. Comparing confidence in national and local elections for each partisan group (red is local, blue is national):
The changes aren't parallel: for Democrats and Independents, the gap between confidence in the national and local vote has become smaller; for Republicans, it's become bigger. The results for Republicans aren't surprising, since their claims of fraud have focused on heavily Democratic places, like Philadelphia, Detroit, and Atlanta. The general tendency seems to be for confidence in local voting to vary less than confidence in national voting.
[Data from the Roper Center for Public Opinion Research]
Monday, October 7, 2024
Focused on the future, part 2
In 2004, Gallup asked "How confident are you that, across the country, the votes for president will be accurately cast and counted in this year’s election – very confident, somewhat confident, not too confident or not at all confident?" They have repeated the question a number of times, most recently just two weeks ago. Their report says that the overall level of confidence has stayed about the same since 2008, but with a growing partisan division--Democrats becoming more confident and Republicans less confident. The report merged "very confident" and "somewhat confident," which is a potentially important distinction, so I calculated the average, which is shown below:
The red dots indicate midterm elections (of course, those questions omitted the words "for president"). There was a substantial decline between 2004 and 2008--there were two surveys in 2004, with an average of about 3.0, two in 2006, with an average of about 2.85, one in 2007, also at 2.85, and two in October 2008, which averaged about 2.65 (about the same as the average in September 2024). Why would this have happened? I would have figured that confidence among Democrats would be low in 2004 because of memories of 2000, and would rise as more time went by (especially after Democratic success in the 2006 midterms). On the Republican side, it didn't seem like there was anything that should cause a dramatic change. That would suggest an increase in overall confidence, not a decline.
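The averages here can be sketched as a weighted mean of the four answer categories. The 1-4 coding is my assumption (very confident = 4, somewhat = 3, not too = 2, not at all = 1), and the percentages in the example are hypothetical:

```python
# Mean confidence on an assumed 1-4 scale (very=4, somewhat=3,
# not too=2, not at all=1); the percentages below are hypothetical.
def mean_confidence(pcts):
    scores = {"very": 4, "somewhat": 3, "not too": 2, "not at all": 1}
    return sum(scores[k] * v for k, v in pcts.items()) / sum(pcts.values())

example = mean_confidence({"very": 35, "somewhat": 45, "not too": 12, "not at all": 8})
```

On this coding, an average of 3.0 means the typical respondent is at "somewhat confident," and the drop to about 2.65 puts the average noticeably below that.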
Breaking it down by party:
Relatively little change from 2004 to 2007, and then a large decline in Republican confidence between December 2007 and October 2008. I could only get complete data for two surveys after 2008 but they showed further declines among Republicans. The next figure shows the gap between Democrats and Republicans:
What might have caused the change in 2007-8? Thinking back, I remembered that there were news stories about fraud in ACORN voter registration drives. Also, in December 2007 Hillary Clinton was the frontrunner for the Democratic nomination, so it's possible that the decline among Republicans was a reaction to Obama--maybe his race, or his roots in Chicago politics. The decline in confidence among Republicans meant that confidence was about the same in both parties. Unfortunately, there don't seem to be any comparable questions before 2000, so we can't say if the lack of partisan difference was a return to normal.
[Data from the Roper Center for Public Opinion Research]
Wednesday, October 2, 2024
Focused on the future
Monday, September 23, 2024
Back to normal, part 2
My last post suggested that the central result of a paper published in the American Economic Review was sensitive to the specification of the model: specifically, that the evidence was weaker (and would just scrape in at "significant at the 10% level") with a negative binomial model rather than the models they fit: a least-squares regression on the log of a ratio and a Poisson regression. The negative binomial fits substantially better than the Poisson; although they can't be compared directly, there are several reasons to prefer the negative binomial over the least-squares regression (I won't go into them here). The AER has a rigorous review process, and the acknowledgments thank sixteen people by name, plus "other participants at numerous seminars for many constructive comments"--why didn't someone suggest (or insist) that they try a negative binomial regression? My ideas:
1. A tendency to put too much faith in a combination of robust standard errors and "large" sample sizes at the expense of trying to find the right model, or something close to the right model.
2. Taking the number of cases at face value. The analysis includes about 35,000 municipalities, but many of them are very small: 80% have populations under 1,000. On average, there is about one collaborator per 1,000 people, so small villages (that is, most of them) generally don't provide much information. Moreover, the analysis included a control for a larger geographical unit, the department. There were 95 of those, but in about half of them, every (or almost every) municipality had the same status in terms of service under Pétain. Those departments provide no information on the central question. So you could regard the data as (roughly) a 50-by-2 table: about 50 departments where troops from some municipalities served under Pétain and others didn't. You would lose something by analyzing it that way--the ability to adjust for other qualities of the municipalities. But you would also gain something: it would be easier to notice outliers or influential cases, and perhaps some unanticipated geographical patterns.
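A rough sketch of that reduction, using toy records in place of the replication file (the department labels, indicator, and counts below are all invented):

```python
from collections import defaultdict

# Toy records standing in for the replication data: (department,
# served-under-Pétain indicator, collaborator count). All values invented.
rows = [
    ("A", 1, 5), ("A", 0, 2),   # treatment varies within department A
    ("B", 1, 3), ("B", 1, 4),   # every municipality in B served
    ("C", 0, 1), ("C", 0, 2),   # none in C served
]

# Only departments where the indicator varies contribute to a
# within-department comparison; the rest are absorbed by the department control.
treatments = defaultdict(set)
for dept, served, _ in rows:
    treatments[dept].add(served)
informative = {d for d, t in treatments.items() if len(t) > 1}

# Collapse the informative departments into a department-by-treatment count table.
table = defaultdict(lambda: [0, 0])
for dept, served, n in rows:
    if dept in informative:
        table[dept][served] += n

print(sorted(informative))   # -> ['A']
print(dict(table))           # -> {'A': [2, 5]}
```

With the real data, the loop over `rows` would run over all 35,000 municipalities, leaving roughly 50 informative departments.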
Tuesday, September 10, 2024
Back to normal
This is a return to my usual kind of subject, although I may give an update on my adventures with predatory publishing in a future post.
A few weeks ago, Andrew Gelman posted about a paper by Julia Cagé, Anna Dagorret, Pauline Grosjean, and Saumitra Jha that was published in the American Economic Review last year. The paper argued that the experience of fighting in the battle of Verdun under Marshal Pétain created a sense of attachment, so that when Pétain turned to the extreme right and later headed the Vichy France regime, the municipalities that had supplied his troops (people from the same place generally served in the same unit) produced more collaborators. Some critics had raised objections involving data quality, especially the list of collaborators, but I'll leave that aside and take the data as it is.
Elite leadership is important and frequently overlooked as an influence on public opinion, the authors seemed to have put a lot of effort into compiling and checking the data, the general method of analysis was appropriate, and there were a variety of robustness checks, so I was inclined to accept their conclusions. But there were a few things that I wondered about. They had two analyses: a least squares regression with the log of collaborators per capita as the dependent variable, and a Poisson regression with the number of collaborators as the dependent variable (and the log of the population as an independent variable). In the first, the estimate for service with Pétain was .067 with a standard error of .018; in the second, the estimate was .190 with a standard error of .109. They treated the first one as primary and described the second as showing that their "results were robust to Poisson estimation," but they didn't seem all that robust to me. The Poisson estimate was almost three times as big, but the standard error was six times as big, so the 95% confidence interval went from -.024 to .404, or about -2.5% to +50%. Also, the Poisson distribution applies when you count events across a large number of independent cases, each with a small probability of experiencing the event. But people in a town generally know and influence one another, so one collaborator may recruit others, and the counts are likely to be "overdispersed" relative to what the Poisson distribution allows. In that situation, the negative binomial distribution is appropriate, so I wanted to try it--maybe it would produce results more like those of the least squares regression. I downloaded the replication data, reproduced their results, and then fit a negative binomial regression. The estimates for service with Pétain:
              LS     Poisson    Negbin
estimate     .067     .190       .089
(s.e.)      (.015)   (.014)     (.053)
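As a check on the Poisson interval quoted above, the Wald calculation is straightforward:

```python
import math

def wald_ci(est, se, z=1.96):
    """Approximate 95% (Wald) confidence interval for a coefficient."""
    return est - z * se, est + z * se

# The paper's Poisson estimate and its (clustered) standard error.
lo, hi = wald_ci(0.190, 0.109)
print(round(lo, 3), round(hi, 3))          # -> -0.024 0.404

# A Poisson coefficient b implies a multiplicative effect exp(b),
# i.e. a percentage change of 100 * (exp(b) - 1).
print(round(100 * (math.exp(lo) - 1), 1))  # -> -2.3
print(round(100 * (math.exp(hi) - 1), 1))  # -> 49.7
```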
The negative binomial regression fit much better than the Poisson regression. The estimate was similar to that from the least squares regression, but the standard error was much bigger, and the 95% confidence interval is -.015 to .203. Also, I show the ordinary standard errors--the robust, clustered standard errors that Cagé et al. used would be larger. So there is only weak evidence, at best, that service under Pétain increased the number of collaborators.*
In my next post, I'll discuss the more general implications of this analysis.
*They also had results suggesting that service with Pétain affected electoral support for extreme right parties in the 1930s, and the points I've raised here don't apply to that analysis.
Friday, September 6, 2024
It ain't me
There is a journal called the EON International Journal of Arts, Humanities & Social Sciences. I recently discovered that I am listed as the Editor. I am not the editor--I had never even heard of this journal before, and would have declined if they had asked me to be involved, since it looks pretty sketchy. I have written to the publisher telling them to remove my name from their site, but I also wanted to announce it publicly in case anyone has noticed.
Wednesday, September 4, 2024
Those were different times
From the New York Times: "[Danzy] Senna, 53, was born in Boston, the daughter of a white, patrician mother . . . and an African American father. Her parents . . . were in the first cohort of interracial couples who could legally marry in the United States." Hold on a minute--in 1967, the Supreme Court ruled that state laws prohibiting interracial marriage violated the Constitution, but only a minority of states (all Southern or border states) had such laws. Some states had laws against interracial marriage until the 1950s and 1960s, and in those it would be reasonable to speak of the "first cohort" of interracial couples, but Massachusetts had repealed its prohibition on interracial marriage in 1843. It wasn't the first in that respect--five of the thirteen original states (New York, New Jersey, Pennsylvania, Connecticut, and New Hampshire) never had laws against interracial marriage. So although interracial marriages were rare, they've been around since the beginning of the United States. The Times wasn't the only one to get this wrong--Senna's Wikipedia biography says that her parents "married in 1968, the year after interracial marriage became legal," and cites a Canadian Broadcasting Corporation article, which says her parents "wed a year after interracial marriage became legal." Why would multiple sources make this mistake? It's not hard to find the information on differences in state laws (the Wikipedia article on interracial marriage in the United States has it).
I would guess that it involves a change in the way of seeing racial discrimination--in the 1950s and 1960s, the prevailing view was that it was mostly a regional issue--the problem was to get the South to catch up with the rest of America. Since that time, there has been a reaction against this view, which has sometimes overshot the mark. You could say that we've gone from a realization that racism is present even in Boston to an assumption that Boston was and is no different from anywhere else.
Of course, at the time her parents were married there was a lot of opposition to interracial marriage, even where it was legal. In 1968, a Gallup poll asked "do you approve or disapprove of interracial marriage?"--20% approved and 73% disapproved. A NORC survey asked whites "Do you think there should be laws against marriages between negroes and whites?" 53% said yes and 43% said no. There were some regional differences, but they weren't as large as I expected--there was 53% agreement in New England and 37% in the Middle Atlantic states. So on this issue, law generally ran ahead of public opinion. Educational differences were much bigger--about 75% of people with a grade school education and only 12% of college graduates said yes.
[Data from the Roper Center for Public Opinion Research]
Friday, August 30, 2024
Now and then
Opinion surveys began in the 1930s, when the state of the economy was obviously a major issue. However, questions on "the economy" didn't appear until much later--the earliest ones I have found are from 1976. Before then, questions focused on specific aspects of the economy--there were a few on "business conditions," but more on changes in your own situation. The first of those was in June 1941: "Financially, are you better off, or worse off than last year?" 31% said better off, 18% worse off, and 51% about the same. The figure shows the net sentiment (better-worse) every time this question was asked (with some variation in form) from 1941 until the mid-1970s.
Most of the questions asked about the previous year, but some asked about the "last few" or "last two or three" years. It looks like assessments of the last few years were more positive than assessments of the last year. After the mid-1970s, the questions get more numerous. Here are results of the "last year" question from 1976-95.
Here are results for the "last few years" question, which has been included in the GSS since 1972:
There is clearly a difference: the balance on the last few years question is almost always positive--the only exceptions are in 2010 and 2012--while the balance on the last year question is often negative. I'm not sure why this would be the case, but it means that you need different standards for evaluating the two questions. Common sense suggests that they will rise and fall together to some extent, but how close is the connection? I'll look at that in a future post.
[Data from the Roper Center for Public Opinion Research]
Monday, August 19, 2024
Too many people
In 1947, the Gallup poll asked "Do you think this town [city] would be better off or worse off if more people lived here?" 31% said better off, 46% worse off, 9% the same, 5% that it depended on the type of people, and 10% weren't sure. There was a parallel question about your state: for this, it was 40% better off, 27% worse, 14% the same, and 20% no opinion. So people were more positive about having more people in their state than in their town. These questions were asked of a randomly selected half of the sample; the other half was asked "There are about 140 million people today in the United States. Do you think this country would be better off or worse off if there were more people living here?" Only 16% said better off, with 56% saying worse off, 14% the same, 3% that it would depend, and 11% weren't sure. That is, people were more negative about having more people in America than in their city or state. Why? One possibility is that 140 million sounds like a large number, so that mentioning it made people less inclined to say that we would benefit from having more. But another possibility is that increases in the population of your town or state could involve people moving from other towns or states--an increase in the American population would have to involve immigration.*
As for group differences: people who lived in urban areas, more educated people, and people in New England and the Middle Atlantic states were more likely to say that a larger population would be good. These qualities are all associated with "cosmopolitanism," supporting the idea that answers are related to attitudes towards immigration (the group differences for opinions about your city and state were generally smaller and had different patterns).** However, negative opinions were more numerous than positive ones in every group. There was little or no difference by party identification.
This is one more piece of evidence for something I've mentioned before: Americans were not keen on allowing more immigration during the 1950s and 1960s--in 1964, when the restrictive 1924 law was still in force, more people favored reducing immigration than increasing it.
*Over the long run, it could be natural increase, but people seem to think about the near future if the time frame isn't specified.
**Most Gallup surveys asked about religion, but this one did not.
[Data from the Roper Center for Public Opinion Research]
Friday, August 9, 2024
Governors and presidents
When people were talking about who Kamala Harris might choose as her running mate, Josh Shapiro's high approval rating was often mentioned. I hadn't heard anything about how Tim Walz stood in that respect, so I looked and found that Morning Consult tracks the approval rating for all governors. As of July 24, Shapiro's net rating (favorable minus unfavorable) was +25, which is good but not exceptional (tied for 16th). Walz was +13, which is below average but not exceptional either (tied for 36th). There was no obvious pattern in the ratings, although there may be some tendency for governors in smaller states to have higher approval ratings:
The thing that I found most striking was simply that they were almost all positive--only two were "underwater" and those were at -1. In contrast, Joe Biden has been underwater for most of his time in office, Donald Trump was for almost all of his, Barack Obama for about a third of his, and George W. Bush for about his last three years in office. That led me to wonder if there was a general tendency for governors to get higher approval ratings than Presidents--often people feel more positive about things that are closer to them.
There have been several questions about approval of the governor of your state, ranging from 1954 to 2023.* The figure shows net approval ratings for governors, and presidential approval at the same times.
Gubernatorial approval ratings have not been consistently higher than presidential ratings--they were lower in six of the first seven surveys, and have been higher in the last three. With only ten cases, it's hard to be confident about anything, but they suggest that the 21st-century discontent is specifically about national politics, not about politics in general.
*The Morning Consult data go back to 2017, but require a subscription which is beyond what my research budget can afford.