Saturday, November 22, 2014

Self-interest?

A recent piece in the New York Times by Jason Weeden and Robert Kurzban tells us that self-interest influences political views.  Along with some uncontroversial examples, there was one that caught my attention:
"Those who do best under meritocracy — people who have a lot of education and excel on tests — are far more likely to want to reduce group-based preferences, like affirmative action."  This didn't sound right to me:  if it were true, universities, especially elite universities, would be centers of opposition to affirmative action.  

Since "affirmative action" can mean a different things to different people, I looked for questions that asked directly about test scores.  There were't many, but I found one in a CBS News/60 Minutes/Vanity Fair survey from 2013.   It asked "Which phrase comes closest to how you would describe the SAT tests that are used for college admissions in the United States:  a successful equalizer, a failed ideal, a waste of time, or a necessary evil"?  The first answer can be regarded as positive, the second and third as negative, and the last one as neutral.  Using this classification, here is the breakdown by education:

                     Pos   Neutral   Neg   Pos-Neg
Less than HS          33      43      23      +10
HS                    25      40      35      -10
Some college          22      44      34      -12
College graduate      21      48      31      -10
Grad school           17      44      39      -22

So people without a high school degree have the most favorable opinions, and people with graduate education have the least favorable.  You get a similar pattern with income:  people with incomes under $30,000 are the most favorable and those with incomes of over $250,000 (admittedly a small group) are the most unfavorable.
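
For anyone who wants to reproduce this kind of breakdown from the raw survey file, here is a minimal sketch in Python.  The column names are hypothetical--the actual Roper Center dataset uses its own variable codes--but the classification step is the one described above.

```python
import pandas as pd

# Hypothetical column names; the actual Roper Center file uses its own codes.
# 'sat_view' holds the answer to the SAT question, 'education' the education level.
CLASSIFICATION = {
    "a successful equalizer": "Pos",
    "a failed ideal": "Neg",
    "a waste of time": "Neg",
    "a necessary evil": "Neutral",
}

def sat_breakdown(df: pd.DataFrame) -> pd.DataFrame:
    """Percentage Pos/Neutral/Neg within each education level, plus the Pos-Neg gap."""
    views = df["sat_view"].map(CLASSIFICATION)
    pct = pd.crosstab(df["education"], views, normalize="index") * 100
    pct["Pos-Neg"] = pct["Pos"] - pct["Neg"]
    return pct.round(0)
```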

Of course, the general point that a lot of opinions have a straightforward relation to self-interest is valid, but as this example shows, there are exceptions.

PS:  In my last post, I promised an examination of own vote vs. predicted winner.  Own vote predicted 31 states correctly (that is, there were 31 states in which a majority of the sample said they would vote for X and X won), while the candidate people expected to win actually won in 29 states.  So own vote had a slight advantage, but not a decisive one.
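
Here is a rough sketch of how that state-by-state comparison can be done.  The data layout and column names ('state', 'own_vote', 'predicted_winner') are my own assumptions, not the survey's actual variable names.

```python
import pandas as pd

def states_called_correctly(df: pd.DataFrame, actual_winner: dict) -> dict:
    """Count states where the most common response (own vote, expected winner)
    matched the candidate who actually won.  Assumes one row per respondent."""
    correct = {"own_vote": 0, "predicted_winner": 0}
    for state, grp in df.groupby("state"):
        for col in correct:
            if grp[col].mode().iloc[0] == actual_winner[state]:
                correct[col] += 1
    return correct

# Hypothetical usage: results = states_called_correctly(survey_df, winners_by_state)
```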

[Data from the Roper Center for Public Opinion Research]

Monday, November 17, 2014

Post-election coverage

After the election, Justin Wolfers wrote a column saying that questions about who people expected to win were better predictors of election outcomes than questions about who people intended to vote for.  He offered this explanation:  "Asking voters about their expectations allows them to reflect on everything they know about the race — which way they currently intend to vote, how likely they are to vote, whether they’re persuadable, the voting intentions of their friends and neighbors, and their observations about bumper stickers, yard signs, the resonance of a candidate’s message and the momentum they sense in their communities."

You can see how this explanation would appeal to an economist, because it's a parallel to the way that markets work:  combining scattered information in an optimal (or more realistically, pretty good) fashion.  But there's another possibility:  that voters are reflecting what the "experts" are saying, rather than information they have from their own lives.  Even if they're not paying close attention to the campaign, voters are likely to get a sense of what "everybody thinks" will happen.  In 2014, this would mean good predictions, because all the experts were saying that the Republicans would win big, while the polls left more doubt.  But there have been other campaigns in which the experts were wrong, notably 1948.  Wolfers's explanation says that voters would have called that one correctly, or at least come close.

Did they?  In late September 1948, a Gallup poll asked "regardless of how you, yourself, plan to vote, which candidate do you think will carry this state:  Truman, Dewey, or Wallace?"  25% said Truman, 56% said Dewey, 15% don't know, and the rest Wallace or someone else (presumably Thurmond, who did carry several states in the South).  Of course, the polls were famously wrong in that year, but the survey also asked who they would vote for:  it was 46% for Dewey, 40% for Truman, 4% Wallace, 2% Thurmond.  Given that it was not a large margin for Dewey, it seems like voters' own stated preferences might have been better predictors of how their state would go.  Gallup did record state, so that could be checked, and I will do that in a later post.

PS:  After the election, only 19% said they had expected Truman to win.  

Wednesday, November 5, 2014

That man in Hartford

I get most of my election coverage from the NY Times.  It occurred to me the other day that I hadn't read much about the race for Connecticut governor, even though the Times has a lot of readers here and it was expected to be close.  I'm not sure, but I suspect that media coverage has shifted towards national politics over the years, and I wondered if that was reflected in a decline of knowledge of state politics in the general public.  There have been occasional survey questions over the years on whether people could name the governor of their state.  The results:

[Chart:  percentage able to name the governor of their state, by survey year]
Now that's what I call a trend.  I think that knowledge of basic facts about national politics has been stable or declined slightly, but nothing like this.

[Source:  Roper Center for Public Opinion Research]

Sunday, November 2, 2014

Do you believe?

Since 1985, a number of surveys (first by the Times-Mirror Corporation and later by Pew) have asked "How would you rate the believability of _____ on this scale of 1 to 4?"  The scale goes from 4 ("believe all or most of what they say") to 1 ("believe nothing").  I picked four publications:  the Wall Street Journal, USA Today, Time Magazine, and the New York Times, and summarized the results by the logarithm of positive (3 or 4) divided by negative (1 or 2).*
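
In case the measure isn't clear, here is a minimal sketch of the calculation (the counts in the example are made up, not taken from the surveys):

```python
import numpy as np

def believability_score(counts: dict) -> float:
    """Log of the share answering 3 or 4 over the share answering 1 or 2.
    Zero means an even split; positive means more believers than doubters."""
    positive = counts.get(3, 0) + counts.get(4, 0)
    negative = counts.get(1, 0) + counts.get(2, 0)
    return np.log(positive / negative)

# Made-up example: 300 people answer 4, 900 answer 3, 500 answer 2, 100 answer 1.
print(believability_score({4: 300, 3: 900, 2: 500, 1: 100}))  # log(1200/600) = 0.69
```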

[Chart:  believability (log of positive/negative) for the four publications, by year]
The obvious point is that ratings of the believability of all four have declined.  Two other points that I'm not sure of, but that are interesting possibilities:

1.  In 1985, the Wall Street Journal and Time were rated much higher than USA Today.  In the last ten years, there's been little difference among them--that is, the ones that started higher declined faster (unfortunately they didn't ask about the New York Times until 2004).

2.  The decline is pretty well approximated by a linear trend.  However, it also seems like there was an additional fall between 2002 and 2004 from which they haven't recovered.  It seems reasonable that some people would have felt they were misled after the Iraq war didn't go as smoothly as promised--and even though government officials were the original source of the misleading information, the "believability" of news outlets would suffer as well.


*In retrospect, I should have just taken the averages on the four-point scale, but for reasons that I have forgotten I started by collapsing the scores into two groups.

Sunday, October 26, 2014

The Missing Right-of-Center Museum Goers

Ross Douthat had a post on "The Missing Right-of-center Media" in which he noted that "outlets that mix political and cultural coverage" had liberal readerships; the other side, which he didn't say explicitly, was that outlets with conservative readerships were pretty much exclusively focused on politics.  His proposed explanation is that "well-educated and well-informed conservatives are often businessmen (note the Wall Street Journal’s near-center position) whose reading interests are more practical and professional, who don’t cultivate a member-of-the-intelligentsia self-image, and who treat their media consumption mostly as a source of information rather than identity."

But political news and opinion is of no more practical value than cultural coverage (arguably less), so his explanation implies that there wouldn't be a significant audience for conservative political media, which there obviously is.   It seems to me that there are two possible explanations.

One is historical:  the "old media" model was to cover a range of topics, and most of the surviving "old media" have become politically liberal (the Wall St. Journal is an exception).  So they have a mostly liberal but ideologically mixed readership (in the case of the WSJ, mostly conservative but mixed)--some who read them mostly for the political coverage, but others who don't agree with their politics but read them for other features.  The "new media" model is to focus on one thing, and a number of conservative outlets have appeared to supply the demand.  They have an overwhelmingly conservative audience.

The other is that it reflects demand:   liberals like cultural coverage more than conservatives do.  Douthat says something like this, although in pejorative terms--the "liberal clerisy" likes to "cultivate a member-of-the-intelligentsia self-image."  Of course, if there is a difference of this kind, you could explain it in a way that's favorable to liberals, or take the sensible course and say that you don't know why it exists.

There are a few surveys that have asked about interest in "cultural" things.  One of them is a Pew survey from 2005, which asked people whether they had visited any of the following in the previous 12 months:  an art museum; a science or technology museum; a zoo or aquarium; a planetarium; a natural history museum; a public library.  Unfortunately, the only questions on political views are ideological self-rating (liberal... conservative) and party identification.  However, liberals are more likely to say that they have visited each one of those institutions.  The differences for science and technology museums are not statistically significant, but all of the others are, most by a comfortable margin.  The differences remain after controlling for education.  There is little or no difference by party identification.
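
For what it's worth, the check for education can be done with a simple logistic regression.  This is only a sketch under assumed variable names ('visited_art_museum', 'ideology', 'education'), not the Pew codebook's:

```python
import statsmodels.formula.api as smf

def ideology_effect(df):
    """Logit of visiting (0/1) on ideological self-rating, holding education constant.
    'visited_art_museum', 'ideology', and 'education' are assumed column names."""
    result = smf.logit("visited_art_museum ~ ideology + C(education)", data=df).fit()
    return result.params["ideology"], result.bse["ideology"]
```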

As I've said in several posts, a lot of people don't seem to understand the terms "liberal" and "conservative," or they understand them in non-political ways.  But the relationships are so strong that it seems likely that liberals really are more interested in "culture."

[Data from the Roper Center for Public Opinion Research]

Monday, October 20, 2014

Earthquakes and inflation

In November 2010, an open letter to Ben Bernanke warned that quantitative easing would "risk currency debasement and inflation."  Inflation was running at an annual rate of about 2% then, and is still about 2%.  A recent Bloomberg news story found that none of the signers of the letter said they had changed their mind:  9 stood by the letter, 13 didn't respond, and one had died.  The basic defense (among those who had a more or less coherent response) was that the letter just said that it was a risk, not a certainty.  Cliff Asness, who didn't respond to the Bloomberg reporters but later posted a reply, puts it this way: "if you believe the risk of an earthquake is 10 times normal, but 10 times normal is still not a high probability, it's rational to warn of this risk, even if the chance such devastation occurs is still low and you'll look foolish to some when it, in all likelihood, doesn't happen."

I was struck by the earthquake analogy, which I've seen before in discussions of inflation.  Earthquakes are sudden--you go from nothing to a disaster in a matter of minutes.  Another image that frequently comes up is "playing with fire," which is the same idea--it can suddenly go from apparently safely under control to completely out of control.  Is inflation really like that?

I took annual data on inflation in 40 countries since 1950 (from the OECD) and divided it into six categories:  deflation, low (0-3%), medium (3-6%), fairly high (6-10%), high (10-20%), and very high (20+%).  My question was how often countries had gone from low to high inflation in the space of a year.  There were 574 nation-years with low inflation.  Of those, 5 were followed by a year of fairly high inflation, 3 by a year of high inflation, and none by a year of very high inflation.
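
The classification and the count of "jumps" can be reproduced along these lines (a sketch only--the column names are assumptions, and it treats the data as consecutive annual observations):

```python
import pandas as pd

# One row per country-year, with columns 'country', 'year', and 'inflation' (annual %).
BINS = [-float("inf"), 0, 3, 6, 10, 20, float("inf")]
LABELS = ["deflation", "low", "medium", "fairly high", "high", "very high"]

def transitions_from_low(df: pd.DataFrame) -> pd.Series:
    """For each nation-year with low (0-3%) inflation, tabulate the next year's category."""
    df = df.sort_values(["country", "year"]).copy()
    df["category"] = pd.cut(df["inflation"], bins=BINS, labels=LABELS)
    df["next_category"] = df.groupby("country")["category"].shift(-1)
    return df.loc[df["category"] == "low", "next_category"].value_counts()
```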

The cases of a "jump":

Year 0    Year 1
 2.9%      13.3%    India 1963-4
 3.0%      10.6%    Norway 1969-70
 2.7%      10.5%    Canada 1950-51
 2.7%       7.0%    Sweden 1969-70
 2.7%       6.5%    Finland 1970-1
 3.0%       6.3%    Czech Rep. 2007-8
 2.5%       6.3%    India 1978-9
 2.8%       6.1%    New Zealand 1966-7

Six of those cases occurred in small nations and two in India, which I'm guessing did not have well-developed institutions for economic management.  None occurred in the major economic powers.  France almost made the list, going from 3.1% in 1957--just above the 3% cutoff for "low"--to 15.3% in 1958 (that was the year of the collapse of the 4th Republic).

So if there is a theory implying that quantitative easing created a substantial but not overwhelming probability (say 20% for each year the policy was followed) of a jump in inflation, it's not refuted by the experience of the last four years.  But a theory like that wouldn't fit the behavior of inflation in the past, in which jumps from low to high inflation are rare.


Tuesday, October 14, 2014

College and class

There have been a lot of articles lately about how students from working-class backgrounds are under-represented in American colleges and universities.  Most of them imply that this is to some extent new--that colleges used to be more inclusive.  The Roper organization did a survey of college students in 1949 which sheds some light on this issue.  It asked students about their father's occupation, using a fairly detailed classification.  Other Roper surveys of the general public taken at about the same time used the same classification, so you can compare the fathers to the general public:

                                 Students' Fathers
                         Public    Veterans   Non-vets
Professional               5.2%      12.4%      19.3%
Salaried-executive        10.7%      13.7%      18.5%
Proprietor-other          11.9%      15.3%      18.4%
Salaried-minor            10.4%      12.7%      15.3%
Proprietor-farm            5.1%       3.9%       5.4%

Wages-other               18.2%      11.7%       8.2%
Wages-factory             20.9%       8.3%       4.5%
Wages-farm                 6.1%       0.2%       0.4%

Students from middle-class (especially professional) backgrounds were substantially over-represented, but there was a difference between students who were veterans and those who weren't.   For example, among non-veterans, those whose fathers were professionals outnumbered those whose fathers were factory workers by more than 4:1; among students who were veterans, the ratio was less than 1.5:1.  Apparently the GI Bill of Rights had a big impact.
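
The ratios quoted above come straight from the table; as a quick check:

```python
# Professional vs. factory-worker fathers, using the percentages in the table above.
non_vet_ratio = 19.3 / 4.5   # about 4.3 to 1 among non-veterans
vet_ratio = 12.4 / 8.3       # about 1.5 to 1 among veterans
print(round(non_vet_ratio, 1), round(vet_ratio, 1))
```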

[Data from the Roper Center for Public Opinion Research]