Misinformation and Motivated Reasoning: Responses to Economic News in a Politicized Environment

Brian F. Schaffner and Cameron Roche
University of Massachusetts Amherst

Accepted for publication at Public Opinion Quarterly

Abstract

Public opinion scholars have recently focused on understanding why surveys report such high levels of misinformation among otherwise knowledgeable and engaged partisans. In this paper, we take advantage of a natural experiment involving the October 2012 jobs report announcement to gain a more complete understanding of how individuals’ beliefs are influenced by new salient information in a politicized environment. We examine reactions by Republicans and Democrats on a factual question about the unemployment rate immediately before and after the announcement that unemployment had fallen below 8% for the first time during the Obama presidency. Using a variety of techniques, including response latency measures, we conclude that partisans did react to the jobs report by engaging in motivated reasoning, providing a clearer understanding of why individuals respond to factual questions in vastly different ways.

Word Count: 6,485


Introduction

Information and knowledge are critical in driving political participation (Arceneaux and Nickerson, 2009; Nickerson, 2008; Delli Carpini and Keeter, 2007), opinion formation (Kinder, 2006; Lau and Redlawsk, 2001; Kuklinski et al., 2000), ideological coherence (Converse, 1964), and “correct voting” (Lau and Redlawsk, 1997). Yet Americans are often apathetic (Rosenberg, 1954) and uninformed (Delli Carpini and Keeter, 2007; Bartels, 1996) about politics. Worse still, large portions of the American electorate are either uninformed or misinformed regarding fundamental facts about political issues (e.g., Hochschild and Einstein, 2015), and they often resist correcting their misinformation, even when prompted repeatedly to do so (Nyhan and Reifler, 2010; Berinsky, 2012). While several studies have demonstrated the effects of misinformation and motivated reasoning in controlled experimental settings (Kuklinski et al., 2000; Bullock, 2009; Nyhan and Reifler, 2010; Berinsky, 2012), questions still linger about the extent to which individuals process information in a biased way in a real political environment. It is also unclear whether individuals respond to survey questions with misinformation because they truly believe that information to be true, or because they are intentionally providing incorrect responses that bolster their partisan affiliations (e.g., Bullock et al., 2015).

This paper builds on a growing body of research on biased information processing and examines how motivated reasoning affected responses to factual questions about unemployment conditions in reaction to the salient release of a jobs report during the height of a presidential campaign. We use the 2010-2012 Cooperative Congressional Election Study (CCES) panel survey to assess perceptions of the unemployment rate by more than 11,000 respondents in the two weeks immediately surrounding the announcement of a significant decline in unemployment a month before the 2012 presidential election. Respondents were asked to provide their own estimate of the actual unemployment rate. Response timings were also recorded for each participant, allowing us to measure the cognitive effort expended by respondents in answering this question. Using entropy balancing, we compare responses before and after the “treatment” of the jobs announcement to examine how partisans incorporated this new information into their responses to factual questions about the unemployment rate.

The results reveal a notable divergence in how Democrats and Republicans reacted to the news. Responses among Democrats reveal a uniform updating of information in the direction of greater accuracy. Republicans, however, demonstrate a more mixed reaction to the report, with some Republicans becoming more accurate in their estimates, but an even larger share actually estimating unemployment to be even higher after the report that documented a decline in unemployment. An analysis of response latency timings indicates that Republicans expended significantly more cognitive resources, and those who spent longer answering the question generally provided less accurate responses. These patterns are consistent with the notion that motivated reasoning was largely driving Republicans to provide estimates of the unemployment rate that were much higher than the true rate, either as a result of counter-arguing or expressive responding.

Partisanship and Information Processing

Political scientists have, during the past two decades, dedicated significant attention to understanding why individuals tend to offer vastly different responses to factual questions about the political, economic, and social state of the world (e.g., Kuklinski et al., 2000; Bartels, 2002; Bullock, 2009; Nyhan and Reifler, 2010; Berinsky, 2012). These studies have documented that highly engaged and knowledgeable partisans will frequently provide incorrect information about factual political questions such as whether the Affordable Care Act included a provision for “death panels,” whether the Bush Administration knew the 9/11 attacks were going to happen, or even about objective economic conditions.

A prominent explanation for these patterns focuses on the role of motivated reasoning in leading individuals to process new information in biased ways. Motivated reasoning is a process in which an individual makes an active, cognitive effort to “arrive at a particular conclusion” (Kunda, 1990).


When processing new information, individuals may be influenced by at least two goals – the desire to have accurate information and the desire to acquire information that confirms one’s prior beliefs or attitudes (Taber and Lodge, 2006). In the political realm, partisan (or directional) goals often win out over accuracy goals (Druckman and Bolsen, 2011; Campbell et al., 1960; Zaller, 1992; Bullock, 2009; Petersen et al., 2013), a pattern which can help explain why even knowledgeable partisans frequently provide incorrect factual information about politics or policies. This generally happens because many partisans counter-argue or entirely reject new information that challenges their pre-existing beliefs (Taber and Lodge, 2006). For example, Jerit and Barabas (2012) find that partisans were more likely to learn information if it had positive implications for their party, but were impervious to the new information if the implications for their favored party were negative.

Notably, while individuals are motivated to process information in biased ways, they may also face competing motivations when asked to reproduce that information in the context of a survey. This latter point makes it particularly difficult to discern whether individuals who process information with directional goals truly believe the misinformation that helps support those goals. For example, many conservatives may understand that Barack Obama was, in fact, born in the United States but choose to still express views that he is not a U.S. citizen as a way of bolstering their opposition to him. Bullock et al. (2015) find evidence of this “expressive responding” in a series of experiments in which respondents were offered financial incentives for providing a correct response to a factual question about politics. Respondents who were offered the financial incentives demonstrated much less partisan-motivated misinformation on factual questions. Such “expressive responding” may help to explain why some individuals actually strengthen their misperceptions when presented with the correct information (Nyhan and Reifler, 2010; Gottfried et al., 2013).

To gain more insight into how respondents approach their responses to survey questions, previous studies have made use of response latency measures (Mulligan et al., 2003).


As an implicit measure of the time that it takes an individual to respond to a question, response latency is an indicator of processing effort (Petersen et al., 2013; Huckfeldt et al., 2005; Huckfeldt and Sprague, 2000). Motivated reasoning requires more effort than simple memory recall because it requires that respondents consider information in relation to their own directional goals (Glas, 2015; Petersen et al., 2013). This is especially true when people are asked about information that runs counter to their directional goals; in this case, individuals “spend more time counterarguing and dismissing evidence inconsistent with prior opinions, regardless of their objective accuracy” (Druckman, 2012). Individuals may also use more cognitive effort in responding to a survey question when they engage in the related process of “expressive responding.” In this situation, respondents are knowingly giving an incorrect response to a factual question as a way of supporting their partisan side. However, deciding to produce an inaccurate response and then giving that response generally takes more effort (time) than simply recalling the information requested (Walczyk et al., 2003). Thus, whether respondents are engaged in motivated reasoning or expressive responding (or both), the increased cognitive effort required by these processes should be evident in longer response times.

The 2012 October Jobs Report as a Natural Experiment

During the 2012 presidential election campaign, the state of the national economic recovery was a matter of significant debate between Democrats and Republicans. Many Democrats, including President Obama, argued that the recovery was beginning to take hold as there were significant signs of improvement in economic indicators such as the unemployment rate. Republicans, championed by their presidential nominee Mitt Romney, asserted that the improvement was marginal at best and that economic indicators failed to support the claim that a strong recovery was happening.

It was because of the heavily debated nature of the economic situation in 2012 and the primacy of that topic on the minds of voters that the October announcement regarding the unemployment rate quickly drew substantial media attention.


The New York Times report on the announcement underscored its political significance:

The jobless rate abruptly dropped in September to its lowest level since the month President Obama took office, indicating a steadier recovery than previously thought and delivering another jolt to the presidential campaign. The improvement lent ballast to Mr. Obama’s case that the economy is on the mend and threatened the central argument of Mitt Romney’s candidacy, that Mr. Obama’s failed stewardship is reason enough to replace him.1

The monthly jobs report is produced by the Bureau of Labor Statistics – a non-partisan government agency. As such, the information contained in that report should be viewed by most as coming from a credible source that lacks an ideological point of view. Yet the information had clear political implications, and the timing of the report (and the way in which it was reported on by the news media) served to further emphasize those political implications. This is especially true since several conservative elites and news outlets questioned the integrity of the report upon its release (Parsons, 2013). Former General Electric CEO Jack Welch famously tweeted upon release of the report: “Unbelievable jobs numbers..these Chicago guys will do anything..can’t debate so change numbers.” This led outlets such as Fox News to raise questions about the accuracy of the jobs report during its coverage while other news outlets covered the controversy about the veracity of the report (Weiner, 2012).

Ultimately, the jobs announcement provides a unique instance in which most Americans were “treated” with the release of a politically salient piece of economic information. Additionally, this treatment originated from a government report covered widely by the news media; thus, the release of this report provides a unique opportunity to study how partisans engage with new information in a politically-charged environment.

1. Shaila Dewan and Mark Landler, “Drop in Jobless Figure Gives Jolt to Race for President,” New York Times, 5 October 2012.


Figure 1: The Volume of Google Searches for the Terms “Jobs Report” and “Unemployment Rate” (1/1/2004 - 2/2/2015)

[Figure: line graph of relative Google search interest (0-100) for the terms “jobs report” and “unemployment rate,” 2004-2015, with the October 2012 jobs report release marked.]

Note: The graph shows the relative search frequency for the terms “unemployment rate” and “jobs report.” A value of 100 for a term means that during that week the term was searched more than at any other point during the entire period featured in the graph. Data Source: Google Trends (www.google.com/trends).

To demonstrate the importance of the jobs report release on the news cycle at the time, Figure 1 shows a week-by-week picture of the relative volume of searches on Google for the terms “jobs report” and “unemployment rate” from 2004 through 2014. Notably, the week following the release of the jobs report saw the highest volume of searches for the term “unemployment rate” compared to any other week extending back to 2004. Searches for “jobs report” likewise spiked during this period. Thus, there is clear evidence that this jobs report was especially salient and helped to drive interest toward the unemployment rate in the weeks following the report’s release.

The intensity of news coverage is also clear from a search of the Google news archives for the term “jobs report.”

The jobs report released in early October 2012 received much more coverage than any other jobs report released during that same year – over 4,000 news articles were archived by Google on that report. The next most heavily covered report was in January 2012, with just under 3,200 articles. Thus, the jobs report drew significant attention from the news media and it was covered in a way that helped to emphasize the political implications of the report.

Expectations

Both before and after the release of the jobs report, the CCES panel survey asked respondents to indicate what they thought the unemployment rate was. We expect that Democrats became more likely to hold accurate information about the unemployment rate following the release of the jobs report. From a motivated reasoning account, doing so satisfies both of a Democrat’s potential motivations: 1) to hold the correct information and 2) to have information which is congruent with their political preferences. Republicans, however, may have processed the jobs report information in a biased way. Specifically, the fact that the unemployment report had negative implications for Republicans (and positive ones for the incumbent Democratic president) may have led some Republicans to dismiss or counterargue the new information. Accordingly, in the case of Republicans, we might expect to see no change in their beliefs about the unemployment rate, or, if Nyhan and Reifler’s (2010) assertions are correct, we would expect to see Republicans become even more likely to provide misinformation after the report’s release. We might also expect to see more misinformation from Republicans if they were more likely to engage in “expressive responding” as a response to the unemployment report (Bullock et al., 2015).

Two unique features of these data allow for greater insights into the causal mechanisms behind reactions to the report. First, rather than simply falling into a categorical binary – correct or incorrect – we can examine the extent to which a respondent’s estimate departs from the actual unemployment rate.


This provides us with more information about the types of answers individuals are giving, particularly since some answers are farther from the truth than others. The second measure that we bring to bear on this question is response times. Specifically, we measured the amount of time that each respondent spent answering each question on the survey. Response latency timers are frequently used as measures of cognitive effort (Mulligan et al., 2003).

We have three expectations in this regard. First, if respondents are simply responding with information that they believe to be factually accurate, then the response times for Democrats and Republicans should be similar. After all, in the absence of countervailing motivations, it should not take partisans from one side longer to recall information than partisans from the other side. However, if Republicans are engaged in motivated reasoning and are counter-arguing the economic news, then it should take them longer to respond to the question than Democrats, who do not need to engage in such counter-arguing.

Second, we also compare how response latency times change after the report. If respondents are simply recalling factual information in response to the question, then their timings should not change after the report. However, if responses are being driven by motivated reasoning, then we would expect to find an effect of the jobs report on response times. For Democrats, we expect response times to remain stable after the report. This should happen because the report is clearly favorable from their party’s perspective, so they do not need to counter-argue the employment statistics to satisfy their directional goals. For Republicans, we expect response times to increase after the report. This is because Republicans will need to expend greater cognitive effort when faced with the factual unemployment question after the report as they either counter-argue the information in pursuit of their directional goals or make a decision to provide an inaccurate expressive response to the question.

Third, we expect response latency times to be predictive of the unemployment rate estimate given by respondents, especially for Republicans.


In particular, since we expect that Republicans who take longer to answer the question are engaged in counter-arguing the economic situation or expressive responding, the longer they take, the more likely it will be that they over-estimate the unemployment rate (as a result of their counter-arguing or expressive responding).

Methodology

We use the 2010-2012 CCES panel survey to test our expectations (Ansolabehere and Schaffner, 2014a).2 We focus specifically on the 2012 pre-election wave of the survey. In 2012, respondents were recruited from a pool of more than 50,000 individuals who had participated in the 2010 CCES. Those individuals selected for re-interview were solicited with an e-mail that asked them to “share their opinions in a new YouGov survey.” This wave of the survey went into the field on October 2nd and remained in the field for much of October.3 However, more than three-fourths of the interviews were completed by October 15th. The jobs report was released at 8:30 am (Eastern Time) on October 5th. At that point, 7,294 respondents had already taken the survey; 11,706 would take the survey after the jobs report was announced. We confine our analysis to those individuals who took the survey before October 16th in order to limit the scope of our inquiry to individuals interviewed within 10 days of the jobs report. Ultimately, we included 11,594 respondents from this period in our analysis.4

Figure 2 shows a density plot of the timing of when these respondents finished the survey. The figure shows that a large proportion of respondents completed the survey during the three days before the jobs report announcement on October 5th. This was the initial launch period for the survey when the first large wave of recruitment appeals went out to panelists.

2. The panel survey data are available for download at http://dx.doi.org/10.7910/DVN/24416.
3. The CCES survey was conducted online by YouGov using a matched sample design. See Ansolabehere and Schaffner (2014b) for more information on the validity of this survey methodology. YouGov attempted to re-interview 56,626 individuals who took the 2010 CCES. Interviews were successfully completed with 29,182 individuals, for a retention rate of 53%. This group was then matched down to a final nationally representative sample of 19,000 respondents. Interviews were conducted from October 2nd through November 5th, 2012.
4. This is our sample size after dropping (1) respondents who did not identify with one of the two parties, (2) respondents for whom we had missing data on key co-variates, and (3) respondents for whom response latency times were trimmed, as described below.


Figure 2: The Distribution of Interview Completion Dates for CCES Panel Study (10/2/2012 - 10/15/2012)

[Figure: histogram of interview end times, 10/2/12 through 10/15/12, with the jobs announcement marked.]

Note: Graphic includes 11,594 respondents to pre-election wave of the CCES Panel Study.

During the next four days (October 5th to October 8th), only a few hundred respondents completed the survey. However, a second wave of appeals to panelists generated more responses from October 9th through October 15th.

The panel study asked respondents to answer questions that tapped their knowledge of current affairs. The first page asked the respondent to indicate which news sources he or she used. The respondent was then asked whether the economy had improved or gotten worse during the previous year. The next question was the one that we focus on in this study:

The unemployment rate is the percent of people actively searching for work but not presently employed. Since World War II it has ranged from a low of 2 percent to a high of 12 percent. What is your best guess about the unemployment rate in the United States today? Even if you are uncertain, please provide us with your best estimate of the percent of people seeking work but currently without a job in the United States.

Figure 3: The Distribution of Responses to the Unemployment Rate Question

[Figure: histogram of responses, unemployment rate estimates ranging from 0 to 100 percent.]

Note: Graphic includes distribution of responses to question asking individuals to provide the current national unemployment rate. N = 11,594 respondents to pre-election wave of the CCES Panel Study.

Responses to this question ranged from 0 to 100 and are shown in Figure 3. Note that most respondents offered a response that was relatively close to the actual unemployment rate; about half estimated the rate to be somewhere between 7.6% and 8.2% (recall that the actual rate was 8.1% before the report and 7.8% after). The mean estimate, however, was 11.42%, driven by some very large outlier values. To reduce the effect of extreme (and unrealistic) outliers, we exclude individuals who provided an estimate above 20% (about 4% of the sample).5 After excluding these outliers, the average guess by a Republican respondent was 9.8%, compared to 8.5% for the average Democrat.

5. The patterns we uncover do not change notably when we keep all respondents in the analysis.


In addition to examining the responses to this question, we also utilize page timings to provide insight into how respondents processed information when they answered this question. Specifically, YouGov measures the amount of time a respondent spends on each page of the survey down to thousandths of a second. The question about the unemployment rate resided on its own page, so the amount of time a respondent spent on that page is generally equivalent to how long he or she spent reading and answering that question. In taking this approach, we follow a long tradition of using response latency measures to understand how respondents are processing information when answering a question (Mulligan et al., 2003).

The median page time for the unemployment rate question was 20.064 seconds. However, the page timings had a very long tail; this is because some respondents leave the survey and come back to it later, and those very long page timings reflect that issue (Ansolabehere and Schaffner, 2015). Thus, following an approach taken by previous studies (Ratcliff, 1993), we dropped respondents who registered page timings beyond the 95th percentile value. This means that, for the unemployment rate question, any individuals with timings longer than 54.82 seconds were dropped from the analysis. However, in the appendix we present a robustness check using two other approaches for dealing with outlier times (logging and ranking) and find results similar to what we present in the body of the text.

In utilizing page timings as a measure of response latency, it is also important to control for some measure of each individual’s baseline response time – that is, the speed at which each respondent generally answers survey questions. To calculate this, we take a selection of page timings for eight questions asked in relatively close proximity in the questionnaire to those that we analyze in this paper.6 From this selection of page timings we calculate a mean value for each respondent. We use this value to control for each individual’s baseline response time in the analysis presented below.

6. See the appendix for more information about these baseline page timings.
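To make the page-timing procedure concrete, the sketch below shows one way to implement the 95th-percentile trim and the baseline response-time measure. It assumes the timings sit in a pandas DataFrame with one row per respondent; the file and column names are hypothetical, not the actual CCES variable names.

```python
import pandas as pd

# One row per respondent; file and column names are hypothetical.
df = pd.read_csv("cces_2012_panel.csv")

# Trim respondents whose time on the unemployment-rate page exceeds the
# 95th percentile (very long times typically reflect respondents who left
# the survey and returned later).
cutoff = df["unemp_page_seconds"].quantile(0.95)
df = df[df["unemp_page_seconds"] <= cutoff].copy()

# Baseline response speed: the mean page time across eight single-choice
# questions asked near the items analyzed in the paper.
baseline_cols = [f"baseline_q{i}_seconds" for i in range(1, 9)]
df["baseline_seconds"] = df[baseline_cols].mean(axis=1)
```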


Treatment

To determine the effect of the jobs report on how respondents answered the factual question about the unemployment rate, we compare individuals completing the survey before and after the release of the report. YouGov records the exact time at which respondents initiate and complete a survey. We use the time and date at which a respondent completed the survey to determine whether that respondent could have heard news of the jobs report before answering the question. Specifically, respondents who completed the survey before 8:30am (Eastern Standard Time) on Friday, October 5th were coded as taking the survey before the announcement; those completing the survey after that date/time were coded as taking the survey post-announcement.7

While the jobs report was an exogenous treatment, it was not randomly assigned to respondents. Respondents were all invited to take the survey prior to the release of the jobs report and they controlled when they completed the survey. As noted, many respondents completed the survey when they were first invited to do so, but others did not respond immediately to the first appeal. The potential confound is that the timing of when a respondent completed the survey may also be correlated with her knowledge of politics. Specifically, more politically engaged respondents might have been more likely to complete the questionnaire immediately upon being invited, while those with less interest in politics may have required more solicitations before responding.

The first two columns of entries in Table 1 compare the composition of the pre- and post-jobs report samples on a number of characteristics that tend to be correlated with political knowledge (as well as one direct measure of knowledge). Notably, the pre-report sample does have the characteristics of a more politically engaged group. Those answering the survey before the jobs report were less likely to be racial and ethnic minorities, more educated, more male, more likely to report that they follow politics “most of the time,” and older.

7. Of course, the identification in this case is not fully precise. After all, respondents who took the survey within a few minutes or hours of the announcement may have had little opportunity to learn of the news. However, the treatment effects are persistent long after the announcement, indicating that a fully precise identification of timing is not crucial for the substantive results.
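Coding the “treatment” itself is a simple timestamp comparison; a sketch under the assumptions that the DataFrame df from the earlier sketch is available, that completion times are stored in a (hypothetical) column called interview_end, and that those times are recorded in UTC:

```python
import pandas as pd

# The report was released at 8:30 a.m. Eastern on October 5, 2012.
announcement = pd.Timestamp("2012-10-05 08:30", tz="US/Eastern")

# Convert completion timestamps (assumed UTC) to Eastern time and flag
# respondents who finished after the announcement.
end_times = pd.to_datetime(df["interview_end"], utc=True).dt.tz_convert("US/Eastern")
df["post_report"] = (end_times >= announcement).astype(int)
```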


Interestingly, the samples did not appear to differ significantly on ideological or partisan grounds. However, individuals who responded to the survey before the jobs report had an average baseline response time that was about .4 seconds longer than those who responded after the jobs report.

Due to the imbalance of the pre- and post-report samples, we use entropy balancing weights to impose balance on the pre- and post-report groups in order to account for factors that would be correlated with political knowledge. Entropy balancing is a technique that re-weights the sample to ensure that the control group’s characteristics are equivalent to the treatment group on the specified co-variates (Hainmueller, 2011). We implemented the entropy balancing routine to ensure that the pre- and post-report groups were balanced on the mean, variance, and skewness of each specified variable. We use entropy balancing because it is more efficient than matching techniques, in that it does not discard observations. However, in the Appendix we show that we get similar results when we use coarsened exact matching.

Fortunately, the large-N nature of this dataset makes it possible for us to balance on a large number of co-variates. Specifically, we balanced on the following variables – race, gender, ideology, education, interest in politics, partisanship, age, and whether the respondent is a validated voter.8 Additionally, we leverage the panel nature of the dataset to match on a respondent’s level of political knowledge when they took the 2010 wave of the survey. Specifically, the variable we use is the number of questions the respondent answered correctly when asked seven basic questions about politics.9 Finally, we also balanced the sample on the respondent’s baseline response time (described above and in the Appendix). After conducting the balancing, we achieved a high degree of balance on all of the co-variates.

8. The CCES panel survey was matched to voter files to validate whether respondents were actually confirmed as voters.
9. Those questions were: (1) which party had a majority in the House of Representatives and (2) the Senate, and which party the respondent’s (3) member of Congress, (4) Governor, and (5 and 6) Senators affiliated with. The 7th knowledge item was whether the respondent placed the Democratic Party as more liberal on the ideological scale as compared with the Republican Party.
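For readers who want to see the reweighting mechanics, the following is a minimal, illustrative sketch of entropy balancing on covariate means using the convex dual formulation described by Hainmueller (2011); it is not the authors' actual code. The paper also balances variances and skewness, which can be approximated by appending squared and cubed (centered) covariate columns. The covariate matrices are assumed to be plain numpy arrays with hypothetical names.

```python
import numpy as np
from scipy.optimize import minimize

def entropy_balance(X_control, target_means, base_weights=None):
    """Find weights for the control (pre-report) group whose weighted
    covariate means equal target_means (the treatment-group means), staying
    as close as possible, in entropy terms, to the base weights."""
    n, k = X_control.shape
    q = np.full(n, 1.0 / n) if base_weights is None else base_weights / base_weights.sum()
    Xc = X_control - target_means  # center covariates at the target moments

    def dual(lam):
        # Convex dual of the entropy-balancing problem; its gradient is the
        # gap between the weighted control-group means and the targets.
        return np.log(q @ np.exp(Xc @ lam))

    lam_hat = minimize(dual, x0=np.zeros(k), method="BFGS").x
    w = q * np.exp(Xc @ lam_hat)
    return w / w.sum()

# Usage sketch (X_pre and X_post are hypothetical covariate matrices):
# w_pre = entropy_balance(X_pre, X_post.mean(axis=0))
```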


Table 1: Comparison Between Respondents Completing Survey Before and After Jobs Report Announcement

                                      Before Balancing            After Balancing
                                    Pre-Report  Post-Report    Pre-Report  Post-Report
Black                                 0.0416      0.137          0.0982      0.0982
Latino                                0.0384      0.122          0.0880      0.0880
High School                           0.251       0.438          0.257       0.284
College                               0.412       0.262          0.405       0.387
Male                                  0.547       0.503          0.515       0.515
Very Liberal                          0.0476      0.0584         0.0937      0.0950
Liberal                               0.195       0.205          0.193       0.188
Moderate                              0.336       0.325          0.246       0.254
Conservative                          0.327       0.296          0.285       0.280
Very Conservative                     0.0801      0.0780         0.182       0.183
Follow politics most of the time      0.657       0.527          0.721       0.721
Strong Partisan                       0.431       0.460          0.536       0.536
Under 40                              0.161       0.350          0.136       0.147
Over 60                               0.465       0.263          0.408       0.389
Voter                                 0.789       0.704          0.812       0.812
Correct answers in 2010               6.042       5.397          6.003       6.003
Baseline Page Timing                  12.20       11.82          12.31       12.31

Note: Entries are the proportion of each group taking on each characteristic except the last two rows, which are the average number of factual questions (out of 7) answered correctly in 2010 and the average time it took respondents to answer 8 baseline questions in the survey. The first two columns of entries are calculated using the post-stratification weights. The second two columns of entries are calculated using the weights from the entropy balancing algorithm. N = 5,916 respondents in pre-report group and 5,713 in post-report group.


The last two columns in Table 1 compare the composition of the pre- and post-report samples after using the entropy balancing weights. Note that on every measure, the difference between the pre- and post-report samples is nearly zero.

In order to ensure that there are no further confounding differences between pre- and post-report respondents that may affect how they answer questions tapping political knowledge, we also analyze the answers to three items as placebo tests. Specifically, we use questions asking respondents which party controlled the House of Representatives and the U.S. Senate in 2012. We also use a knowledge question that is less partisan in nature; that question asks individuals whether their state had gained, lost, or seen no change in its number of congressional districts with the 2012 reapportionment. Assuming we have properly imposed balance between the pre- and post-treatment groups, we expect that while the jobs announcement should have a significant effect on how respondents answer the question about the unemployment rate, it should not affect responses to these other knowledge questions. If we do find significant differences on these general knowledge questions, then it would raise concerns that we are not fully accounting for differences between the two groups of respondents.10

In the analyses that follow, we compare Democratic respondents to Republican respondents. In doing so, we include as partisans both individuals who identify with the party as well as independents who lean towards one of the parties. We exclude the 11% of respondents (N = 1,491) who were “true independents” from our analysis. In the Appendix, we present results from a model demonstrating that the party-based effects are robust when controlling for other variables in a multivariate model.

Results

Table 2 presents the treatment effects for the unemployment rate question and the three general knowledge questions which we treat as placebo tests.

10. 81% of respondents correctly identified the party that controlled the U.S. House in 2012, 76% answered the question about party control of the Senate correctly, and 38% provided a correct response to the reapportionment question.
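Under the assumption that the entropy-balancing weights are carried in a column called w, the treatment effects reported in Table 2 can be reproduced in spirit as a weighted regression of the unemployment estimate on a post-report indicator within each partisan group. The sketch below uses statsmodels; the column names are hypothetical and this is one plausible implementation rather than the authors' own code.

```python
import statsmodels.api as sm

def treatment_effect(group_df):
    """Weighted difference in mean unemployment estimates, post- vs. pre-report."""
    X = sm.add_constant(group_df["post_report"].astype(float))
    fit = sm.WLS(group_df["unemp_guess"], X, weights=group_df["w"]).fit()
    return fit.params["post_report"], fit.bse["post_report"]

# Apply within each partisan group (e.g., strong Democrats, strong Republicans):
# for group, g in df.groupby("party_strength"):
#     effect, se = treatment_effect(g)
```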


Table 2: Treatment Effects for Democratic and Republican Respondents

                               Unemployment              Placebo Tests
                               Rate Guess        Senate     House     Apportionment
Strong Democrats                 -0.367*         -0.046     -0.041      0.016
(N = 3,232)                      (0.164)         (0.024)    (0.022)    (0.023)
Weak/Leaning Democrats           -0.524***       -0.061*    -0.071**   -0.036
(N = 2,307)                      (0.144)         (0.029)    (0.024)    (0.026)
Weak/Leaning Republicans          0.325**         0.025     -0.013     -0.042
(N = 3,022)                      (0.124)         (0.020)    (0.019)    (0.028)
Strong Republicans                0.355**         0.012     -0.012     -0.020
(N = 3,068)                      (0.124)         (0.021)    (0.020)    (0.024)

Note: *p<.05, **p<.01, ***p<.001.

We calculate the treatment effects separately for four different partisan groups – strong Democrats, weak/leaning Democrats, weak/leaning Republicans, and strong Republicans. Directional goals may be more important for strong partisans than they are for weak or leaning partisans since strong partisans feel a stronger sense of identity with their party and thus are more likely to want to reason in a way that supports that identity.

The first column of treatment effects in Table 2 corresponds to respondents’ estimates of the actual unemployment rate. Since both Democrats and Republicans provided estimates that were, on average, significantly higher than the actual unemployment rate, a negative effect in this column would indicate estimates that moved closer to being correct on average. This is what we find for Democratic respondents. After the jobs report, strong Democrats provided an estimate of the unemployment rate that was more than one-third of a point lower than what they estimated before the report. For weak and leaning Democrats, the effect is somewhat stronger – over half a point reduction. Republicans responded in the opposite direction from Democrats after the report. Both strong and weak/leaning Republicans provided estimates of the unemployment rate that were about one-third of a point higher after the report. Thus, after the release of the jobs report, Democrats and Republicans were providing estimates of the unemployment rate that were actually substantially farther apart than what they had been providing before the report’s release. Democratic estimates became, on average, more accurate while Republican estimates were less accurate on average.

The result for the unemployment rate question supports our expectations – Democrats responded to the unemployment report by updating their beliefs in a way that satisfied both accuracy and directional goals. Republicans reacted to the information by adjusting their expressed beliefs in the opposite direction, thereby strengthening their misinformation. Two points are worth making here. First, the magnitude of the treatment effects is relatively similar for Democrats and Republicans. Thus, Democrats moved about the same distance in lowering their estimate of the unemployment rate as Republicans did in raising their estimate.


Second, the increased estimates given by Republicans are particularly noteworthy given that the jobs report documented a .3 percentage point decrease in the actual unemployment rate. When that decline is factored in with the treatment effects shown in Table 2, Republicans provided estimates of the unemployment rate that were about two-thirds of a point less accurate after the report than they were before the report.

Of course, the validity of the treatment effects presented in Table 2 relies on the assumption that, after matching, the respondents answering the survey before the jobs announcement were equivalent to those answering after the report. While we matched on a wide array of factors that we believe would be associated with knowledge of the unemployment rate, we test the strength of this assumption with three placebo tests. Specifically, in the final three columns of Table 2, we present the treatment effects for three knowledge questions that are unrelated to the jobs report – which party controls the House, which party controls the Senate, and whether a respondent’s state gained or lost districts after reapportionment. A positive treatment effect for these columns would mean that respondents taking the survey after the jobs announcement were more likely to answer those questions correctly, while negative treatment effects would indicate that they were less likely to answer correctly.

The results in the last three columns indicate that there were relatively small differences in knowledge on these placebo questions, and in the only case where those differences were statistically significant, they indicate that any remaining imbalance between the pre-report and post-report groups would be biased against our expected result. Specifically, for weak and leaning Democrats, respondents answering the survey after the jobs report were slightly less likely to get the party control questions correct. This difference is opposite of the effect for the unemployment question, where Democrats were more likely to answer correctly after the jobs report. All other differences on this side of the table lacked statistical significance and were relatively small.

While Table 2 examines the change in the point estimates for each question after the jobs report was released, those point estimates may be masking important patterns in the distribution of responses for the unemployment rate question. Figure 4 compares the distribution of responses for this question among Democrats and Republicans answering before and after the jobs report. The vertical reference line in each plot shows where the actual unemployment rate was after the jobs report (7.8%). On the left side of the figure is the plot for Democratic respondents. The distribution takes a similar shape both before and after the jobs announcement – the center of the distribution simply shifts to the left as Democrats provided lower estimates of the unemployment rate after the report. Indeed, note that the modal response for Democrats responding after the report was 7.8% – 42% of Democrats gave this (correct) rate as their response and a full two-thirds of Democrats came within .2 points of the actual rate.

Republicans, on the other hand, demonstrated more variance in their responses before the report, and that variance only increased after the report. Indeed, there seems to be more heterogeneity in how Republicans responded to the report. First, we observe a slight shift downward in the mode of the distribution, indicating that some Republicans did respond to the report by changing their estimate of the unemployment rate downward (about 40% of Republicans provided a response of 7.8% or 8% after the report). However, there was also a reduction in the proportion of Republicans near the modal response category and an increasing number of Republicans in the upper tail of the distribution. In other words, Republican responses became more dispersed after the jobs report.


Figure 4: Distribution of Estimates of Unemployment Rate

[Figure: kernel density plots of estimated unemployment rates (0-20%), pre-report versus post-report, shown in separate panels for Democrats and Republicans.]

Note: Graphic shows kernel density plots for Democratic and Republican respondents according to whether they answered the survey before or after the report. N = 5,526 Democrats and 6,068 Republicans. Vertical reference lines drawn at 7.8%.

Whereas fewer than 25% of Republicans offered an estimate between 10% and 15% before the jobs report, more than one-third of Republicans did so in the wake of the report. Thus, some Republicans adjusted their estimates to be more accurate after the report while other Republicans adjusted their estimates to be less accurate.

Analysis of Latent Response Timings

So far, we have demonstrated that Republicans and Democrats reacted to the new information from the jobs report in distinct ways. The latent response timings provide additional insight into the cognitive processes that were driving these responses. Figure 5 shows the average amount of time it took Democrats and Republicans to answer three different pages of the survey before and after the jobs report announcement. The analysis controls for an individual’s average baseline response time – accordingly, the estimates in the figures are the predicted page timings for a respondent who had an average baseline response time.

The first panel in the figure presents the average page timings for the unemployment rate question. The first important pattern to note is that Democratic respondents answered the question much more quickly than Republican respondents. This difference is consistent with how we would expect partisans to approach these questions if they are engaged in motivated reasoning. For Democrats, the same response would satisfy accuracy and directional reasoning goals; thus there is no need to counter-argue or consider an expressive response – all that is needed is simple recall of the information. This is consistent with faster response times. Republican respondents, on the other hand, face conflicting considerations, since satisfying an accuracy goal would conflict with their directional goals. Adjudicating between these two goals should take more cognitive effort, which is reflected in the longer response times.
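The baseline adjustment described above can be implemented as a regression of the page timing on party, survey period, their interaction, and the respondent's baseline speed, with predictions made at the average baseline value. The sketch below uses hypothetical column names and is one plausible specification, not necessarily the authors' exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Page timing as a function of party, period, their interaction, and the
# respondent's baseline response speed.
model = smf.ols(
    "unemp_page_seconds ~ republican * post_report + baseline_seconds",
    data=df,
).fit()

# Predicted timing for each party/period cell, evaluated at the average
# baseline speed (mirrors how the Figure 5 estimates are described).
cells = pd.DataFrame(
    {"republican": [0, 0, 1, 1], "post_report": [0, 1, 0, 1],
     "baseline_seconds": df["baseline_seconds"].mean()}
)
cells["predicted_seconds"] = model.predict(cells)
```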

Figure 5: Average Time Spent on Questions

[Figure: three panels – Unemployment Rate Question, Party Control Questions, and Apportionment Question – showing average page timings in seconds for Republicans and Democrats, pre-report and post-report.]

Note: Graphic shows average response times on questions controlling for each respondent’s baseline response time. N = 11,629 respondents to the pre-election wave of the CCES Panel Study. Vertical bars represent 95% confidence intervals.

A second pattern to observe in Figure 5 is whether page timings were influenced by the introduction of the jobs report. We would expect this to happen if the new information provided by the jobs report made cognitive processing easier or harder for either group of respondents. We observe no significant change in the average processing time for Democrats, but there is a statistically significant increase in the average time for Republicans. A Republican with an average baseline response time took .73 seconds longer to answer this question in the post-report period compared to before the report’s release (p = .012).

The second panel in Figure 5 shows the timing for the page asking respondents about party control in the House and Senate as well as in the lower and upper chambers of their state’s legislature, and the third panel shows the page timing for the question asking respondents whether their state had lost or gained congressional districts from the most recent reapportionment. Unlike with the unemployment question, we would expect negligible differences between the parties in response times on these two pages. Thus, we include these panels as a baseline in order to establish whether there are any general differences among partisans in response times for knowledge questions. The second panel indicates that there are no such differences – the estimates overlap almost completely and show no sign of changing for either group from the pre- to post-report period.


The pattern in the third panel is that Republicans were slightly slower in answering the reapportionment question compared to Democrats. But the difference is substantively small (four-tenths of a second), especially in relation to the large difference found for the unemployment rate question (approximately two full seconds). Additionally, there was no significant change in response times after the jobs report was released. Thus, overall, the differences we find in page timings for the unemployment question do not appear to reflect a more general trend on knowledge questions; instead, they indicate clearly that Republicans were spending more cognitive resources on the unemployment rate question, a gap that grew even wider in the post-report period.

As we consider the meaning of these response times, it is useful to recall that our expectation is that longer response times result especially from Republicans who are engaged in motivated reasoning to counter-argue the current unemployment rate with the aim of pursuing a directional goal in responding. If this expectation is correct, then we would expect to find a relationship between the amount of time taken on this question and the nature of the response given. Specifically, respondents who take longer – and particularly Republican respondents who take longer – should be more likely to provide an inaccurate response to the unemployment rate question.

Figure 6 shows the relationship between time spent on the unemployment question and the response given to that question for Democrats and Republicans during both the pre- and post-report periods. The lines in the graphic are fitted using fractional polynomials. Note that in both periods, Republicans who took longer on the question were also more likely to provide a higher estimate of unemployment than those who answered the question more quickly. This suggests that Republicans who took longer on the survey were generally engaged in counter-arguing the true unemployment rate as a way of supporting their directional goals. This pattern was fairly consistent both before and after the release of the jobs report.11

11. While the fitted lines turn downward at very high response times, the confidence intervals in this range indicate greater uncertainty about whether this change in direction is actually occurring. However, it may be the case that some of the very long response times reflect respondents who briefly left the survey to look up the answer online.
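The curves in Figure 6 are fractional-polynomial fits; as a rough, illustrative stand-in (not the authors' procedure), one could fit an ordinary cubic polynomial of response time within each party-by-period cell, again with hypothetical column names.

```python
import numpy as np

grid = np.linspace(0, 55, 111)  # response times from 0 to 55 seconds

fits = {}
for (rep, post), g in df.groupby(["republican", "post_report"]):
    # Cubic polynomial of the unemployment estimate in latent response time.
    coefs = np.polyfit(g["unemp_page_seconds"], g["unemp_guess"], deg=3)
    fits[(rep, post)] = np.polyval(coefs, grid)
```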


Figure 6: Relationship Between Time Spent on Unemployment Question and Response Given

[Figure: two panels – Before Jobs Report and After Jobs Report – plotting the estimate of the unemployment rate against latent response time (0-55 seconds) for Republicans and Democrats.]

Note: Graphic shows fractional polynomial plot of time spent on unemployment rate question and estimate given by respondent. N = 11,629 respondents to pre-election wave of the CCES Panel Study. Shaded areas represent 95% confidence intervals.


Notably, Democrats responding before the release of the jobs report were also more likely to give a higher rate when they took longer to answer the question, though this effect was smaller than it was for Republicans. But after the report was released, the relationship between time spent on the survey and the response given is nearly flat for Democrats. Thus, even Democrats who engaged in more effortful cognition when answering the question appeared to respond with a similar level of accuracy as those who responded quickly.

Conclusion

To summarize, we find that for both Republicans and Democrats, the jobs announcement had a significant effect on their expressed beliefs about the unemployment rate. Democratic respondents were more likely to adjust their estimates of the unemployment rate downward, in line with the information from the report. Republican respondents, by contrast, reacted to the announcement in diverse ways. Some Republicans did provide a more accurate response following the report; but a substantial proportion of Republicans responding after the report actually provided higher estimates of the unemployment rate compared to those who had responded before the report. Our analysis of response latency times provides strong evidence that Republicans were engaged in more effortful processing when responding to this question, particularly after the jobs report’s release, and Republicans who expended more effort on this question tended to provide the least accurate responses.

Thus, our study sheds important light on the mechanisms behind why individuals provide persistent misinformation in response to factual questions with clear political implications. Given the longer response times for Republicans on these questions (but not on other factual political questions), it seems clear that Republicans were engaged in counter-arguing the information about the actual unemployment rate and actually giving answers that were even less accurate following the report. This is also consistent with recent work on expressive responding, which shows that partisans are less likely to provide misinformed responses when they are paid to give an accurate answer (Bullock et al., 2015).


In either case, this extra cognitive effort suggests that a significant proportion of Republicans were answering this question by doing more than simply recalling information that they believed to be true. One possibility is that Republicans answering this question recalled news about the unemployment rate and then counter-argued it before giving a response (I know I saw that the rate had dropped to 7.8%, but that can’t be correct because it runs counter to what I think of Barack Obama’s effectiveness as president, so it must be higher). Another possibility is that Republicans believe the reported unemployment rate is correct, but choose to give a higher response as a way of cheerleading for their side (I know the rate is 7.8%, but that is a good number for Democrats, so I’m going to say it is actually higher to express my opposition to Barack Obama).12

Regardless of which explanation is correct (most likely it is a mixture of both), this suggests that there may be less cause for concern about the high levels of misinformation frequently reported in surveys. Indeed, much of the observed misinformation seems to be driven by survey respondents attempting to deal with information in a way that preserves their partisan directional goals. In other words, Americans may not be as misinformed about politics as factual questions sometimes imply – they simply choose to respond to some factual questions about politics in a way that helps them preserve their partisan identities. Thus, it is not necessarily the case that Democrats and Republicans each have their own set of facts, but rather that political conditions generally make one side more motivated to acknowledge particular facts than the other at any given point in time.

12. This is likely not a uniquely Republican phenomenon. Bartels (2002) shows that Democrats were similarly impervious to acknowledging favorable economic conditions at the end of the Reagan presidency.


References

Ansolabehere, Stephen and Brian F. Schaffner. 2014a. “2010-2012 CCES Panel Study.” URL: http://dx.doi.org/10.7910/DVN/24416

Ansolabehere, Stephen and Brian F. Schaffner. 2014b. “Does survey mode still matter? Findings from a 2010 multi-mode comparison.” Political Analysis 22(3):285–303.

Ansolabehere, Stephen and Brian F. Schaffner. 2015. “Distractions: The Incidence and Consequences of Interruptions for Survey Research.” The Journal of Survey Statistics and Methodology 3(2):216–239.

Arceneaux, Kevin and David W. Nickerson. 2009. “Who Is Mobilized to Vote? A Re-Analysis of 11 Field Experiments.” American Journal of Political Science 53(1):1–16.

Bartels, Larry. 1996. “Uninformed Votes: Information Effects in Presidential Elections.” American Journal of Political Science 40(1):194–230.

Bartels, Larry M. 2002. “Beyond the Running Tally: Partisan Bias in Political Perceptions.” Political Behavior 24(2):117–150.

Berinsky, Adam J. 2012. “Rumors, truths, and reality: A study of political misinformation.”

Bullock, John G. 2009. “Partisan bias and the Bayesian ideal in the study of public opinion.” The Journal of Politics 71(03):1109–1124.

Bullock, John G., Alan S. Gerber, Seth J. Hill and Gregory A. Huber. 2015. “Partisan Bias in Factual Beliefs about Politics.” Quarterly Journal of Political Science.

Campbell, Angus, Philip Converse, Warren Miller and Donald Stokes. 1960. The American Voter. The University of Chicago Press.

Converse, Philip. 1964. “Nature of Belief Systems in Mass Publics.” Critical Review 18(1).


Delli Carpini, Michael X. and Scott Keeter. 2007. What Americans Know about Politics and Why It Matters. Yale University Press.

Druckman, James N. 2012. “The politics of motivation.” Critical Review 24(2):199–216.

Druckman, James and Toby Bolsen. 2011. “Framing, Motivated Reasoning, and Opinions About Emergent Technologies.” Journal of Communication 61(4):659–688.

Fazio, Russell H. 1990. “A practical guide to the use of response latency in social psychological research.” Research Methods in Personality and Social Psychology 11:74–97.

Glas, Jeffrey M. 2015. Cognitive Resources and Political Information Processing. PhD thesis.

Gottfried, J., B. W. Hardy, K. M. Winneg and K. H. Jamieson. 2013. “Did Fact Checking Matter in the 2012 Presidential Campaign?” American Behavioral Scientist 57(11):1558–1567.

Hainmueller, J. 2011. “Entropy Balancing for Causal Effects: A Multivariate Reweighting Method to Produce Balanced Samples in Observational Studies.” Political Analysis 20(1):25–46. URL: http://pan.oxfordjournals.org/cgi/doi/10.1093/pan/mpr025

Hochschild, Jennifer and Katherine Levine Einstein. 2015. Do Facts Matter?: Information and Misinformation in American Politics. University of Oklahoma Press.

Huckfeldt, Robert, Jeffery J. Mondak, Michael Craw and Jeanette Morehouse Mendez. 2005. “Making sense of candidates: Partisanship, ideology, and issues as guides to judgment.” Cognitive Brain Research 23(1):11–23.

Huckfeldt, Robert and John Sprague. 2000. “Political consequences of inconsistency: The accessibility and stability of abortion attitudes.” Political Psychology 21(1):57–79.

Iacus, Stefano M., Gary King and Giuseppe Porro. 2012. “Causal inference without balance checking: Coarsened exact matching.” Political Analysis 20(1):1–24.

Jerit, Jennifer and Jason Barabas. 2012. "Partisan Perceptual Bias and the Information Environment." The Journal of Politics 74(3):672–684. URL: http://www.journals.cambridge.org/abstract_S0022381612000187
Kinder, Donald. 2006. "Belief Systems Today." Critical Review 18(2006):197–216.
Kuklinski, James H., Paul J. Quirk, Jennifer Jerit, David Schwieder and Robert F. Rich. 2000. "Misinformation and the Currency of Democratic Citizenship." Journal of Politics 62(3):790–816.
Kunda, Ziva. 1990. "The Case for Motivated Reasoning." Psychological Bulletin 108(3):480–498.
Lau, R. and D. Redlawsk. 2001. "Advantages and Disadvantages of Cognitive Heuristics in Political Decision Making." American Journal of Political Science 45(4):951–971.
Lau, Richard R. and David Redlawsk. 1997. "Voting Correctly." American Political Science Review 91(3):585–598.
Mayerl, Jochen. 2013. "Response Latency Measurement in Surveys: Detecting Strong Attitudes and Response Effects." Survey Methods: Insights from the Field (SMIF).
Mulligan, Kenneth, J. Tobin Grant, Stephen T. Mockabee and Joseph Quin Monson. 2003. "Response latency methodology for survey research: Measurement and modeling strategies." Political Analysis 11(3):289–301.
Nickerson, David W. 2008. "Is Voting Contagious? Evidence from Two Field Experiments." American Political Science Review 102(1):49–57.
Nyhan, Brendan and Jason Reifler. 2010. "When Corrections Fail: The Persistence of Political Misperceptions." Political Behavior 32(2):303–330.
Parsons, Richard A. 2013. "A question of bias in the US unemployment numbers." Applied Economics Letters 20(10):1003–1007.

Petersen, Michael Bang, Martin Skov, Søren Serritzlew and Thomas Ramsøy. 2013. "Motivated Reasoning and Political Parties: Evidence for Increased Processing in the Face of Party Cues." Political Behavior 35:831–854.
Ratcliff, R. 1993. "Methods for dealing with reaction time outliers." Psychological Bulletin 114(3):510–532.
Rosenberg, Morris. 1954. "Some Determinants of Political Apathy." Public Opinion Quarterly 18(4):349–366.
Taber, C. and M. Lodge. 2006. "Motivated Skepticism in the Evaluation of Political Beliefs." American Journal of Political Science 50(3):755–769.
Walczyk, Jeffrey J., Karen S. Roper, Eric Seemann and Angela M. Humphrey. 2003. "Cognitive mechanisms underlying lying to questions: Response time as a cue to deception." Applied Cognitive Psychology 17(7):755–774.
Weiner, Rachel. 2012. "Jack Welch accuses Obama of cooking jobs numbers." Washington Post.
Zaller, John. 1992. The Nature and Origins of Mass Opinion. Cambridge University Press.


Appendix

In this Appendix we first explain how we calculate the baseline response time used in our response latency analysis. We then conduct several robustness checks on the analyses presented in the paper. First, we use two different approaches for dealing with exceptionally long response times for the unemployment rate question. Second, we replicate all analyses using a different approach to balancing the pre- and post-report groups of respondents. And, third, we estimate multivariate regression models to examine whether the party-conditioned effects reported in the paper are robust after controlling for other possible explanations. In all instances, the results are robust to these alternative approaches.

Calculating Baseline Response Times for the Response Latency Analysis

As Fazio (1990) notes, properly analyzing response latency times requires controlling for the baseline rate at which an individual responds to survey questions. We control for baseline response rates in our analysis by calculating the average of each respondent's response times to a series of 8 questions that generally appeared in relatively close proximity to the questions we analyze in this paper. We use a selection of 8 questions for several reasons. First, using the overall response time for the survey is problematic because some people must answer more questions than others (due to follow-ups, etc.) and some people stop the survey and come back to complete it later. Second, we focused on finding a set of questions that would be particularly comparable across respondents. For example, questions that asked respondents to "select all that apply" rather than provide a single response would not be comparable across respondents, since those who want to select more options would necessarily take longer. Thus, we limited our scope to single-choice questions that all respondents were asked to answer, and we attempted to select questions that appeared in close proximity to the primary questions we use in our analysis (i.e., the unemployment rate questions and the placebo questions).


Table A.1: Questions used to create average response time baseline

Question | Median Time (seconds)
All things considered do you think it was a mistake to invade Iraq? | 6.57
All things considered do you think it was a mistake to invade Afghanistan? | 4.99
What is the gender of... (U.S. House member, members of state leg., governor) | 22.37
Do you approve of the way each is doing their job... (pres., cong., Sup. Court, Gov., state leg.) | 20.22
In general, do you feel that the laws covering the sale of firearms should be made more strict, less strict, or kept as they are? | 9.09
How well do you think [CURRENT REP NAME] represents the people in your congressional district? | 6.88
Has your member of Congress brought any special projects back to your area? | 6.76
Often, redistricting disputes are settled in the courts. Do you trust the courts in your state to decide redistricting fairly? | 8.29

The questions we used to calculate our average baseline rate are listed in Table A.1 along with their median response times. From this list, we calculated an individual's average response time to the set of 8 questions. As noted in the paper, some response times are very long (often due to a respondent taking a break from the survey). To deal with this issue, when an individual's response time was beyond the 95th percentile of all times for that question, we replaced that respondent's time with the 95th percentile value before calculating the mean. The average baseline response time for these 8 questions was 12.2 seconds. While we use this as our control for an individual's baseline response time in the paper, we also conducted another analysis where we used the ratio baseline calculation suggested by Fazio (1990). Our findings are robust to this alternative calculation. Additionally, in the following section we employ an approach suggested by Mayerl (2013) where the 8 baseline question times are used as independent variables in an OLS model where the response time to the unemployment rate question is the dependent variable. We then take the residual from this regression to see how much the response time for the unemployment rate question departs from what the other 8 baseline questions would predict for each individual.


The results from this alternative specification are presented in the following section.
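To make the two baseline measures concrete, the following sketch (in Python, not the authors' actual code) computes the capped-mean baseline and the Mayerl (2013)-style residual. The column names t_q1 through t_q8 and t_unemp are hypothetical placeholders for the eight baseline question timings and the unemployment question timing.

```python
# Minimal sketch of the two baseline response-time measures described above.
# Column names (t_q1 ... t_q8, t_unemp) are hypothetical, not from the CCES data files.
import pandas as pd
import statsmodels.api as sm


def capped_mean_baseline(df, cols):
    """Cap each question's timings at its 95th percentile, then average across questions."""
    capped = df[cols].copy()
    for col in cols:
        cap = capped[col].quantile(0.95)
        capped[col] = capped[col].clip(upper=cap)
    return capped.mean(axis=1)


def residual_baseline(df, cols, target):
    """Mayerl (2013)-style residual: regress the target question's timing on the
    eight baseline timings and keep the part those baselines cannot predict."""
    X = sm.add_constant(df[cols])
    fit = sm.OLS(df[target], X, missing="drop").fit()
    return df[target] - fit.predict(X)


baseline_cols = [f"t_q{i}" for i in range(1, 9)]
# df["baseline_time"] = capped_mean_baseline(df, baseline_cols)
# df["resid_time"] = residual_baseline(df, baseline_cols, "t_unemp")
```

Capping at the 95th percentile before averaging keeps respondents who briefly left the survey from inflating their baseline, while the residual version nets out each respondent's general response speed by construction.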

Alternative Approaches for Dealing with Response Latency Time Outliers

As noted in the paper, some of the response latency times for the unemployment rate question are extreme outliers. This is likely the case because some respondents tend to take a break from online surveys and then come back to them later. For those very long response times, response latencies are not likely measuring cognitive effort, but rather capturing something different. Accordingly, in the paper, we exclude individuals who took more than 54.82 seconds to answer the question. However, scholars have sometimes used two other approaches to dealing with extreme outliers on response latency times: taking the natural log of the response time or using the rank of each response time (Ratcliff, 1993). Thus, in this section of the Appendix, we replicate the analysis from the paper using these alternative approaches.

Figure A.1 includes four panels plotting the average response time for Democratic and Republican respondents in the pre- and post-report periods using four different approaches for dealing with these timings. The first panel in the figure is simply a replication of the results shown in Figure 5. Panel 2 replicates that same analysis, but using the natural log of page timings rather than discarding observations with extremely large values. Panel 3 also replicates the analysis from Panel 1, but this time using the rank of an individual's response time. Notably, the patterns observed when we trimmed the response times (Panel 1) are replicated consistently by the other two approaches to dealing with outliers. Specifically, Republicans consistently took longer to answer this question compared to Democrats, and the response times among Republicans increased after the jobs report was released. Finally, the last panel in Figure A.1 shows the results from the residual timings approach described in the previous section. In this graphic, response times are measured as the difference between the actual timing for the unemployment rate question and the predicted value from a regression model using the 8 baseline question timings as the independent variables.

Figure A.1: Average Time Spent on Unemployment Question Using Three Methods of Dealing With Outliers. [Four panels (Raw Times, Logged Times, Ranked Times, Residual Times) plot pre-report and post-report averages for Democrats and Republicans; the vertical axes are Page timing (seconds), Page timing (logged seconds), Rank of Length, and Residual Page Timing.]
Note: Graphic includes 12,648 respondents to the pre-election wave of the CCES Panel Study. Vertical bars represent 95% confidence intervals.

Once again, we see similar patterns in the last panel as we do for every other approach we used to calculate response timings and control for a respondent's baseline response rate. This provides evidence of the robustness of our response latency analysis.
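For reference, here is a minimal sketch of the three outlier treatments compared in Figure A.1 (trimming, logging, ranking). It is an illustration under assumed variable names (t_unemp, party, post_report), not the code used to produce the figure, and the 54.82-second cutoff simply follows the description in the text.

```python
# Sketch of the three outlier-handling transforms (Ratcliff 1993); assumed column names.
import numpy as np
import pandas as pd


def outlier_versions(times, cutoff=54.82):
    out = pd.DataFrame(index=times.index)
    # 1. Trimming: set any response longer than the cutoff to missing.
    out["trimmed"] = times.where(times <= cutoff)
    # 2. Natural log: compress the long right tail instead of discarding it.
    out["logged"] = np.log(times)
    # 3. Ranks: replace each timing with its rank among all respondents.
    out["ranked"] = times.rank(method="average")
    return out

# Group means by party and period could then be compared, e.g.:
# df.join(outlier_versions(df["t_unemp"])) \
#   .groupby(["party", "post_report"])[["trimmed", "logged", "ranked"]].mean()
```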

Using Coarsened Exact Matching Instead of Entropy Balancing

In the body of the paper, we use entropy balancing to impose balance on the pre- and post-report groups. In this section, we show how the results from the paper replicate when we instead use coarsened exact matching to balance between the pre- and post-report groups (Iacus, King and Porro, 2012). As with the entropy balancing, we used race, gender, ideology, education, interest in politics, partisanship, age, whether the respondent is a validated voter, political knowledge, and baseline response times to match respondents. Aside from age and baseline response time, we used exact matching for each of the categories. After conducting the matching, we were left with 3,654 observations: 2,045 who answered the survey before the report and 1,617 who responded after. Table A.2 replicates the analysis from Table 2 in the paper, but uses the matched respondents and the matching weights from the coarsened exact matching instead. Note that the results in Table A.2 are very similar to those presented in the paper.


Table A.2: Treatment Effects for Democratic and Republican Respondents Using Coarsened Exact Matching

Group | Unemployment Rate Guess | Placebo Test: Senate | Placebo Test: House
Strong Democrats (N = 925) | -0.394*** (0.070) | -0.018 (0.016) | -0.031* (0.013)
Weak/Leaning Democrats (N = 351) | -0.449** (0.141) | -0.041 (0.033) | -0.073* (0.030)
Weak/Leaning Republicans (N = 987) | 0.598** (0.192) | 0.010 (0.015) | -0.001 (0.015)
Strong Republicans (N = 1,391) | 0.565*** (0.162) | 0.012 (0.013) | -0.016 (0.013)
Note: *p<.05, **p<.01, ***p<.001. Standard errors in parentheses.

The treatment effect among Democrats was to reduce their estimates of the unemployment rate by roughly four-tenths of a point. As in the main analysis, Republicans moved in the opposite direction, increasing their estimates of the unemployment rate by a similar magnitude. Thus, the results using a coarsened exact matching approach are consistent with what we found using entropy balancing weights (in fact, if anything, they are stronger).

Figure A.2 presents the results from reproducing the remaining analyses in the paper using coarsened exact matching. The plots at the top of the figure show the distributions of responses to the unemployment rate question. The second row of results shows the mean timings for the unemployment rate question and the placebo test questions about party control of the House and Senate. And, finally, the bottom row of results shows the relationship between the response latency times and responses to the unemployment rate question. In each case, the results in this figure follow the same patterns we find in the main paper when we use the entropy balancing weights. Thus, our findings are robust to this alternative approach for inducing balance between the pre- and post-report groups.
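The coarsened exact matching step can be illustrated with a short sketch. This is not the authors' routine (CEM is typically run with a dedicated package), and all column names are hypothetical; it only shows the logic of coarsening the two continuous covariates, matching exactly within strata, and constructing the usual CEM weights.

```python
# Illustrative coarsened exact matching in the spirit of Iacus, King, and Porro (2012).
# Hypothetical column names; not the authors' actual matching routine.
import pandas as pd

exact_cols = ["race", "gender", "ideology", "education", "interest",
              "party_id", "validated_voter", "knowledge"]


def cem_weights(df, treat_col="post_report"):
    strata = df.copy()
    # Coarsen the two continuous covariates into a small number of bins.
    strata["age_bin"] = pd.cut(strata["age"], bins=[17, 29, 44, 64, 120])
    strata["base_bin"] = pd.qcut(strata["baseline_time"], q=4)
    keys = exact_cols + ["age_bin", "base_bin"]

    # Keep only strata that contain at least one treated and one control unit.
    matched = [s for _, s in strata.groupby(keys, observed=True)
               if s[treat_col].nunique() == 2]
    matched = pd.concat(matched)
    m_t = (matched[treat_col] == 1).sum()
    m_c = (matched[treat_col] == 0).sum()

    weights = pd.Series(0.0, index=df.index)  # unmatched units get weight 0
    for _, s in matched.groupby(keys, observed=True):
        t_idx = s.index[s[treat_col] == 1]
        c_idx = s.index[s[treat_col] == 0]
        weights.loc[t_idx] = 1.0
        # Standard CEM weighting: controls in a stratum are scaled so they
        # balance the treated units in that stratum.
        weights.loc[c_idx] = (len(t_idx) / len(c_idx)) * (m_c / m_t)
    return weights

# df["cem_w"] = cem_weights(df)
# Weighted pre/post differences (as in Table A.2) can then be computed with these weights.
```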

Robustness Check on Partisan Conditioning of Effects

In the paper, we present evidence that Democrats and Republicans responded in different ways to the jobs report when it came to answering the unemployment rate question.

Figure A.2: Analysis from Paper Replicated Using Coarsened Exact Matching

[Figure A.2 panels: the top row shows density plots of the estimated unemployment rate for Democrats and Republicans, pre-report and post-report; the middle row shows average page timings (seconds) for the Unemployment Rate Question, the Party Control Questions, and the Apportionment Question by party and period; the bottom row shows estimates of the unemployment rate plotted against latent response time (seconds) for Republicans and Democrats, before and after the jobs report.]
Note: Graphic includes 3,654 respondents to the pre-election wave of the CCES Panel Study retained for analysis after Coarsened Exact Matching. Results employ weights generated from matching routine.


To increase our confidence that the effect of the jobs report was conditioned by a respondent's partisanship and not some other factor correlated with partisanship, we estimated a regression model testing several alternative explanations. Specifically, our model simultaneously tests the conditional effect of the jobs report announcement by partisanship, political knowledge, interest in news and public affairs, and formal education. Table A.3 presents the results from this model, where a respondent's estimate of the unemployment rate is the dependent variable.

Table A.3: Testing Party Conditioned Effects Against Other Explanations

Variable | Coefficient | Standard Error
Post-report | 0.722 | (0.595)
Democrat | -1.042*** | (0.105)
Post-report X Democrat | -0.737*** | (0.125)
Knowledge | -0.074 | (0.090)
Post-report X Knowledge | -0.008 | (0.095)
News Interest | 0.025 | (0.099)
Post-report X News Interest | -0.092 | (0.114)
Education | -0.066 | (0.047)
Post-report X Education | -0.060 | (0.052)
Intercept | 10.444*** | (0.548)
Note: *p<.05, **p<.01, ***p<.001. N = 11,629.

If the partisanship explanation is correct, then the coefficient for the interaction between the partisanship dummy variable and the indicator for whether the respondent answered the survey after the jobs report should be significant even after including similar interaction terms for the other competing explanations. This is exactly what we find. In fact, not only is the interaction term statistically significant, but the magnitude of the effect is very similar to what we find in the paper. Note also that the remaining interaction terms in the model are small and not statistically significant. Thus, knowledge, interest, and education do not appear to condition the effects of the jobs report on factual responses to the unemployment question.
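The model in Table A.3 is an ordinary least squares regression with interaction terms. A minimal sketch of how such a specification could be estimated (with hypothetical variable names; the original analysis may also have applied survey weights) is:

```python
# Sketch of an OLS specification with post-report interactions, as in Table A.3.
# Variable names are hypothetical; this is not the authors' estimation code.
import statsmodels.formula.api as smf

formula = (
    "unemp_guess ~ post_report * democrat"
    " + post_report * knowledge"
    " + post_report * news_interest"
    " + post_report * education"
)
model = smf.ols(formula, data=df).fit()
print(model.summary())
```

Under this naming, the post_report:democrat coefficient would correspond to the Post-report X Democrat row of Table A.3, while the remaining interactions test whether knowledge, news interest, or education moderates the jobs-report effect.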
