Blair Read and Paige Bollen (Massachusetts Institute of Technology)
Abstract: Public opinion data can contain a wealth of information about how citizens evaluate and participate in politics. Yet respondents often refuse to answer survey questions, or simply respond “don’t know” when asked about their opinion. When respondents present non-attitudes, those responses are typically dropped, reduced to “missing” data. Non-attitudes, however, are non-random; particular types of people withhold information based on their own political resources and identity, the nature of the question, and their interaction with the enumerator (Berinsky 2004). While a survey sample might be random and representative, once those exhibiting non-attitudes are dropped, the results may not be, leading to false conclusions and omitted perspectives in survey research. Overlooking these non-attitudes can thus have grave consequences. This problem, though explored in US-based survey research, has received less attention from users of face-to-face survey data from the Global South. We examine the prevalence and patterns of non-attitudes in Afrobarometer, a large cross-national public opinion survey on the African continent. Using the number of times respondents answer “don’t know” to questions across a survey, we propose a typology of different types of respondents who exhibit non-attitudes to survey enumerators. We focus on two important sources of variation in non-attitude prevalence – the types of people who provide non-attitudes and the types of questions for which they do so – and adopt a variety of strategies to characterize respondent-level and question-level variation. In particular, we use pre-trained word embeddings to cluster questions based on the similarity of the questions’ words, modeling whether respondents are more likely to provide non-attitudes on some topics but not others.
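The embedding step described above can be sketched as follows. This is an illustrative toy example, not the authors' pipeline: the three-dimensional `word_vectors` dictionary stands in for real pre-trained embeddings (e.g., GloVe vectors loaded from disk), and the question texts are hypothetical. Each question is represented by the average of its words' vectors, and questions on similar topics end up closer in embedding space, which is what a clustering algorithm would then exploit.

```python
import numpy as np

# Toy stand-in for pre-trained word embeddings; real vectors would be
# 100-300 dimensional and loaded from a file such as glove.6B.100d.txt.
word_vectors = {
    "trust":       np.array([0.9, 0.1, 0.0]),
    "president":   np.array([0.8, 0.2, 0.1]),
    "parliament":  np.array([0.7, 0.3, 0.1]),
    "water":       np.array([0.1, 0.9, 0.2]),
    "electricity": np.array([0.0, 0.8, 0.3]),
    "access":      np.array([0.1, 0.7, 0.2]),
}

def embed_question(text):
    """Represent a question as the mean of its in-vocabulary word vectors."""
    vecs = [word_vectors[w] for w in text.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

q_trust_pres = embed_question("trust president")
q_trust_parl = embed_question("trust parliament")
q_services   = embed_question("water electricity access")

# Two political-trust questions are more similar to each other than
# either is to a service-delivery question.
print(cosine(q_trust_pres, q_trust_parl) > cosine(q_trust_pres, q_services))
# prints True
```

Averaged question vectors like these can then be fed to any standard clustering routine (e.g., k-means) to recover topic groups without hand-coding each question.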
Hierarchical models then allow us to analyze the relationship between respondent characteristics and the different types of questions to which they exhibit non-attitudes, nested in varying political contexts, to understand the political determinants – and therefore implications – of non-attitudes. In a context where researchers are concerned both with the lack of information citizens have about politics and with the data quality of these surveys themselves, we investigate when, where, and why different types of non-attitudes are most likely to exist. We suggest that the conclusions we can draw from public opinion data are moderated by contextual features that might make one type of non-attitude respondent more likely than others.
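One way to read the hierarchical setup described above is as a varying-intercept model for the probability that a respondent answers “don’t know.” The notation below is an illustrative sketch, not the authors’ exact specification:

```latex
\Pr(\mathrm{DK}_{iqc} = 1) \;=\; \operatorname{logit}^{-1}\!\left( \alpha_c + \beta^{\top} X_i + \gamma^{\top} Z_q \right),
\qquad \alpha_c \sim \mathcal{N}(\mu_\alpha, \sigma_\alpha^2),
```

where $X_i$ collects respondent-level characteristics, $Z_q$ collects question-level features (for example, indicators for the embedding-based topic clusters), and the country-level intercept $\alpha_c$ captures the varying political context in which respondents are nested.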