Ishita Gopal (Pennsylvania State University)
Abstract: In this paper I advance a theory to explain the timing of internet censorship in authoritarian regimes. Censorship as a means of digital repression has been on the rise across countries in recent years, calling into question the ‘liberation technology’ view of the internet. The result is rapidly declining internet freedom, which makes it important to understand the motivations behind censorship decisions. Concentrating on the timing of censorship, I argue that governments are more likely to censor online platforms when those platforms generate widespread negative attention narrowly focused on the regime itself than when such attention is diffuse. Broad discussions of the problems a country faces, such as economic collapse, food shortages, and violence, are not likely to result in the platform being blocked. But when these discussions become linked to government failures, the probability of censorship increases. This is because clarity of responsibility makes it increasingly difficult for the regime to deflect blame for bad outcomes, raising the cost of allowing criticism to remain in public view. On the other hand, if the target of blame is broad and negative information is discussed without explicit references to the government, the short-term cost of allowing such information to remain in citizens' view is small and the long-term benefit of collecting information on dissenters is high. I test this theory using Venezuelan Twitter data and show that narrow targeting of the government by accounts of the political opposition increases the probability of Twitter being censored, while broad targeting does not. My independent variables comprise daily measures of targeting, created using a three-step process. I first use Latent Dirichlet Allocation (LDA) with asymmetric priors to identify topics that represent different facets of anti-regime messaging. I then use four main labels to aggregate the generated topics.
These capture discussions of (a) protests; (b) political conditions in the country; (c) life-threatening situations faced by citizens (for example, starvation, state-backed violence); and (d) socio-economic problems faced by citizens (for example, inflation, deteriorating infrastructure). The last two labels allow me to distinguish topics that are of immediate concern and carry a high short-term negative impact from those whose impact is negative but not immediately threatening. Finally, the labels are filtered to indicate whether the government is explicitly mentioned in the topics. My dependent variable is a daily indicator of whether Twitter was censored. I then employ a logistic regression model to test how broad and narrow targeting of the government affect censorship patterns. I find that an increase in narrow targeting of the government on the subjects of protests and political conditions increases the probability of censorship, whereas a rise in broad targeting on the subject of life-threatening conditions faced by citizens reduces it. These effects are statistically significant.
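The measurement pipeline described above could be sketched roughly as follows. This is a minimal illustration, not the paper's actual code: the topic-to-label mapping, the government keyword list, and the sample tweets are all hypothetical, and the LDA step (e.g. gensim's `LdaModel` with `alpha="asymmetric"`) is only indicated in a comment, starting instead from tweets already tagged with a topic id.

```python
import re
from collections import Counter

# Step 1 (not shown): fit an LDA model with asymmetric priors, e.g. with
# gensim: LdaModel(corpus, num_topics=K, alpha="asymmetric"), then assign
# each tweet its highest-probability topic. Here tweets arrive pre-tagged.

# Hypothetical mapping from LDA topic ids to the four aggregate labels.
TOPIC_LABELS = {
    0: "protests",
    1: "political_conditions",
    2: "life_threatening",   # e.g. starvation, state-backed violence
    3: "socio_economic",     # e.g. inflation, deteriorating infrastructure
}

# Hypothetical keywords standing in for an explicit government reference;
# the paper's actual filter operates on the LDA topics themselves.
GOV_PATTERN = re.compile(r"\b(gobierno|maduro|regime|government)\b", re.I)

def daily_targeting(tweets):
    """Count tweets per (label, narrow/broad) cell for one day.

    `tweets` is a list of (text, topic_id) pairs. A tweet counts as
    *narrow* targeting if its topic label co-occurs with an explicit
    government mention, and *broad* targeting otherwise.
    """
    counts = Counter()
    for text, topic_id in tweets:
        label = TOPIC_LABELS.get(topic_id)
        if label is None:
            continue
        scope = "narrow" if GOV_PATTERN.search(text) else "broad"
        counts[(label, scope)] += 1
    return counts

# Toy example: one day's tweets.
day = [
    ("el gobierno ignora las protestas", 0),
    ("food lines grow longer every week", 2),
    ("inflation hit the markets again", 3),
]
print(daily_targeting(day))
```

The resulting daily counts per (label, scope) cell would then serve as independent variables in a logistic regression (e.g. `statsmodels.api.Logit`) on the daily censorship indicator.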