Peter Haschke

Political Terror Scales Bias

I keep seeing folks substitute missing values in one of the PTS scales (say PTS-Amnesty) with existing values from the other scale (in this case PTS-StateDept). We ourselves (i.e., the Political Terror Scale) report average annual scores (the means of PTS-Amnesty and PTS-StateDept) in our data releases. I think both practices are problematic. Amnesty International and the State Department each describe human rights practices through their own lens, and each organization faces a unique set of constraints and incentives that shape the reports it produces.
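
To make both practices concrete, here is a minimal sketch of what they look like in R, using the same PTS.csv file and column names (Amnesty, StateDept) as the plotting code further down. It is purely an illustration of the practices I am cautioning against, not a recommendation.

# illustration only: the two practices discussed above
pts <- read.csv("http://peterhaschke.com/files/PTS.csv")

# practice 1: fill missing Amnesty scores with the State Department score
pts$Amnesty_filled <- ifelse(is.na(pts$Amnesty), pts$StateDept, pts$Amnesty)

# practice 2: average the two scales (as in the official data releases)
pts$Average <- rowMeans(pts[, c("Amnesty", "StateDept")], na.rm = TRUE)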

We also know well that missingness (i.e., non-existent reports for some countries in some years) is not random, especially for Amnesty International. Likely due to resource constraints and limited monitoring capacity, Amnesty International did not cover some Western European countries with arguably strong human rights records during the 1970s and 1980s (e.g., Belgium, the Netherlands, or Denmark) and instead focused its resources on countries where violations were likely. Amnesty International also appears to disproportionately cover autocracies, while the State Department's reports appear to cover democracies and autocracies more evenly.
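
One quick way to see these coverage differences is to count missing scores by year for each scale. The snippet below is only a rough descriptive check using the same PTS.csv file and columns (Year, Amnesty, StateDept) as the code further down; it is not a formal test of non-random missingness.

pts <- read.csv("http://peterhaschke.com/files/PTS.csv")

# number of country-years without a score, per year and per scale
missing_by_year <- aggregate(
  data.frame(Amnesty = is.na(pts$Amnesty), StateDept = is.na(pts$StateDept)),
  by = list(Year = pts$Year), FUN = sum)
head(missing_by_year)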

Simmons (2009) conjectures that non-governmental organizations (NGOs) such as Amnesty International have incentives to consistently report bad news even if states’ human rights records improve. If human rights records across the world improved sufficiently, Amnesty International’s ability to mobilize members and attract donations would arguably be eroded. In short, Amnesty International has an incentive to change its standards, or to shift its attention to violations ignored in the past, in order to remain relevant.1

Although sample bias and incentives to strategically adjust reporting standards are less likely to be a problem for scores generated from the U.S. State Department’s annual reports, these reports have also been criticized. Whereas Amnesty International arguably covered more violent countries in its annual reports, the State Department’s reports were allegedly biased in their content (see Poe and Tate (1994), among others).

Critics frequently claim that the U.S. State Department unfairly emphasized violations in countries ideologically opposed to the United States (particularly during the Cold War), while ignoring similar violations in countries where the U.S. had an interest. Poe, Carey, and Vazquez (2001), for instance, provide anecdotal evidence that the State Department’s reports for communist Cuba prior to 1989 suffered from “exaggeration and undocumented conclusions” whereas reports for U.S. allies such as El Salvador in the 1980s were “extremely politicized” (665). When comparing PTS-Amnesty and PTS-StateDept, Poe et al. find that the U.S. State Department’s reports sometimes favored U.S. allies in the 1970s and early 1980s and that, particularly during the Reagan administration, leftist countries received disproportionately worse scores (667). By the late 1980s, however, this bias disappeared and PTS-StateDept and PTS-Amnesty converged.
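
The reported convergence is easy to eyeball in the data themselves. The sketch below is a rough descriptive check (not the analysis in Poe, Carey, and Vazquez): it computes the mean State Department minus Amnesty difference by year, again using the PTS.csv file loaded in the code below.

pts <- read.csv("http://peterhaschke.com/files/PTS.csv")
pts <- subset(pts, !is.na(Amnesty) & !is.na(StateDept))

# mean (State Department - Amnesty) difference per year;
# values near zero suggest the two scales track each other
pts$Diff <- pts$StateDept - pts$Amnesty
diff_by_year <- aggregate(Diff ~ Year, data = pts, FUN = mean)
plot(diff_by_year$Year, diff_by_year$Diff, type = "b",
  xlab = "Year", ylab = "Mean difference (StateDept - Amnesty)")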

In short, then, both PTS-Amnesty and PTS-StateDept likely present a biased view of physical integrity rights violations – certainly in the 1970s and early-to-mid 1980s. Even with almost 40 years’ worth of data, these biases remain problematic. Consider the figure produced below. (You can also download a .pdf version here: PTS-Bias.pdf.) Replacing a missing State Department score – say for Saudi Arabia – with the Amnesty score is not in anybody’s interest. It is also very unlikely that the bias in one of the scales can easily be “fixed” with the other, differently biased, scale. The PTS scores should be treated not as representations of the true human rights conditions in a given country but rather as representations of human rights records from the perspectives of different monitoring organizations.

library(ggplot2)
library(ggthemes)


# latest PTS data



dat <- with(read.csv("http://peterhaschke.com/files/PTS.csv"),
  data.frame(Country, Year, Amnesty, StateDept))


# removing missing PTS scores



dat <- subset(dat, !is.na(Amnesty) & !is.na(StateDept))


# creating data frame to store the paired t-test output



out <- data.frame("Country" = rep(NA, length(unique(dat$Country))))


# t-test loop: for each country, a paired t-test of StateDept vs. Amnesty scores



for(i in 1:nrow(out)){
  temp <- subset(dat, dat$Country == unique(dat$Country)[i])
  test <- t.test(x = temp$StateDept, y = temp$Amnesty,
       alternative = "two.sided",
       mu = 0, paired = TRUE,
       conf.level = 0.9)
  out$Country[i] <- as.character(unique(dat$Country)[i])
  out$Estimate[i] <- as.double(test$estimate)  # mean(StateDept - Amnesty)
  out$upper[i] <- as.double(test$conf.int[2])
  out$lower[i] <- as.double(test$conf.int[1])
}


# replacing missing confidence bounds with zeros



out$upper <- ifelse(is.na(out$upper), 0, out$upper)
out$lower <- ifelse(is.na(out$lower), 0, out$lower)


# creating a grouping variable to set the colors for plotting



out$color <- ifelse(out$upper == 0,
  "Identical Scores", 0)

out$color <- ifelse(out$Estimate < 0 &  out$upper < 0,
  "Amnesty Score\nsignificantly worse", out$color)

out$color <- ifelse(out$Estimate > 0 &  out$lower > 0,
  "State Department Score\nsignificantly worse", out$color)

out$color <- ifelse(out$color == 0,
  "No significant difference", out$color)


# removing South Sudan as the confidence interval is enormous given only two observations



out <- subset(out, out$upper < 2)


# renaming a few countries 



out$Country <- as.character(out$Country)

out$Country <- ifelse(out$Country == "North Korea (Democratc People's Republic of Korea)",
  "North Korea (People's Republic)", out$Country)

out$Country <- ifelse(out$Country == "Israel, occupied territories only",
  "Israel, Occupied Territories only", out$Country)

out$Country <- ifelse(out$Country == "Vietnam, Socialist Republic of",
  "Vietnam", out$Country)


# sorting the dataframe by the difference estimates



out <- out[order(out$Estimate),]
out$Country <- factor(out$Country, as.character(out$Country))


# ggplot: dot-and-whisker plot of the mean differences by country



ggplot(out) + theme_solarized() +
  geom_abline(intercept = 0, slope = 0, color = "red") +
  geom_pointrange(aes(x = Country, y = Estimate, ymin = lower, ymax = upper, color = color),
    position = "identity", alpha = 1) +
  scale_y_continuous(breaks = seq(-2, 2, 0.25)) +
  labs(x = "", 
    y = "\nMean Differences (State Department - Amnesty)
    \n 90% Confidence Interval (Paired t-Tests)") +
  scale_color_manual("", values = c("#268bd2", "#859900", "black", "#d33682")) +
  coord_flip() +  
  theme(axis.text = element_text(size = 12),
        axis.title = element_text(size = 14, face = "bold"),
        legend.text = element_text(size = 12))

[Figure: mean differences between PTS-StateDept and PTS-Amnesty scores by country, with 90% confidence intervals from paired t-tests]


  1. However, Hill, Moore, and Mukherjee (2013) find that Amnesty International generally adheres to its reporting standards, even when faced with incentives to exaggerate allegations of abuse.

This post is filed under the categories R and Human Rights, and contains the following tags: R, ggplot2, plots, Human Rights, Repression, Political Terror Scale, PTS.
