Fair Governance with Humans and Machines

How fair are government decisions based on algorithmic predictions? And to what extent can the government delegate decisions to machines without sacrificing procedural fairness? Using a set of vignettes in the contexts of predictive policing, school admissions, and refugee matching, we explore how different degrees of human-machine interaction affect fairness perceptions and procedural preferences. We implement four treatments that vary the extent of responsibility delegated to the machine and the degree of human involvement in the decision-making process, ranging from full human discretion, through machine-based predictions with high or low human involvement, to fully machine-based decisions. We find that machine-based predictions with high human involvement yield the highest fairness scores and fully machine-based decisions the lowest. These differences can partly be explained by differing accuracy assessments. Fairness scores follow a similar pattern across contexts, with a negative level effect and lower fairness perceptions of human decisions in the predictive policing context. Our results shed light on the behavioural foundations of several legal human-in-the-loop rules.
