* Corresponding author. E-mail: [email protected]
1 Introduction
Over the last few years, legal scholars, policy-makers, activists and others have generated a vast and rapidly expanding literature concerning the ethical ramifications of using artificial intelligence, machine learning, big data and predictive software in criminal justice contexts. Although there are significant differences between these technologies, what they have in common is that they recommend treatments – police interventions, bail determinations, sentences – on the basis of mathematically structured processing, typically over extensive datasets. They thus stand in contrast to more traditional means of making decisions in legal settings, which have relied upon the judgment of line officials – guided by a wide variety of legal rules and standards, but nevertheless largely intuitive in character. The objective of this paper is to respond to a variety of ethical concerns that have been raised in the legal literature surrounding these emerging technologies, specifically in the context of criminal justice.
These concerns can be clustered under the headings of fairness, accountability and transparency. First, can we trust technology to be fair, especially given that the data on which the technology is based are biased in various ways? Second, whom can we blame if the technology goes wrong, as it inevitably will on occasion? Unlike arguing with judges, juries, loan officers, social workers and the like, arguing with an algorithm might seem about as fruitful as arguing with your toaster. Finally, does it matter if we do not know how the algorithm works or, relatedly, cannot understand how it reached its decision?
In this paper, I sketch an account of how stage-wise modes of decision can be compared in terms of the unfairness/accuracy trade-off, argue that much of the concern about individualised tailoring is exaggerated and consider the degree to which traditional conceptions of due process are in tension with algorithmic decision-making. I conclude with a brief discussion of the now highly salient, but rapidly evolving, debate over ‘intelligibility’ in this context. For the most part, I conclude that these are serious, but resolvable, concerns. More importantly, the very same concerns arise with regard to existing modes of decision-making in criminal justice, which already have serious problems with fairness, accountability and transparency. The question, then, is comparative: can algorithmic decision-making do better than the status quo?