Automated Justice / Predictive Policing

(Continuation of “Nudged Citizen”) In our cybernetic regimes, justice is being automated and policing is increasingly predictive. In the United States, computer programs now help decide whether you will get parole or pretrial release, how high your bail will be [i] and how long your prison sentence will last [ii]. At the same time, the inner workings of these algorithms remain largely obscure. Described by their makers as trade secrets necessary to their business model, this opacity de facto gives structural power to machines: their ‘decisions’ cannot be checked, nor can the bias that may be programmed into them. Thus, algorithms become reified into fetishes that have a substantial impact on lives and that, though human-made, seem to work all on their own as objective processes. With this goes a practical reinterpretation of law, away from judges or juries pursuing dynamic, case-specific interpretations of the spirit of the law and toward programmers coding one reading of the law into algorithms, which thus establish a new, static practice of the letter of the law. This applied reinterpretation is preceded and made possible by an increasing reduction of legal matters from questions of justice to administrative processes. It is these so-declared administrative processes that are subsequently automated under the pretense of economization and increased efficiency, often accompanied by propaganda claiming that algorithms are fairer because they treat everyone the same. That this is not the case has been proven repeatedly.

Consider parole algorithms, for example. In principle, they seem to be little more than weaponized social science: a person fills out a specially designed form, and an algorithm then crunches the numbers to offer an assessment of the test-taker’s current situation as well as a projection of her/his future behavior. Though official terminology describes this process as “evidence based” and its results as “recommendations” [iii], its basic philosophical component is much more speculative than that. Like some components of the Chinese Social Credit System, U.S. automated justice is often intimately tied to behaviorist conditioning as well as predictive policing: using algorithms to either make you do something or prevent you from doing something. In practice, this may mean that you take a “recidivism test,” and if The Machine thinks you are likely to break the law if you get out of custody, then The Machine’s “recommendation” is that you should not get out.

Assessments through algorithms require a lot of information. Privacy does not exist. The questionnaire of the risk-assessment software COMPAS, for example, contains more than 130 questions for men and more than 160 questions for women. Those questioned must open up about their whole life: childhood, education, family relations, neighborhood, acquaintances, experiences of violence, professional career, use of drugs or alcohol, prior criminal convictions and so forth. At the end, the algorithm produces “risk scores” for potential acts of violence and for recidivism, on a scale from 1 to 10.[iv]
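How exactly those answers are collapsed into a score is proprietary, but the general shape of such an instrument can be sketched. The feature names, weights and decile mapping below are purely hypothetical, chosen only to illustrate how a long questionnaire might be reduced to a single 1-to-10 number; they are not COMPAS’s actual model.

```python
# Purely hypothetical illustration: COMPAS's real model and weights are
# proprietary. This sketch only shows how questionnaire answers might be
# collapsed into a 1-to-10 "risk score" by a generic weighted-sum instrument.

HYPOTHETICAL_WEIGHTS = {
    "prior_convictions":   0.30,
    "age_at_first_arrest": -0.04,  # earlier first arrest -> higher raw score
    "unstable_housing":    0.50,
    "substance_use":       0.40,
    "unemployed":          0.35,
}

def raw_score(answers: dict) -> float:
    """Weighted sum over (hypothetical) questionnaire answers."""
    return sum(HYPOTHETICAL_WEIGHTS[item] * value for item, value in answers.items())

def decile_score(raw: float, reference_scores: list) -> int:
    """Map a raw score to 1-10 by its rank within a reference population."""
    rank = sum(s <= raw for s in reference_scores) / len(reference_scores)
    return min(10, int(rank * 10) + 1)

# Example: one (hypothetical) respondent scored against a small reference sample.
respondent = {"prior_convictions": 2, "age_at_first_arrest": 19,
              "unstable_housing": 1, "substance_use": 0, "unemployed": 1}
print(decile_score(raw_score(respondent), reference_scores=[-0.5, 0.2, 0.9, 1.4, 2.1]))
```

In this sketch, at least, the 1-to-10 value is a ranking relative to a reference sample rather than a measurement of what a given individual will actually do, which resonates with ProPublica’s findings below.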

The bias inherent in such programs has been observed on both a factual and a theoretical level. ProPublica has published a study of COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) risk projections in one Florida county, finding them significantly biased against people of color. As the study reports, people of color were often given high risk scores for minor offenses, while white people were often scored low in spite of significant prior convictions. Frequently, these risk assessments turned out to be inaccurate: people of color who had received a high risk score often did not go on to commit a crime, while significant numbers of white people with low scores did. Generally, ProPublica observed:

The score proved remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so. [v]

A question of interest in considering the racial bias of such algorithms is that of its source. Why does the COMPAS algorithm studied by ProPublica produce racially biased results?

Could this disparity be explained by defendants’ prior crimes or the type of crimes they were arrested for? No. We ran a statistical test that isolated the effect of race from criminal history and recidivism, as well as from defendants’ age and gender. Black defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind.[vi]
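The quoted test, which isolates the effect of race from criminal history, recidivism, age and gender, amounts to a regression with controls. A minimal sketch of such a test follows; the column names and the threshold for a “high” score are assumptions for illustration, not ProPublica’s published code.

```python
# Sketch of the kind of test described above: a logistic regression that
# controls for criminal history, recidivism, age and gender, so that any
# remaining association between race and a high risk score shows up in the
# race coefficient. Column names and the score threshold are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("risk_scores.csv")            # one row per defendant (hypothetical file)
df["high_score"] = (df["decile_score"] >= 5).astype(int)

model = smf.logit(
    "high_score ~ C(race) + priors_count + C(age_cat) + C(sex) + two_year_recid",
    data=df,
).fit()
print(model.summary())
# Exponentiating the race coefficient gives an odds ratio; figures such as
# "77 percent more likely" are statements of this kind.
```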

How, then, is race coded into automated justice? [vii] Is it through secondary markers, such as address, schools attended, etc.? Due to the opacity of programs like COMPAS, only speculative answers can be offered. This makes it all the more interesting that programs which helped decrease incarceration have also been significantly more transparent:

With transparency and accountability, algorithms in the criminal justice system do have potential for good. For example, New Jersey used a risk assessment program known as the Public Safety Assessment to reform its bail system this year, leading to a 16 percent decrease in its pre-trial jail population. The same algorithm helped Lucas County, Ohio double the number of pre-trial releases without bail, and cut pre-trial crime in half. But that program’s functioning was detailed in a published report, allowing those with subject-matter expertise to confirm that morally troubling (and constitutionally impermissible) variables — such as race, gender and variables that could proxy the two (for example, ZIP code) — were not being considered. [viii]
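The quoted point about the Public Safety Assessment is worth underlining: once the feature list is published, checking that race, gender and obvious proxies are absent becomes a simple, auditable step rather than an act of faith. A hypothetical sketch of such a check:

```python
# Hypothetical sketch: with a published feature list, excluding protected
# attributes and obvious proxies becomes a one-line, reviewable check.
PUBLISHED_FEATURES = {"age_at_arrest", "pending_charge", "prior_convictions",
                      "prior_violent_convictions", "prior_failures_to_appear"}
PROTECTED_OR_PROXY = {"race", "gender", "sex", "zip_code", "neighborhood"}

assert not (PUBLISHED_FEATURES & PROTECTED_OR_PROXY), \
    "protected attribute or proxy found in the published feature list"
```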

It is results such as these that made organizations like the ACLU early advocates of algorithmically assisted justice[ix]. But the use of algorithms within the U.S. justice system and its racial bias extend beyond number-crunching and include, among other things, facial recognition software with a tendency to over-identify people of color as criminals:

“If you’re black, you’re more likely to be subjected to this technology and the technology is more likely to be wrong,” House oversight committee ranking member Elijah Cummings said in a congressional hearing on law enforcement’s use of facial recognition software in March 2017. “That’s a hell of a combination.”

Cummings was referring to studies such as Garvie’s. This report found that black individuals, as with so many aspects of the justice system, were the most likely to be scrutinized by facial recognition software in cases. It also suggested that software was most likely to be incorrect when used on black individuals – a finding corroborated by the FBI’s own research.[x]

Transparency, in other words, is neither the sole problem nor the sole solution to automated justice (with the transition from algorithmically assisted to automated justice a fluid one). Even if the software programs used were fully accessible, even if the recommendations they give were not mandatory components of bail or parole decisions (as in fact they often are not), even if the algorithms were not biased, they would still perform a radical transformation and dehumanization of justice, away from a social process and toward an economic one revolving around increasingly totalitarian numbers and margins instead of people’s lives.

In smart worlds and State Cybernetic Regimes, it is easy to imagine data extracted from the social environment feeding predictive policing patterns without the need for questionnaires and well outside the frame of criminal investigations. At this point, this is the major difference between the Chinese Social Credit System and automated justice. While both rely on algorithms and data, the data used by programs such as COMPAS is not yet smart data, nor big data, and it does not have the power to influence people’s behavior a priori. While such nudging, as a prior post argued, is already a fact of life, it remains as yet outside the field of action of the justice system per se. Social Credit, nudging, automated justice/predictive policing: all three are data- and feedback-driven forms of control designed to predict and control future behavior, and all three target individuals rather than classes of people. All three structuralize norms of justice through algorithms, extracting them not only from the realm of the human, but also from situational navigations and negotiations of norms, rules of behavior and ways and means of being human. In all three cases, these structures thrive on being opaque, often to the degree of being invisible. Even if the Social Credit System comes with a catalogue of expected “good” behavior, as parole algorithms seem to do (set on law and “good conduct”), the simple fact that these behaviorist systems are automated, work on enormous amounts of data, and are executed by opaque machine logics moves them beyond the realm of complete or even sufficient understanding in people’s everyday interactions with them. This creates an atmosphere that pits blind obedience to the unfathomable requirements of The Machine or System against human interactions and community.

Other examples of algorithmically assisted ‘justice’ exist, and it is no surprise that they, too, tend (at this time) to target minorities and groups who are not recognized or treated as full citizens, viz. groups that are often situated as outsiders to and non-stakeholders in The System (for example refugees, former prisoners, people with diabetes or STDs, etc.). As in the cases considered above, the ostensible objectivity of the software reduces questions of justice to purely administrative processes and serves to externalize responsibility as well as bias into algorithms without admitting as much. A case in point is the following example from Germany:

Bavarian authorities hope to speed up the checking process [for asylum applications] with technology. They analyse metadata on smartphones and run speech samples through a “voice geometry” programme to determine the travel route and ethnic background of applicants who do not have passports.[xi]

Algorithms, then, already have a firm hand in matters of justice and in both present and predictive social control. This is not exclusive to countries considered authoritarian, but exists in different forms throughout the world. In this process, minorities and other ‘aliens’ serve as guinea pigs to introduce, test and normalize new cybernetic regimes that then slowly extend to all citizens, with the smartification of cities and global mega-events being but two of many factors driving this process forward.

[i]https://www.nytimes.com/2015/06/27/us/turning-the-granting-of-bail-into-a-science.html?_r=0

[ii]https://www.themarshallproject.org/2015/08/04/the-new-science-of-sentencing

[iii]https://www.deutschlandfunk.de/algorithmen-im-us-justizsystem-schicksalsmaschinen.1247.de.html?dram:article_id=385478

[iv]Idem.

[v]https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[vi]Idem.

[vii]Obviously, issues of race bias in software extend beyond the justice system, with famous examples being bias in Google search results (Frank Pasquale. The Black Box Society.Cambridge: Harvard UP, p.40), racist twitter bots ( https://www.nytimes.com/2016/03/25/technology/microsoft-created-a-twitter-bot-to-learn-from-users-it-quickly-became-a-racist-jerk.html ), or image analysis software categorizing black people as gorillas (https://bits.blogs.nytimes.com/2015/07/01/google-photos-mistakenly-labels-black-people-gorillas/).

[viii]https://www.nytimes.com/2017/10/26/opinion/algorithm-compas-sentencing-bias.html

[ix]https://www.zeit.de/gesellschaft/zeitgeschehen/2016-06/algorithmen-rassismus-straftaeter-usa-justiz-aclu

[x]https://www.theguardian.com/technology/2017/dec/04/racist-facial-recognition-white-coders-black-people-police

[xi]https://www.theguardian.com/world/2018/may/21/germany-to-roll-out-mass-holding-centres-for-asylum-seekers
