Steven Spielberg’s “Minority Report” pictured a future where police arrested potential murderers before they actually committed their crimes, based on the predictions of three humans with the ability to foresee the future.

The future shown in the movie isn’t as far from reality as you might think. The New York Times has published a story about how a judge used a recommendation made by an algorithm to decide on the prison term for a defendant.

To make a long story short, back in 2013, in Wisconsin, a judge sentenced a man to a six-year prison term. When deciding on the term, the judge used an algorithm called Compas. Based on an analysis of previous conduct, behavioral patterns, and other factors, Compas calculates the likelihood that someone will commit another crime and suggests what kind of supervision a defendant should receive in prison.
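Nobody outside the company knows what Compas actually computes, but to make the idea concrete, here is a minimal sketch of what a risk-scoring tool of this kind might look like. Every factor, weight, and cutoff below is purely hypothetical and invented for illustration; none of it is drawn from Compas.

```python
# Purely hypothetical sketch of a recidivism risk score.
# The factors, weights, and cutoffs are invented for illustration;
# Compas is proprietary and its actual model is not public.

def risk_score(prior_offenses: int, age_at_first_offense: int,
               failures_to_appear: int, employment_status: str) -> float:
    """Return a score between 0 and 1; higher means 'higher risk'."""
    score = 0.0
    score += min(prior_offenses, 10) * 0.05            # more priors, higher risk
    score += 0.2 if age_at_first_offense < 18 else 0.0 # early first offense
    score += min(failures_to_appear, 5) * 0.04         # missed court dates
    score += 0.1 if employment_status == "unemployed" else 0.0
    return min(score, 1.0)

def supervision_level(score: float) -> str:
    """Map the score to a suggested supervision level."""
    if score >= 0.7:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"

if __name__ == "__main__":
    s = risk_score(prior_offenses=3, age_at_first_offense=17,
                   failures_to_appear=1, employment_status="unemployed")
    print(round(s, 2), supervision_level(s))  # 0.49 medium
```

With the real Compas, the defense can’t see even this level of detail, which is exactly what the appeal is about.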

The defendant has challenged the judge’s reliance on the Compas score before the Wisconsin Supreme Court. The appeal focuses on the criteria used by the Compas algorithm, which is proprietary; nobody outside the company actually knows what criteria went into the score.

The defendant’s lawyer argues that his client should be able to review the algorithm and make arguments about its validity as part of his defense. The developer of Compas, a private software company, obviously declines to disclose the algorithm, saying that it’s proprietary.

The truth is that what needs to be reviewed here is not just the algorithm. The algorithm itself might be totally fine, and the way it calculates the score might make perfect sense. What really matters is how representative the training data set was and what factors (along with gender and previous criminal activity) were taken into consideration when scoring the probability of committing another crime.

Suppose, for example, that the data set used for training was racially skewed, say because people of certain races are over-represented in the underlying records. After being trained on this biased data set, the algorithm would give worse scores to people of those races. This is one of the reasons why the algorithm used in Pennsylvania (developed by a public agency and available to the public for analysis, by the way) excludes race.
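To see how that can play out, here is a small sketch, not Compas but a toy logistic-regression model on synthetic data, in which re-offenders from one group are over-represented in the training set because of how the records were collected. Both simulated groups behave identically, yet the model learns to assign higher risk scores to the over-represented group.

```python
# Toy illustration (not Compas): a sampling-biased training set
# produces biased risk scores when group membership is a feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

def simulate_group(group_flag, oversample_reoffenders=1.0):
    """Both groups have identical underlying behavior: the true re-offense
    probability depends only on prior offenses. `oversample_reoffenders` > 1
    mimics biased data collection that records re-offenders from this group
    more often than non-re-offenders."""
    priors = rng.poisson(1.5, n)
    p_true = 1.0 / (1.0 + np.exp(-(priors - 2)))
    reoffend = rng.random(n) < p_true
    # Biased collection: keep every re-offender, drop some non-re-offenders.
    keep = reoffend | (rng.random(n) < 1.0 / oversample_reoffenders)
    X = np.column_stack([priors[keep], np.full(keep.sum(), group_flag)])
    y = reoffend[keep].astype(int)
    return X, y

# Group 0 is recorded fairly; group 1's re-offenders are over-represented 3x.
X0, y0 = simulate_group(0, oversample_reoffenders=1.0)
X1, y1 = simulate_group(1, oversample_reoffenders=3.0)
X, y = np.vstack([X0, X1]), np.concatenate([y0, y1])

model = LogisticRegression().fit(X, y)

# Two identical defendants (two prior offenses each) differing only by group.
print(model.predict_proba(np.array([[2, 0], [2, 1]]))[:, 1])
# The group-1 score comes out noticeably higher, purely because of how
# the training data was collected, not because of anything the person did.
```

Replace “group” with race and this toy example becomes exactly the concern with a proprietary risk tool: you have no way of telling whether the data behind it was collected like group 0 or like group 1.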

Of course, I’m oversimplifying to a certain extent. But what worries me is the reliance on an algorithm developed by a private company that refuses to disclose either the algorithm or the way it was trained. Without knowing whether or not it is biased, how can one use its output to hand down a prison term? The article in The New York Times quotes Ezekiel Edwards, the director of the Criminal Law Reform Project at the American Civil Liberties Union, who said that data from the criminal justice system is often unreliable, so using it in training data sets for these algorithms calls into question the results they generate.

Usually, I’m not a big fan of government regulation, and I see quite a bright future for AI, but perhaps Elon Musk wasn’t that wrong when he said that AI needs to be regulated before it’s too late. At the very least, the way AI is used by law enforcement agencies has to be transparent. Don’t assume that AI will punish only the bad guys, while good guys like yourself have nothing to worry about. If it’s not transparent and you don’t really know how it differentiates between good guys and bad guys, how can you be sure that you yourself will always be a good guy from the AI’s perspective?

 
