Monday, September 25, 2017

Racist AI?

“AI can detect signs of Alzheimer's”, “AI can see who is gay”, “AI predicts the next recession”. We are getting used to seeing claims that “an AI” can do all sorts of things. And such claims seem quite credible, given the amazing things we can now download to our phones.

One of the more controversial applications of “AI” is a system apparently already in use in the United States for estimating the likelihood that convicted criminals will reoffend. The whole concept has been criticized for being racist, and because the private company that provides the algorithm doesn't disclose how it works.

But the implicit assumption, even among many critics, seems to be that if you let a computer do something, it will automatically be fair. Everyone is treated the same. You take away all emotions, prestige, bias, and prejudice, and base the decision on facts and logic only.

A claim that can be found here and in several other articles is that the algorithm has been shown to be consistently much more likely to falsely flag black people as being at high risk of recidivism, and conversely to more often mistakenly classify white people as low risk.

An article in ProPublica then goes on to say:

"If computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be fairer and more selective about who is incarcerated and for how long. The trick, of course, is to make sure the computer gets it right."

No, it isn't.

As far as I can see, there is no evidence that there is anything wrong with the algorithm. The problem is that the very idea of determining time in jail based on statistical predictions of the risk of reoffending is incompatible with individual justice, and reinforces structural racism.

Let's first straighten one thing out: even though the AI probably doesn't have direct information about ethnicity or skin color, these correlate with a number of factors the system does see, like education, employment, living standards and so on. Therefore, even if we don't tell the computer who is black and who is white, its output may still correlate strongly with ethnicity. So we might as well assume that the AI (indirectly, but still) has information about people's ethnicity and how it correlates with crime.

But even if black people are convicted more often on average than white people, shouldn't the mistakes of the algorithm go equally often in both directions? Isn't the apparent bias an indication that something is wrong with the algorithm?

No. This is the problem. And it has to do with a simple mathematical fact that won't go away no matter how you tinker with the algorithm.

Suppose instead, just to take a less controversial and more clear-cut example, that you try to predict the outcomes of games in some sports event like a soccer league. You can see from the statistics that team A seems to be the strongest, winning more than 50% of its games no matter the opposition, and conversely that team Z loses a majority of its games even against the weaker of the other teams.

Then your best bet isn't going to be some scenario where team A wins 70% of its games, or whatever its statistics say. Instead, you maximize your expected number of correct predictions by guessing that team A will win every single game, and similarly that team Z will lose every single game.

This guess, which is the best one given the information you have, will appear unfair if you look at the cases where it was incorrect. Team Z will be incorrectly predicted to lose, and team A incorrectly predicted to win, much more often than the other way around. In fact, in this example the other way around never happens at all, since team A is always predicted to win and team Z is always predicted to lose.
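A minimal simulation makes the asymmetry concrete. All the numbers below (70% and 30% win rates, 1,000 games per team, no draws) are hypothetical, chosen only to mirror the example above:

```python
import random

random.seed(0)
N = 1000  # simulated games per team (hypothetical number)

# Simulate outcomes: True means the team wins that game (draws ignored for simplicity).
a_results = [random.random() < 0.7 for _ in range(N)]  # team A wins about 70% of its games
z_results = [random.random() < 0.3 for _ in range(N)]  # team Z wins about 30% of its games

# Best constant guesses under 0/1 loss: predict that A always wins and Z always loses.
a_falsely_predicted_to_win = sum(not won for won in a_results)
z_falsely_predicted_to_lose = sum(won for won in z_results)

print(f"Team A falsely predicted to win:  {a_falsely_predicted_to_win}/{N}")
print(f"Team Z falsely predicted to lose: {z_falsely_predicted_to_lose}/{N}")
# Errors in the opposite direction never occur: under this strategy,
# team A is never predicted to lose and team Z is never predicted to win.
```

The strategy that maximizes overall accuracy concentrates all its mistakes in one direction per team: every error involving team A is a game it was wrongly predicted to win, and every error involving team Z is a game it was wrongly predicted to lose.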

The same thing will probably happen (to some extent, but less clear-cut) if we try to guess whether people will commit crimes based on, for instance, their level of education and where they live (things which, incidentally, also correlate with ethnicity).

And there is no way of making this go away by fixing a bug in the algorithm, because it doesn't have anything to do with the computer or the code. If you try to predict criminality based on a factor X that correlates with it, then people who have X will more often be labeled high risk, and will also more often be falsely flagged as high risk.
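A toy simulation closer to the recidivism setting shows the same effect. All the numbers are made up: half the population has some factor X, X doubles the true reoffense rate from 20% to 40%, the risk score is simply the group rate plus noise unrelated to the individual, and the top-scoring 30% are flagged as high risk:

```python
import random

random.seed(0)
N = 100_000

population = []
for _ in range(N):
    has_x = random.random() < 0.5          # half the population has factor X (made-up)
    base_rate = 0.4 if has_x else 0.2      # X doubles the true reoffense rate (made-up)
    reoffends = random.random() < base_rate
    # Risk score: the group's base rate plus noise unrelated to the individual.
    score = base_rate + random.gauss(0, 0.1)
    population.append((has_x, reoffends, score))

# Flag the top-scoring 30% of people as "high risk".
cutoff = sorted(s for _, _, s in population)[int(0.7 * N)]

def false_flag_rate(group):
    """Fraction of people in the group who do not reoffend but are flagged anyway."""
    non_reoffenders = [(x, r, s) for x, r, s in group if not r]
    flagged = [p for p in non_reoffenders if p[2] >= cutoff]
    return len(flagged) / len(non_reoffenders)

with_x = [p for p in population if p[0]]
without_x = [p for p in population if not p[0]]
print(f"False-flag rate, people with X:    {false_flag_rate(with_x):.1%}")
print(f"False-flag rate, people without X: {false_flag_rate(without_x):.1%}")
```

Among the people who do not in fact reoffend, those with X are flagged far more often than those without, even though the score faithfully reflects the group statistics. The disparity comes from the difference in base rates, not from a bug.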

Letting such predictions influence penalties is simply incompatible with individual justice.

There is much more to be said about this. For instance, there is a strong political movement in Sweden and in many other countries that wants precisely this form of structural discrimination, and that actually prefers it to individual justice, as long as it gets to choose which factors to incorporate (like being an immigrant) and which to ignore (like membership in the Nordic Resistance Movement).
