The use of artificial intelligence and algorithms to predict whether citizens will commit crimes, a practice that lacks an adequate scientific basis, is controversial among many AI experts. For this reason, a coalition of researchers has asked academia to stop publishing studies that endorse this technology.
The group, called the Coalition for Critical Technology, has appealed to colleagues conducting such research, urging them to abandon these studies for good. Research into crime-prediction algorithms is, in fact, often subject to biases that end up treating social groups differently on the basis of skin colour or other prejudices.
As the coalition wrote in a letter on Medium signed by 1,700 experts, “there is no way to develop a system capable of predicting crime without this mechanism being subject to bias, precisely because the notion of crime is naturally subject to prejudice. Research in this area cannot be neutral.”
The letter was written after Springer, the world’s largest publisher of academic books, announced that it would also publish studies in this field. Among them, the one that drew the coalition’s attention is “A Deep Neural Network Model to Predict Criminality Using Image Processing”, in which several researchers claim they can build a facial-recognition algorithm able to predict whether someone is a criminal with up to 80% accuracy and without any bias.
Following the Coalition for Critical Technology’s request, Springer reportedly decided not to publish the research, which had proved unreliable once subjected to peer review. The coalition’s choice to act during the Black Lives Matter protests, which have also involved companies such as Google, is probably coincidental; the timing, however, may well amplify the media coverage the initiative receives.