IUPUI Study Addresses Racial Bias Concerns In Predictive Policing

WFIU/WTIU News

Police departments are increasingly using algorithms to predict crime, and a study from Indiana University-Purdue University Indianapolis (IUPUI) aims to address concerns that these algorithms are racially biased.

It’s a method called predictive policing. A computer algorithm looks at data, including crime rates, and predicts where crime is likely to happen. Police departments then increase patrols in those areas.
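
For readers who want a concrete picture, the basic idea can be sketched in a few lines of code. This is only an illustration, not the algorithm used in the study or by any police department, and the incident data below are made up: the city is treated as a grid, past incidents are counted per grid cell, and the highest-count cells are flagged as candidate patrol areas.

```python
# Illustrative sketch only -- not the algorithm used in the study.
# Counts past incidents per grid cell and flags the highest-count
# cells as candidate patrol areas. All data below are hypothetical.
from collections import Counter

def hotspot_cells(incidents, top_n=3):
    """incidents: list of (grid_x, grid_y) cells where past crimes occurred."""
    counts = Counter(incidents)
    return [cell for cell, _ in counts.most_common(top_n)]

past_incidents = [(2, 3), (2, 3), (5, 1), (2, 3), (5, 1), (7, 7)]
print(hotspot_cells(past_incidents))  # e.g. [(2, 3), (5, 1), (7, 7)]
```

Real systems use far more sophisticated statistical models, but the core loop is the same: historical crime data in, predicted locations out, patrols adjusted accordingly.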

But some groups criticize the technology, saying it reinforces racial bias in policing.

“These ‘predictive policing’ tools are used primarily to further concentrate enforcement activities in communities that are already over-policed, rather than to meet human needs,” the American Civil Liberties Union wrote in a joint statement with 16 other organizations in 2016.  

IUPUI Computer and Information Science Professor George Mohler worked with the Los Angeles Police Department to test that theory.

They used both an algorithm and a human analyst to predict burglaries and motor vehicle thefts in certain areas, then compared arrest rates by ethnic group over a six-month period. Mohler says the difference wasn’t statistically significant.
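
The kind of comparison described here can be illustrated with a standard significance test. The sketch below is not the study's actual analysis, and the arrest counts are entirely hypothetical; it simply shows how one might check whether arrest rates by group differ between algorithm-directed and analyst-directed patrols.

```python
# Illustrative sketch with hypothetical arrest counts by group
# under algorithm-directed vs. analyst-directed patrols.
from scipy.stats import chi2_contingency

# Rows: patrol condition; columns: arrests by (hypothetical) groups A, B, C.
observed = [
    [34, 21, 12],  # algorithm-directed patrols
    [31, 24, 10],  # analyst-directed patrols
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"p = {p_value:.3f}")  # a large p-value suggests no statistically significant difference
```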

But he says the concerns are still valid because the algorithm is only as fair as the data it’s based on.

“The study doesn’t necessarily try to answer the question of whether the data itself is biased, because that is a very hard question to answer, [and] it’s partly subjective,” he says. “If you were to feed the wrong type of data into one of these algorithms there is a potential for bias, so hopefully these things are monitored as they are deployed in different cities.” 

Mohler hopes his study can be a framework for police departments to check for racial biases in their predictive policing.

It’s not clear how many police departments are using predictive methods, but Mohler says the Indianapolis Metropolitan Police Department is using at least some form of crime prediction.

Looking ahead, Mohler is conducting additional research to look at how departments can remove racial bias from their algorithms once it’s identified.

And he says it’s not just a problem in policing.

“Throughout our lives algorithms are starting to play a role,” he says. “So there is a subfield of machine learning research that looks at fairness of machine learning algorithms.”

One example is racial bias in facial recognition software, which can result from too few minorities being included in the initial data set.

The predictive policing study is published in the journal Statistics and Public Policy. Additional authors are P. Jeffrey Brantingham of UCLA and Matthew Valasik of Louisiana State University.