Duncan Purves

What’s Wrong with Predictive Policing?



The European Union is leading the global race to regulate artificial intelligence research and development. In June 2023, the European Parliament is scheduled to vote on the Artificial Intelligence Act, which establishes a framework for classifying the risk level of AI applications, imposing more stringent requirements on riskier systems. One of the notable features of the Act is that it outright bans some AI-driven technology, including most applications of facial recognition software, social credit scoring systems like those used in China, AI systems that could harmfully manipulate the behavior of children, and predictive policing systems, which process historical crime data to predict the places and people at greatest risk of crime. As a researcher who studies the ethical implications of emerging technology, I find predictive policing quite unlike the other items on this list. As the U.S. begins to craft its own federal regulatory response to AI, likely taking EU regulation as a point of reference, it is therefore worth asking: what’s wrong with policing by algorithm?


What is predictive policing, and what can it be?


Under the proposed EU regulation, an AI system is classified as “high-risk” if it creates “a high risk to the health and safety or fundamental rights of natural persons.” This risk is assessed by considering the “function performed by the AI system” as well as the “purpose and modalities for which that system is used.” For example, a facial recognition system may not raise any alarm bells when considered only in terms of its “function”—identifying faces—but we might become concerned to learn that the system has been designed for the purpose of dragnet surveillance by law enforcement.


[...] place-based predictive policing [...] is a less drastic change to the status quo than other forms of predictive policing. [... It] seems to simply help police do what they were doing already: identify risky places so that police know where to allocate their limited resources.

“Predictive policing” has become a pejorative term referring to a cluster of technologies used to predict future crime. Some predictive policing systems predict future criminals, others predict future victims, and still others predict where crime will occur in the future. Distinctive ethical and legal concerns confront each of these systems, but the EU proposal does not distinguish clearly between them. I will focus on place-based predictive policing because it is a less drastic change to the status quo than other forms of predictive policing. Place-based predictive policing seems to simply help police do what they were doing already: identify risky places so that police know where to allocate their limited resources.


The Los Angeles Police Department (LAPD) pioneered the most well-known and widely criticized predictive policing application in the United States. In 2011, the LAPD began experimenting with software called PredPol, which predicts where crime will take place using a proprietary algorithm trained on data about the type, location, and time of past crimes. Premised on a criminological phenomenon known as the “near repeat effect,” wherein one crime event in an area tends to give rise to a spike in similar crimes in that area, PredPol was used by the LAPD to forecast vehicular theft and theft from a vehicle at highly specific locations and times during a police officer’s patrol shift. These forecasts were displayed on a computer screen as “heat maps” of 500-foot-by-500-foot boxes indicating high-risk areas in the officer’s beat. Information about the time, location, and type of recent crime incidents could be incorporated each day to update the algorithm’s forecasts.
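PredPol’s actual model is proprietary, but the published research behind it describes a self-exciting process in which recent nearby crimes temporarily elevate a location’s risk. The sketch below is a minimal illustration of that idea, not PredPol’s algorithm: the grid size matches the 500-foot boxes described above, but the decay rate, background rate, and function names are invented for the example.

```python
from collections import defaultdict
import math

# Minimal illustration of a "near repeat" risk score, NOT PredPol's
# proprietary algorithm. Each past crime adds risk to its grid cell,
# and that contribution decays exponentially as the crime ages.
# The decay and background parameters are invented for this example.

CELL_FEET = 500  # forecasts are displayed over 500-foot-by-500-foot boxes

def cell_of(x_feet, y_feet):
    """Map a location (in feet) to its grid cell."""
    return (int(x_feet // CELL_FEET), int(y_feet // CELL_FEET))

def risk_scores(incidents, now_days, decay_per_day=0.2, background=0.01):
    """incidents: list of (x_feet, y_feet, t_days) past crime events.
    Returns a dict mapping each touched grid cell to a risk score."""
    scores = defaultdict(lambda: background)  # small baseline for scored cells
    for x, y, t in incidents:
        age = now_days - t
        if age >= 0:
            # recent crimes contribute more; older ones fade toward zero
            scores[cell_of(x, y)] += math.exp(-decay_per_day * age)
    return scores

# The "heat map" step: flag the highest-scoring cells for the next shift.
incidents = [(120, 80, 9.0), (140, 60, 9.5), (2600, 2100, 2.0)]
hot_cells = sorted(risk_scores(incidents, now_days=10.0).items(),
                   key=lambda kv: kv[1], reverse=True)[:3]
print(hot_cells)  # e.g. [((0, 0), 1.73...), ((5, 4), 0.21...)]
```

A deployed system would also weight incidents by crime type and allow risk to spill over into neighboring cells; the point here is only to show how “near repeats” translate into a ranked heat map.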


[...] neither the function (identifying high-risk places) nor the purpose (allocating police officers on patrol) of predictive policing is especially novel.

Insofar as systems such as PredPol predict where crime will occur, they are merely an extension of a policing movement, beginning in the US in the 1980s, that emphasized proactive crime prevention rather than merely responding to calls for service. Complementary to this proactive approach were the first systematic attempts by criminologists to identify the places with the greatest vulnerability to crime (Sherman et al. 1989; Sherman and Weisburd 1995). Predictive policing takes a “big data” approach to identifying high-risk places, allowing police departments to update crime forecasting systems in real time while drawing on hundreds of different data sources to inform predictions. These forecasts are then used to decide where to send police on patrol when they aren’t otherwise responding to calls for service. So, neither the function (identifying high-risk places) nor the purpose (allocating police officers on patrol) of predictive policing is especially novel. Why, then, does it inspire such public and academic backlash? To use the language of the EU Artificial Intelligence Act, what are the risks to human health, safety, or fundamental rights posed by predictive policing?


The major risks: bias and unfairness


The most widespread criticism of place-based predictive policing is that it discriminates against people of color (Lum and Isaac 2016). This discriminatory treatment can result from a combination of two features of some predictive policing systems.


First, the data used to predict high-risk places and people can have dubious origins. Suppose that the history of policing in the United States shows a disparity in arrests between Black and White citizens. This is a realistic possibility, given that Black people are vastly overrepresented in America’s prison population. Suppose further that this disparity in arrests is partly a product of racial discrimination by police in selecting whom to investigate and arrest. If police target people differently based on race, then arrest data, particularly for drug and nuisance crimes, which tend to be officer-discovered, will be influenced by this racial bias. If arrests reflect racial bias on the part of police, and arrests are used to generate the forecasts of predictive policing systems, then those systems will tend to forecast more crime in Black communities than in fact takes place there. The systems’ forecasts will thus cause residents of those communities to face excessive police attention.


Second, once police inundate an area, this can lead to more police contacts, incidents, and arrests than there were before. This data is fed back into the system and used to make future predictions, leading to a “ratchet effect” of escalating police attention (Harcourt 2006). This cycle of ever-escalating police attention caused by predictive policing is what some scholars have called a “runaway feedback loop” (Ensign et al 2018). Insofar as predictive policing systems ratchet up the effects of unjust and discriminatory policing patterns of the past, these systems are unfair to Black communities and therefore not ethically justifiable.
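Ensign et al. model this formally; the toy simulation below is a simplified stand-in for their argument, not their actual model. Two districts have identical true crime rates, but one starts with more recorded incidents, perhaps because of the biased arrests described above. Patrols follow the data, and only patrolled crime gets recorded, so the initial skew feeds on itself. The district names and rates are invented for the example.

```python
import random

# A toy "runaway feedback loop," in the spirit of Ensign et al. (2018)
# but much simplified. Both districts have the SAME true crime rate;
# district A merely starts with more recorded incidents (e.g., from
# historically biased arrests). Each day the one available patrol goes
# wherever the data says crime is worst, and only crimes in the
# patrolled district are observed and recorded.

random.seed(0)
true_rate = 0.5                # identical underlying rate in both districts
recorded = {"A": 10, "B": 5}   # the initial skew in the historical data

for day in range(1000):
    patrolled = max(recorded, key=recorded.get)  # allocate by past records
    if random.random() < true_rate:              # a crime occurs today...
        recorded[patrolled] += 1                 # ...and only here is it seen

print(recorded)  # A's lead snowballs; B's crime goes unrecorded entirely
```

In this stark version, district B’s crime is never recorded at all. Real systems are noisier, but the direction of the distortion is the same.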


The concern about bias in predictive policing is founded on one empirical claim and one normative claim, and both claims can be called into question. The empirical claim is that predictive policing systems use data about arrests to predict future crimes. The normative claim is that additional police attention is unfairly burdensome for the communities that receive it.


The empirical claim is questionable because many predictive policing systems omit arrest data precisely because it generates feedback loops. Instead, these systems rely on inputs such as calls for service, which are coded by location, time, and type of crime, and on information about features of places that make them vulnerable to crime. For example, locations with multifamily housing might face an elevated risk of car theft, especially on cold winter mornings, because residents leave their cars running to warm up before leaving for work. Calls for service and spatial vulnerability are not perfect predictors of crime, but they do avoid feedback loops: sending more police to an area has no obvious effect on the number of calls for service coming from it, and no effect at all on the physical features that make it prone to crime.
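To see why this matters, we can rerun the toy simulation above with calls for service in place of officer-discovered incidents. Because residents report crimes at the true rate whether or not a patrol is nearby, the recorded counts track the (equal) underlying rates and the initial skew stops compounding. Again, the setup is illustrative, not drawn from any deployed system.

```python
import random

# The same toy model, but forecasts are now driven by calls for service.
# Residents call in crimes at the true rate regardless of where the
# patrol is sent, so patrol allocation no longer distorts the data.

random.seed(0)
true_rate = 0.5
calls = {"A": 10, "B": 5}  # same initial skew as before

for day in range(1000):
    patrolled = max(calls, key=calls.get)  # patrols still follow the data...
    for district in ("A", "B"):
        if random.random() < true_rate:    # ...but crimes are reported
            calls[district] += 1           # whether or not police are nearby

print(calls)  # both counts grow at the same rate; the skew stops compounding
```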


Police on patrol expose innocent community members to the risk of being stopped, questioned, frisked, arrested, or even physically assaulted. If these burdens accrue primarily to Black communities, and the benefits of crime prevention are enjoyed elsewhere, this appears to be unfair. [...] the costs of additional police patrols must outweigh the benefits of crime deterrence for Black community members.

The normative claim on which the concern about bias rests is that it is unfairly burdensome for predominantly Black communities to receive more police attention. In some respects, this attention is undoubtedly burdensome. Police on patrol expose innocent community members to the risk of being stopped, questioned, frisked, arrested, or even physically assaulted. If these burdens accrue primarily to Black communities, and the benefits of crime prevention are enjoyed elsewhere, this appears to be unfair. The claim that putting more police patrols in a community is unfair therefore has merit, especially given the fraught relationship between Black Americans and police in the US. At the same time, there is good evidence that directed police patrols are at least somewhat effective at deterring crime, so adding patrols to an area also benefits people who would otherwise have been victimized. Therefore, if predictive policing is impermissible because it is unfair, the costs of additional police patrols must outweigh the benefits of crime deterrence for Black community members. In any given case, it is futile to speculate from the armchair about how these costs and benefits will weigh up. It is therefore difficult to adjudicate the claim that predictive policing is unfair to Black communities.


Giving decisions back to the communities policed


However, even if we can’t weigh up the costs and benefits from the armchair, we can think up rules for how the costs and benefits should be weighed, and who should weigh them. I have argued in print for one such rule: benefits from a policing practice like predictive policing count in favor of the practice only if the beneficiaries of the practice have not legitimately disavowed the practice (Purves 2022).


To motivate this constraint, I propose that victims of harm possess a degree of moral authority over defensive interventions aimed at benefitting them. Just as it is my right to refuse a medical intervention that conflicts with my moral or religious commitments, it is my right to refuse a defensive intervention intended to prevent someone else from harming me if that intervention conflicts with my moral or religious commitments (Parry 2017). If I refuse a life-saving surgery, my physician can no longer justify performing the surgery by appealing to the fact that it benefits me. I have blocked those benefits from justifying the surgery.

Similarly, residents of Black communities can have legitimate reasons to disavow predictive policing. For example, it is reasonable for Black Americans to see predictive policing as being in tension with their obligation to give greater weight to the interests of friends and family than to the interests of strangers. Black Americans have reasons, grounded in duties to friends and family, to refuse policing practices that concentrate patrols in their communities. Consider the simple fact that young Black men are by far the most likely to be victims of police violence. Accordingly, many Black Americans who are parents and grandparents can reasonably see concentrated police attention in their communities as being in tension with their obligations, qua parents and grandparents, to protect their children and grandchildren from harm. If Black Americans disavow predictive policing on the grounds that it seriously threatens their loved ones, a police force may not justify predictive policing by appealing to the fact that it prevents their victimization.


[...] even if the crime reduction benefits of predictive policing outweighed the burdens for Black communities, predictive policing might still be unfair [...] because benefits to members of Black communities who legitimately disavow predictive policing cannot be used to justify the burdens that predictive policing imposes on innocent people.

The upshot of this constraint is that even if the crime reduction benefits of predictive policing outweighed the burdens for Black communities, predictive policing might still be unfair. It might still be unfair because benefits to members of Black communities who legitimately disavow predictive policing cannot be used to justify the burdens that predictive policing imposes on innocent people.
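The structure of this point can be put schematically. In the notation below (mine, illustrative rather than drawn from Purves 2022), P is a policing practice, B the set of its beneficiaries, D the subset of beneficiaries who have legitimately disavowed P, b_i the benefit to person i, and c_j the burden on each burdened person j:

```latex
% Illustrative schema only, not taken from Purves (2022).
% Ordinary cost-benefit test: P is justified when total benefits
% exceed total burdens.
\sum_{i \in B} b_i \;>\; \sum_{j} c_j
% Disavowal-constrained test: benefits to the disavowing set D are
% struck from the justificatory ledger.
\sum_{i \in B \setminus D} b_i \;>\; \sum_{j} c_j
% The first inequality can hold while the second fails: predictive
% policing could pass an ordinary cost-benefit test and still be unfair.
```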


If citizens possess the moral authority to block benefits from justifying a law enforcement practice that they disavow, what can be done to promote fairness without abandoning predictive policing wholesale? One approach is simply to reduce the burdens on innocent people. If more police officers in an area means more harm to innocent people, then police departments should use predictive policing forecasts in connection with interventions that do not simply put more police in high-risk places. For example, crime forecasts can be used to identify the features of places that make them vulnerable to crime and then to change or eliminate those features. Demolishing abandoned buildings that harbor drug deals can be as effective at preventing crime as sending police patrols there, and it is less likely to lead to a violent confrontation between police and citizens. A second approach is for law enforcement to actively seek endorsement from a larger share of the potential beneficiaries of predictive policing. Community policing takes community input seriously in determining law enforcement priorities. This can take several forms, including town hall meetings, meetings between the police chief and neighborhood associations, and citizen-led police advisory councils. Through greater collaboration with community members, police departments can gain community support for new policing initiatives like predictive policing, thereby ensuring that the beneficiaries authorize their use.


Through greater collaboration with community members, police departments can gain community support for new policing initiatives like predictive policing, thereby ensuring that the beneficiaries authorize their use[: ...] the ethical landscape of predictive policing is more subtle and complex than the language of the EU’s Artificial Intelligence Act would suggest.

As the above discussion illustrates, the ethical landscape of predictive policing is more subtle and complex than the language of the EU’s Artificial Intelligence Act would suggest. I suspect this will prove true of many AI applications upon closer examination. As the US and EU embark on their respective paths to AI regulation, they must therefore seek to address the ethical risks of each AI application without overlooking its societal benefits.


References


Ensign, D., Friedler, S. A., Neville, S., Scheidegger, C., and Venkatasubramanian, S. (2018) ‘Runaway Feedback Loops in Predictive Policing’. Conference on Fairness, Accountability and Transparency, 81, 160–71. PMLR.

Harcourt, Bernard E. (2006) Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age. Chicago: University of Chicago Press.

Lum, Kristian, and William Isaac. (2016) ‘To Predict and Serve?’ Significance, 13, 14–19.

Parry, Jonathan. (2017) ‘Defensive Harm, Consent, and Intervention’. Philosophy & Public Affairs, 45, 356–96.

Purves, Duncan. (2022) ‘Fairness in Algorithmic Policing’. Journal of the American Philosophical Association, 8, 741–61.

Sherman, L., Gartin, P., and Buerger, M. (1989) ‘Hot Spots of Predatory Crime: Routine Activities and the Criminology of Place’. Criminology, 27, 27–56.

Sherman, L., and Weisburd, D. (1995) ‘General Deterrent Effects of Police Patrol in Crime Hot Spots: A Randomized Controlled Trial’. Justice Quarterly, 12, 625–48.


Duncan Purves is Associate Professor of Philosophy at the University of Florida. His research on predictive policing is funded by National Science Foundation Award #1917712: Artificial Intelligence and Predictive Policing: An Ethical Analysis.

