Data-Driven Justice?

Before you demand an “evidence-based” or “data-driven” justice system, make sure you know who is actually doing the driving.

Jeffrey Butts, July 2021 (rev)

Someone has to pay for the evaluation evidence used to fashion community safety strategies, and evaluation research can be quite expensive. Some statistical work may be conducted by analysts operating on their own, but good evaluation research is labor intensive, often requiring paid staff, original data collection, and external funding. When we review the findings of evaluation research, we are essentially seeing answers to whatever questions researchers were paid to investigate.

What sort of questions tend to be ignored by evaluation research? How does this affect public policy and our understanding of the best ways to ensure community safety and community well-being?

Evaluation evidence comes from investments made by policymakers, government agencies, and foundations. Their investments are not free of bias. They reflect the goals, beliefs, and values of funding bodies as shaped by cultural, class-based, and racial biases. And, of course, the biases of economic and cultural elites tend to favor non-structural explanations of crime.

In other words, foundation officers and elected officials instinctively prefer to locate the origins of crime in individual pathology rather than inequality, injustice, and community disinvestment. As a result, justice research more often than not measures the effects of interventions on individual behavior instead of social structure and community context.

Here’s the key question: Are wealthy neighborhoods relatively free of violence because so many inherently non-violent people decided to live there, or do the structural and economic advantages of wealthier neighborhoods themselves lead to lower rates of violence? Reasonable answers to this question would suggest a range of public safety policies, but research often concentrates on just one — individual interventions to address anti-social behaviors.

Evaluation researchers are also rational creatures. Their main goal is to publish – a lot. They prefer to evaluate interventions that can be tested quickly so that publications follow quickly, and it is less time-consuming to study policies focused on individuals, especially when pre-existing administrative data make it possible to skip the slow work of collecting new data.

Community-level interventions and primary prevention programs are time-consuming, hard to control, and more likely to produce ambiguous findings. Sample sizes are inherently smaller, which makes statistical rigor more elusive. Testing the effects of interventions at the neighborhood or community level may lead to fewer publications and less exciting results. So, like their funding partners, evaluation researchers often prefer to address public safety issues at the level of individual behaviors, testing therapeutic interventions and law enforcement strategies rather than strategies designed to improve community safety overall.

Just remember, when someone tells you “what research says” about effective ways to reduce crime and violence, they’re describing a research base that was created by people and organizations with opinions, values, and self-interests.

Research findings don’t appear like wildflowers in a meadow. They are planted and watered by gardeners with intention.