Photo: a neon sign that reads "Data has a better idea" (Franki Chamaki, Unsplash).

4 Legal Questions Raised by the Use of AI in Criminal Justice

Published December 14, 2018. This content is archived.

Hard decisions about criminal justice are increasingly being turned over to “smart machines” – computer algorithms that analyze vast amounts of data to, for example:

Recommend criminal sentences and bail decisions
Determine how to allocate police resources geographically, sometimes called “predictive policing”
Process video to identify suspects and suspicious behaviors

However, replacing human judgment with computer-generated data in the justice system is largely unexplored legal territory. Lawyers across specialties are raising questions about the morality and trustworthiness of this data when it dictates someone’s future.

1. Reliance on computer data over human judgment and values

“There’s a widespread idea that the computer is objective, but we’re increasingly aware that the design and implementation of these tools involve questions of judgment and values,” Manes says.

While AI programs are getting smarter, they still struggle to understand the subjective context of legal situations. Relying too heavily on data stripped of that context may undermine an individual’s right to a fair trial.

2. Racial & economic bias

One concern with using AI in the criminal justice system, he says, is racial and economic bias that can creep into both the data and its interpretation.

For example, the data on crime rates might suggest that police patrols should be stepped up in poor neighborhoods; the result could be more arrests in those neighborhoods, producing a prejudicial feedback loop that disproportionately targets racial minorities or poor people.
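
To see how that loop compounds, consider a deliberately simplified Python sketch. The numbers, neighborhoods, and allocation rule below are illustrative assumptions, not a model of any real deployment: two neighborhoods have identical true crime rates, patrols go wherever the historical arrest data point, and arrests are recorded only where patrols are present.

    # Toy illustration (hypothetical numbers) of a predictive-policing feedback loop.
    # Both neighborhoods have the same underlying crime rate, but "A" starts with
    # slightly more recorded arrests, so the data send every patrol there.

    TRUE_CRIME_RATE = 100   # actual offenses per period, identical in both areas
    DETECTION_RATE = 0.5    # fraction of offenses recorded when an area is patrolled

    arrests = {"A": 55, "B": 45}   # historical records: A starts slightly ahead

    for period in range(1, 11):
        hotspot = max(arrests, key=arrests.get)           # data-driven patrol decision
        arrests[hotspot] += TRUE_CRIME_RATE * DETECTION_RATE  # arrests recorded only where patrols go
        print(f"period {period:2d}: patrols sent to {hotspot}, "
              f"recorded arrests A={arrests['A']:.0f}, B={arrests['B']:.0f}")

Because the patrols follow the historical data and the data grow only where patrols go, the neighborhood that happens to start with a few more recorded arrests keeps getting flagged as the hotspot, and the disparity in the record widens every period even though actual crime never differs.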

3. Lack of regulation and accountability

In terms of regulation, the law has some catching up to do.

There’s a mismatch between novel police technology and existing legal tools meant to ensure transparency in public policy decisions. For example, current laws protecting trade secrets impede access to the information that’s necessary to understand and evaluate predictive algorithms.

Nor are existing Freedom of Information Law requests much help when the documentation or source code they yield doesn’t allow researchers to audit how AI tools work in practice.

And when the tools are created and owned by private entities, it is much harder to test them or push for changes that improve their efficacy. This raises, among other things, procedural questions about the accuracy of information used in the legal process.

4. Misuse of data (intentional or otherwise)

There are concerns from all sides about the misuse of these AI programs. Criminals may find ways to “game” law enforcement technologies, and judicial parties may find ways to use data to influence legal outcomes.

Studies are already under way to test these programs’ vulnerabilities. A significant new Freedom of Information Act case involving the American Civil Liberties Union and Privacy International seeks information about government agents’ use of computer hacking software.

Overall, this raises questions about whether these AI programs have a net positive or negative effect on the functioning of our judicial system as a whole.

Guest blogger Ashley Wilson-Rew is Content Strategist & SEM at protocol 80, Inc.
