Published December 14, 2018
“We want to work in both directions – to build concerns about ethics, fairness and accountability into the tools as they’re developed, and to think about ways to regulate the tools after they’re built. I’m learning from my colleagues in computer science and other technology disciplines about how these systems work and how the law can respond in a way that preserves fairness and accountability.” - Jonathan Manes, assistant clinical professor of law and director of UB Law’s Civil Liberties and Transparency Clinic
Hard decisions about criminal justice are increasingly being turned over to “smart machines” – computer algorithms that analyze vast amounts of data to, for example:
- Recommend criminal sentences and bail decisions
- Determine how to allocate police resources geographically, sometimes called “predictive policing”
- Process video to identify suspects and suspicious behaviors
However, replacing human judgment with computer-generated data in the justice system is largely unexplored legal territory. Lawyers across specialties are raising questions about the morality and trustworthiness of this data when it dictates someone’s future.
“There’s a widespread idea that the computer is objective, but we’re increasingly aware that the design and implementation of these tools involve questions of judgment and values,” Manes says.
While AI programs are getting smarter, they still struggle to understand the subjective context of legal situations. Relying too heavily on data stripped of that context may, by human standards, intrude on an individual’s right to a fair trial.
One concern with using AI in the criminal justice system, he says, is racial and economic bias that can creep into both the data and its interpretation.
For example, the data on crime rates might suggest that police patrols should be stepped up in poor neighborhoods; the result could be more arrests in those neighborhoods, producing a prejudicial feedback loop that disproportionately targets racial minorities or poor people.
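The feedback loop described above can be sketched in a toy simulation. All numbers here are made up for illustration, and the allocation rule (send patrols wherever the data shows the most recorded arrests) is a deliberate simplification of how predictive policing systems work: both neighborhoods have the same underlying crime rate, but a small initial disparity in the arrest records snowballs, because crime is only recorded where police are present to observe it.

```python
# Toy model of a prejudicial feedback loop in "predictive policing".
# Hypothetical numbers; not drawn from any real system or dataset.

TRUE_CRIME_RATE = 0.05        # identical in both neighborhoods
POPULATION = 10_000           # identical in both neighborhoods
arrests = {"A": 60, "B": 40}  # hypothetical historical arrest records

for year in range(10):
    # Predictive allocation: patrol the neighborhood the data labels
    # "high-crime" -- i.e., the one with the most recorded arrests.
    target = max(arrests, key=arrests.get)
    # Crime is only observed (and recorded) where police patrol,
    # so only the patrolled neighborhood's record grows.
    arrests[target] += int(POPULATION * TRUE_CRIME_RATE)

print(arrests)  # -> {'A': 5060, 'B': 40}
```

Despite identical true crime rates, neighborhood A ends up with over a hundred times B’s recorded arrests, and the data appears to “confirm” the original patrol decision. This is the sense in which the design and implementation of such tools involve judgment and values, not just objective measurement.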
In terms of regulation, the law has some catching up to do.
There’s a mismatch between novel police technology and existing legal tools meant to ensure transparency in public policy decisions. For example, current laws protecting trade secrets impede access to the information that’s necessary to understand and evaluate predictive algorithms.
Also, existing Freedom of Information Law protections aren’t much help when they yield only documentation or source code that doesn’t allow researchers to audit how AI tools work in practice.
So, if the tools are created and owned by private entities, it’s much harder to test them and make changes to improve their efficacy. That, in turn, raises procedural questions about the accuracy of information used in the legal process.
There are concerns from all sides about the misuse of these AI programs. Criminals may find ways to “game” law enforcement technologies, and judicial parties may find ways to use data to influence legal outcomes.
There are already studies in the works to test program vulnerability. A significant new Freedom of Information Act case with the American Civil Liberties Union and Privacy International seeks information about the use of computer hacking software by government agents.
Overall, this raises the question of whether these AI programs have a net positive or negative effect on the functioning of our judicial system as a whole.