Rise of the machines: Manes and colleagues look at “ethical AI”


Published September 24, 2018

Assistant Clinical Professor Jonathan Manes spearheads an ambitious new project that will look at the ethical and social implications of the use of artificial intelligence, or AI.

“We want to work in both directions – to build concerns about ethics, fairness and accountability into the tools as they’re developed, and to think about ways to regulate the tools after they’re built.”
Jonathan Manes, Assistant Clinical Professor
School of Law

Hard decisions about criminal justice are increasingly being turned over to “smart machines” – computer algorithms that analyze vast amounts of data and decide, for example, where to deploy more police patrols.

For non-techies, that raises some pointed questions. Just how do these programs make their decisions? What safeguards are in place to prevent racial and economic bias in gathering and analyzing the data? Who’s watching out for fairness?

Enter an ambitious new project spearheaded by Assistant Clinical Professor Jonathan Manes, who directs the School of Law’s Civil Liberties and Transparency Clinic, and five colleagues in UB’s Computer Science, Industrial Engineering, Architecture, and Media Study departments.

The group has been awarded $25,000 in seed funding for a year-long series of projects looking at the ethical and social implications of this increasing use of artificial intelligence, or AI. The grant is part of the University’s Germination Space program, which aims to foster interdisciplinary research on AI issues.

“There’s a widespread idea that the computer is objective, but we’re increasingly aware that the design and implementation of these tools involve questions of judgment and values,” Manes says. “This grant is meant to bring together people who are building AI tools – the computer scientists and engineers – and people who are thinking about how they affect society.

“We want to work in both directions – to build concerns about ethics, fairness and accountability into the tools as they’re developed, and to think about ways to regulate the tools after they’re built. I’m learning from my colleagues in computer science and other technology disciplines about how these systems work and how the law can respond in a way that preserves fairness and accountability.”

The researchers will look specifically at ways machine learning is used in making decisions in the criminal justice system. Manes cites a variety of current uses for this technology: recommending criminal sentences and bail decisions; determining how to allocate police resources geographically, sometimes called “predictive policing”; and processing video to identify suspects and suspicious behaviors.

Manes and his colleagues Varun Chandola (who is serving as principal investigator on the grant), Michael Bolton, Kenn Joseph, Atri Rudra, and Mark Shepard plan to identify one or two of these technologies and, he says, “tackle them from several different approaches.” He expects the research to result in academic articles, extending his current scholarship. His recent projects include a law review article on the mismatch between novel police technology and the existing legal tools meant to ensure transparency in public policy decisions; another on the concern that criminals might “game” law enforcement technologies; and a third on how trade secret protections impede access to the information necessary to understand and evaluate predictive algorithms.

One concern with using AI in the criminal justice system, he says, is racial and economic bias that can creep into both the data and its interpretation. For example, the data on crime rates might suggest that police patrols should be stepped up in poor neighborhoods; the result could be more arrests in those neighborhoods, producing a feedback loop that disproportionately targets racial minorities or poor people.
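To make that feedback loop concrete, here is a minimal toy simulation (entirely hypothetical; the neighborhoods, rates and allocation rule below are illustrative assumptions, not anything from the UB project). It shows how a naive “predictive” patrol allocation, trained only on its own arrest records, can lock in an initial disparity even when two neighborhoods have identical underlying crime rates:

```python
import random

random.seed(0)

# Two hypothetical neighborhoods with IDENTICAL true crime rates.
TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}

# Recorded arrests start with a small historical disparity:
# neighborhood A was patrolled more heavily in the past.
arrests = {"A": 12, "B": 8}

TOTAL_PATROLS = 100

for year in range(10):
    # Naive "predictive" rule: allocate patrols in proportion
    # to each neighborhood's share of recorded arrests.
    total = sum(arrests.values())
    patrols = {n: round(TOTAL_PATROLS * a / total) for n, a in arrests.items()}

    # Crime is only *recorded* where patrols are present, so more
    # patrols mean more recorded arrests, even at identical true rates.
    for n, count in patrols.items():
        observed = sum(random.random() < TRUE_CRIME_RATE[n] for _ in range(count))
        arrests[n] += observed

    share_a = arrests["A"] / sum(arrests.values())
    print(f"year {year}: patrols={patrols}, A's share of arrests={share_a:.2f}")
```

Because the system only observes crime where it sends patrols, the skewed allocation keeps generating data that appears to justify it – the model never receives the evidence it would need to correct itself.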

And in terms of regulation, Manes says, the law has some catching up to do. For example, existing Freedom of Information Law protections aren’t much help if the documentation or source code they yield doesn’t allow researchers to audit how AI tools work in practice.

The collaboration with colleagues across campus may create new legal practice opportunities for student attorneys in Manes’ Civil Liberties and Transparency Clinic. Funds from the grant will support legal efforts to obtain data and information about criminal justice algorithms, building on the Clinic’s growing docket at the intersection of law, technology and criminal justice. Recent examples include the clinic’s policy report on the Buffalo Police’s body camera program and a significant new Freedom of Information Act case, brought with the American Civil Liberties Union and Privacy International, that seeks information about the use of computer hacking software by government agents.

True to the School of Law’s commitment to the broad sharing and cross-pollination of ideas, the researchers will organize a speaker series with visits by six experts in “ethical AI” over the academic year, and a major workshop at year-end with an invited speaker and presentations by UB researchers. The funding will also support student research assistantships, including one for a law student to work on legal and policy projects.

Long term, with the support of outside funding, the UB researchers hope to establish an interdisciplinary Center for Ethical AI to continue studying these emerging issues.