Keeping up with artificial intelligence

“New technology raises new problems,” says Clinical Assistant Professor Jonathan Manes, “and it needs to be regulated in new ways.”

Nowhere is that more apparent than in the ways companies and governments use artificial intelligence – the analysis of vast amounts of data to do everything from targeting the ads on your Facebook feed to predicting which criminal defendants are likely to reoffend. It’s a hot growth area in business, computer science, and governance – one that legal scholars like Manes are working to stay ahead of.

“When people talk about AI,” he says, “mostly they’re talking about machine learning algorithms, systems that ingest a large quantity of data and find patterns in that data. I’ve been most focused on how AI systems may change the way government works and how public services are rendered.”
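
To make that concrete, here is a minimal sketch of what “ingesting data and finding patterns” looks like in code. It is not from the article; the data, feature names, and labels are invented for illustration. A model is fit to past examples and then applied to a new case.

    # A toy machine learning model, sketched with scikit-learn.
    # All data, features, and labels below are invented for illustration.
    from sklearn.linear_model import LogisticRegression

    # Each row describes a past case: [hours of care requested, documented diagnoses]
    X = [[40, 3], [10, 1], [35, 4], [5, 0], [25, 2], [8, 1]]
    # Each label records the past outcome: 1 = approved, 0 = denied
    y = [1, 0, 1, 0, 1, 0]

    model = LogisticRegression()
    model.fit(X, y)                  # "ingest the data" and fit a pattern
    print(model.predict([[30, 2]]))  # apply the learned pattern to a new case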

Read Professor Manes’ article “Secrecy and Evasion in Police Surveillance Technology,” forthcoming in the Berkeley Technology Law Journal.

For example, he cites a recent case in which a government agency used a computer algorithm to determine the amount of coverage Medicaid recipients were entitled to for home health aides, and whose claims would be denied – but didn’t tell the clients how it came to those decisions, many of which involved significant reductions in benefits. “It raises questions of fairness and due process,” Manes says. “Those are the sorts of issues that are going to be cropping up all over the place as we deploy these systems more widely.”

An attendant challenge is the complexity of these algorithms and how they’re used, raising the question of what information a critic actually needs in order to make informed judgments about their use. “It’s an interesting and hard problem, how we translate those old transparency laws and oversight systems to this new technology,” Manes says. “FOIA gives you the right of access to records the government has, and records are important but they may not be enough. The source code for an AI system, for example, may not tell you the whole story without access to the training data. FOIA doesn’t give you a right to sit at the computer system that the government uses and figure out how it works. So there can be a mismatch.”

It’s not just lawyers who are grappling with issues like how human biases can become encoded in the decisions computers make on our behalf. That’s why Manes meets regularly with UB colleagues in fields as diverse as computer science, industrial engineering, architecture, and media study, as part of an ongoing effort to examine the implications of the increasing use of AI.
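
As a hypothetical illustration of how such bias gets codified, the sketch below (not from the article; the ZIP-code flag, incomes, and past decisions are all invented) trains a model on historical decisions that systematically denied applicants from one neighborhood – and the model faithfully reproduces that pattern for new applicants.

    # A hypothetical example of bias becoming "codified" in a model.
    # The ZIP-code flag, incomes, and past decisions are all invented.
    from sklearn.tree import DecisionTreeClassifier

    # Features: [income in $1,000s, lives_in_zip_14201 (0 or 1)]
    X = [[30, 1], [32, 0], [45, 1], [47, 0], [28, 1], [29, 0]]
    # Past decisions: every applicant from the flagged ZIP code was denied
    y = [0, 1, 0, 1, 0, 1]

    model = DecisionTreeClassifier().fit(X, y)

    # Two otherwise-identical new applicants get different outcomes,
    # because the model has learned the historical pattern as a rule.
    print(model.predict([[40, 1], [40, 0]]))  # -> [0 1]

Note that even if the ZIP-code flag were removed, features correlated with it can act as proxies and reproduce the same disparity.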

An initial seed grant for this Ethical AI Working Group funded a speaker series and has spawned research on machine learning in the criminal justice system and the public sector. That collaboration has now paid off in a new grant from the Mozilla Foundation – the non-profit organization behind the popular Firefox web browser – to develop materials on ethics and fairness that will become part of UB’s standard computer science curriculum.

The $150,000 grant will fund curriculum development for each of the four years of the undergraduate computer science program. “The idea is to create teaching modules that focus on the ethical decisions and consequences embedded in the design of computer systems, particularly AI systems,” Manes says. “I’m likely to be contributing ideas and problems involving legal constraints (like anti-discrimination and privacy laws) that track some of the ethical concerns computer scientists need to be thinking about.”

The resulting curriculum will be made available to other universities through an open-access license.  

“The common approach is, ‘We just build the thing and it’s not our job to see how it’s used.’ The focus is on the technical challenge,” Manes says. “What my colleagues in computer science want to do is teach their students that the way they build systems cannot help but embed value judgments and has real consequences. So we’ll be developing modules throughout the curriculum that bring that to life.”