ANALYSIS-Chatbots in U.S. justice system raise bias, privacy concerns

Reuters | Updated: 10-05-2022 17:29 IST | Created: 10-05-2022 16:42 IST

* U.S. Department of Justice explores chatbots

* Some courts experiment with automated bots

* Civil liberties groups warn of privacy, bias risks

By Avi Asher-Schapiro and David Sherfinski

When the U.S. state of New Jersey lifted a COVID-19 ban on foreclosures last year, court officials hatched a plan to handle the expected influx of cases: train a chatbot to respond to queries. The program - nicknamed JIA - is one of a number of bots being rolled out by U.S. justice systems, with advocates saying they improve access to services while critics warn automation opens the door to errors, bias, and privacy violations.

"The benefit of the chatbot is you teach it once and it knows the answer," said Jack McCarthy, chief information officer of the New Jersey court system. "(With) a help desk or staff, you tell one person and now you've got to train every other staff member."

The trend toward such chatbots could accelerate in the near future - the U.S. Department of Justice (DOJ) last month closed a public call asking for examples of "successful implementation" of the technology in criminal justice settings. "It raises a flag that the DOJ is going to move towards funding more automation," said Ben Winters, a lawyer with the rights group the Electronic Privacy Information Center (EPIC), which submitted a cautionary comment https://epic.org/documents/epic-comments-doj-chatbot-market-survey to the DOJ.

It urged the government to study the "very limited utility of chatbots, the potential dangers of over-reliance, and collateral consequences of widespread adoption." The National Institute of Justice (NIJ), the DOJ's research arm, said it is simply gathering data in an effort to respond to developments in the criminal justice space and create "informative content" on emerging tech issues.

A 2021 NIJ report https://nij.ojp.gov/library/publications/chatbots-criminal-justice-system identified four kinds of criminal justice chatbots: those used by police, court systems, jails and prisons, and victim services. So far, most function as glorified menus that do not use artificial intelligence (AI).

But the report predicts that much more advanced chatbots, including those that measure emotions and mimic empathy, are likely to be introduced into the criminal justice system. JIA, for its part, was trained using machine learning from court documents and can handle 20,000 variants of questions and answers, from queries over wiping criminal records to child custody rules.

Its developers are trying to build more tailored services, allowing people to ask for personal information such as their court dates. But the bot is not involved in making any decisions or arbitration - "a thick line" that the court system does not intend to cross, said Sivakumar Appavoo, a program manager working on AI and robotic automation.

HIGH STAKES

Snorri Ogata, the chief information officer of Los Angeles courts, said his staff tried to build a JIA-style chatbot, trained using years of data from live agents handling questions about jury selection.

But the system struggled to give accurate answers and was often confused by queries, he said. So the court settled on a series of simpler menus that do not allow open-ended questions. "In justice and in courts, the stakes are higher, and we were stressed about directing people incorrectly," he said.

Last year, the Identity Theft Resource Center - a nonprofit that helps victims of identity theft - tried to train a chatbot to respond to victims outside working hours, when staff were not available. But the system - supported by DOJ funding - was unable to provide consistently accurate information, or respond with appropriate nuance, said Mona Terry, the chief victims officer.

In particular, it could not adapt to new identity theft schemes that cropped up during the COVID-19 pandemic, which produced new jargon and inquiries the system had not been trained for. "There's so much subtlety and emotion that goes into it - I'm not sure a chatbot could take that over," Terry said.

Emily Bender, a professor at the University of Washington who studies ethical issues in automated language models, said carefully built interfaces to help citizens interact with government documents can be empowering. But trying to build chatbots that mimic human interaction in a criminal justice context carries significant risks, she said.

"We have to keep in mind that anyone interacting with the justice system is in a vulnerable position," Bender told the Thomson Reuters Foundation. Chatbots should not be relied upon to give time-sensitive advice to those at risk, she said, while systems also need to have strong privacy protections and offer people a way to opt-out so they can avoid unwanted data collection.

The DOJ did not immediately respond to a request for comment. The 2021 government chatbot report noted "numerous benefits to implementing chatbots," including efficiency and increased access to services, while also laying out risks stemming from biased data sets, incorrect responses, and privacy implications.

'JUST DON'T BUILD THE DAMN THING'

EPIC, the digital rights group, urged the government to nudge the emerging market to produce bots that are transparent about their algorithms and respect user privacy.

It has called on the DOJ to step up regulation in the space, from requiring licenses for bots to conducting regular audits and impact assessments to hold creators accountable. Albert Fox Cahn, the founder of the Surveillance Technology Oversight Project, said it is unclear why the DOJ should be encouraging automation at all.

"We don't want AI serving as gatekeepers for access to the justice system," he said. But more and more advanced tools are already being deployed elsewhere https://news.trust.org/item/20220411160005-k1a5o.

Andrew Wilkins, the co-founder of British startup Futr, said the firm has already built bots for police to handle crime reports, from domestic abuse to COVID-19 rule violations. "There was a hesitancy about 'what if it gets (the answer) wrong'," he said, but those concerns were overcome by ensuring humans closely oversaw the bots' interactions and were looped in to answer escalated inquiries.

The company is rolling out sentiment analysis to try to detect the emotional tone of its chatbots' conversations, and is developing services that work not only on police websites but also on WhatsApp and Facebook, he said. "It's a way to democratize access to services," he said.

But for Fox Cahn, such tools are too risky to be relied on. "For me, it's pretty simple: just don't build the damn thing," he said.
