Over the past decade, manufacturing, finance, and communications have all come to rely on artificial intelligence. But one industry continues to put up resistance: the legal profession, where opponents have long regarded the idea of a machine making legal decisions as dangerous and ethically untenable. That’s about to change, says John Zeleznikow, a computer scientist at Victoria University in Melbourne, Australia. Zeleznikow believes AI is about to improve people’s access to justice and massively reduce the cost of running legal services.
Joining forces with Andrew Stranieri at the University of Ballarat, also in Victoria, Zeleznikow launched the startup JustSys to develop AI-based online legal systems that don’t overstep the ethical line. Judges already use one of its programs to assist them in the complicated and arcane process of sentencing criminals. Divorce lawyers and mediators use another, called SplitUp, to help couples settle property disputes without resorting to the courts.
But by far the most widely used program is GetAid, which assesses applicants’ entitlement to legal aid. Historically, assessment has consumed a significant portion of Victoria Legal Aid’s operational budget. Using something like GetAid frees lawyers and paralegals from the task so they can spend their time actually representing people, says Domenico Calabro, a Victoria Legal Aid lawyer. The system is due for commercial launch in the next month or so, and according to Calabro, the Australian authorities are considering whether to roll it out nationally.
Of course, lawyers and judges have been using specialized software tools for years. But what sets these AI-based programs apart is their ability to draw inferences from past cases and predict how the courts are likely to interpret new cases.
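To make that concrete, the underlying idea can be sketched as a nearest-neighbor prediction over a database of decided cases. The sketch below is a minimal illustration in Python with invented case features and outcomes; it shows the general technique, not JustSys’s actual models or data.

```python
# Minimal sketch of case-based outcome prediction: infer the likely
# outcome of a new case from its most similar past cases. The features
# and outcomes are invented for illustration.
from collections import Counter

# Each past case: a vector of encoded facts and the court's outcome.
PAST_CASES = [
    ([1, 0, 1, 1], "granted"),
    ([1, 1, 1, 0], "granted"),
    ([0, 0, 1, 0], "denied"),
    ([0, 1, 0, 0], "denied"),
    ([1, 0, 0, 1], "granted"),
]

def similarity(a, b):
    """Count how many encoded facts two cases share."""
    return sum(1 for x, y in zip(a, b) if x == y)

def predict_outcome(new_case, k=3):
    """Majority vote among the k most similar past cases."""
    ranked = sorted(PAST_CASES,
                    key=lambda case: similarity(new_case, case[0]),
                    reverse=True)
    votes = Counter(outcome for _, outcome in ranked[:k])
    return votes.most_common(1)[0][0]

print(predict_outcome([1, 0, 1, 0]))  # prints "granted"
```

Real systems weigh facts far more carefully and must justify their inferences, but the principle is the same: past decisions become data from which a likely outcome can be projected.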
It’s not the first time researchers have tried to develop such tools. In the 1980s, an AI-based program was developed at Imperial College London to interpret an immigration law called the British Nationality Act. Critics worried that the system, which was never implemented, could be used to entirely bypass lawyers – and the protections they afford. “Parliament produces statutes, but these are reinterpreted by the legal profession,” says Blay Whitby, an AI expert at the University of Sussex in England. Taking lawyers out of the loop could allow a government too much control over the interpretation of laws, he says.
Zeleznikow believes his software overcomes these kinds of problems by preserving human participation and by limiting the system’s authority. GetAid, for example, cannot reject applicants; it can only approve them. All other applications are referred to human officers for reassessment. Similarly, a judge can use the Sentencing Information System to help examine trends in other judges’ decisions, but ultimately, the sentence has to come from the judge. “They are still making the decision, but their decision is far more visible,” says Stranieri. This, in turn, encourages greater consistency in sentencing, he says.
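That safeguard is easy to picture in code. The sketch below uses wholly invented eligibility thresholds to show the asymmetry Zeleznikow describes: the program may approve a clear-cut application, but anything it cannot approve is referred to a person rather than rejected.

```python
# Sketch of an "approve-only" triage step: the program can grant aid in
# clear-cut cases but has no power to reject. The criteria and limits
# are hypothetical, not GetAid's actual rules.
from dataclasses import dataclass

@dataclass
class Application:
    income: float   # applicant's weekly income (hypothetical criterion)
    assets: float   # applicant's liquid assets (hypothetical criterion)

APPROVE_INCOME_LIMIT = 360.0
APPROVE_ASSET_LIMIT = 1000.0

def triage(app: Application) -> str:
    """Approve only unambiguous cases; refer everything else to a human."""
    if app.income <= APPROVE_INCOME_LIMIT and app.assets <= APPROVE_ASSET_LIMIT:
        return "approved"
    return "refer to human officer"

print(triage(Application(income=300.0, assets=500.0)))    # approved
print(triage(Application(income=900.0, assets=5000.0)))   # refer to human officer
```

The asymmetry means any mistake the automated stage can make falls on the side of sending an applicant to a person, never of denying them aid outright.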
Sussex’s Whitby is cynical about the legal profession’s opposition to AI. Loss of income and an irrational suspicion of technology are at least partly responsible, he says. “They might also have to sharpen up their arguments and practices to deal with machine-advised clients,” he says. Of course, attitudes within the legal profession may change as a new generation of techno-savvy lawyers comes through.
In fact, this already appears to be happening, says Stuart Forsyth, a consultant for the American Bar Association’s Futures Committee. Forsyth sees a willingness, at least among U.S. lawyers, to embrace technology. The likely reason, he says, is the growing trend toward self-representation in U.S. courts. “In domestic dispute cases it’s well over 50 percent, and in others it’s as high as 80 percent,” he says. This is worrying, he says, because people going into court with no legal skills may be getting short shrift. The bottom line: artificial justice may be better than no justice at all.