A robot in a judge’s chair


While discussing the uses of artificial intelligence (AI) in automated writing, one of my former students recently argued that it holds immense promise for the legal arena and could help our extremely slow judiciary prepare orders and judgments automatically by sifting through litigants’ voluminous submissions. It immediately reminded me of a 2005 Doraemon comic in which Nobita was sent to jail by a robot judge for his bad behaviour. Well, should there be a ‘human touch’ in judgments? Thus my study began.

A robot judge may sound like real-life sci-fi. Yet the exponential growth of computing technology, the global frenzy around big data analytics and machine learning, and the torrents of data continuously generated by the internet of things covering almost everything around us are transforming our world at unprecedented speed. I was looking at a 2013 study by CB Frey and MA Osborne of the University of Oxford, which focused on the future of employment, particularly how susceptible jobs are to computerisation.

The study of the impact of technology on 702 occupations found lawyers and judges to be more or less at the midpoint of jobs likely to be replaced by technology. Experts perceive that whilst developments in ‘Judge AI’ or ‘Judicial AI’ are in their infancy, there are indicators that they will become increasingly relevant. The UK-based AI-driven legal services chatbot ‘DoNotPay’, for example, was launched in 2015 as a ‘robot lawyer’ supported by IBM’s Watson computer.


As per a 2016 report in ‘The Guardian’, the chatbot had contested more than 250,000 parking tickets in London and New York and won 160,000 of them, all free of charge. In similar fashion, a robot named Xiaofa stands in the Beijing No 1 Intermediate People’s Court, offering legal guidance and helping the public make sense of legal terminology. While ‘AI assistants’ can support judges in their decision-making by predicting and preparing judicial decisions, ‘robot judges’ can replace human judges and decide cases autonomously in fully automated court proceedings. Well, can robots become successful judges?

Terence Mauri, a celebrated author and speaker on AI, thinks such machines will recognise physical and psychological signs of deceitfulness with 99.9 per cent precision. He anticipates that machines will be common in civil and criminal hearings in England and Wales within 50 years. AI-enabled robot judges have been in action in China since 2017, hearing specific categories of cases such as trade disputes, e-commerce liability claims and copyright infringements. Millions of such cases have been handled by robot judges so far.

Quite often, though, it is not a robot physically sitting in the judge’s chair – the AI system analyses the uploaded information and determines a verdict based on law and facts. In the United States, Los Angeles has been working on a jury chatbot project, and several other courts are adopting online dispute resolution (ODR) initiatives to handle a range of conflicts. And no discussion of technology can possibly be complete without a reference to Estonia, arguably the most advanced digital society in the world.

Well, the Estonian Ministry of Justice has asked its chief data officer, Ott Velsberg, to design an AI-enabled ‘robot judge’ to adjudicate small-claims disputes of less than €7,000, which might help clear the backlog of paperwork, speed up decision-making and make judicial services much more efficient. Here too, the two parties will upload documents and other relevant information, and the AI will issue a decision that can be appealed before a human judge.

Certainly, there are several advantages as well as risks to having AI in the courtroom. The role of a judge is a complex one. And the boundary between technology and humans needs to be judiciously set. In a 2018 research article titled ‘Do Judges Need to Be Human? The Implications of Technology for Responsive Judging’, Tania Sourdin of the University of Newcastle, Australia, and Richard Cornes of the University of Essex wrote: “The role of the human judge though is not merely that of a data processor. To reduce judging to such a definition would be to reject not only the humanity of the judge, but also that of all those who come before them.”

Again, a 2019 research article published in the journal ‘Legal Studies’ takes a sceptical look at the possibility of advanced computer technology replacing judges. The paper sounds a note of caution about the capacity of algorithmic approaches to fully penetrate this socio-legal milieu and reproduce the activity of judging, properly understood.

In December 2018, Justice Surya Kant, then Chief Justice of the Himachal Pradesh High Court and presently a Supreme Court judge, expressed his concern: “If e-technology would be allowed to overpower the judicial field without any ‘Lakshman Rekha’, are we marching towards a stage of engaging robots in place of judicial officers?”

And, in a 2018 interview, US Chief Justice John Roberts was asked whether he could foresee a day “when smart machines, driven with artificial intelligences, will assist with courtroom fact finding or, more controversially even, judicial decision making”. He responded: “It’s a day that’s here and it’s putting a significant strain on how the judiciary goes about doing things”. Well, was Eric Loomis’ case at the back of the Chief Justice’s mind?

In 2013, Eric Loomis was found driving a car that had been used in a shooting. He was convicted and sentenced to six years’ imprisonment, at least in part on the recommendation of a private company’s secret proprietary software called COMPAS, which works using an algorithm that considers some of the answers to a 137-item questionnaire. In 2016, the Wisconsin Supreme Court upheld the sentence.

Loomis, however, submitted a petition for certiorari to the US Supreme Court on the ground that his constitutional right to due process was violated, as neither he nor his representatives were able to scrutinise or challenge the accuracy and scientific validity of the risk assessment produced by the algorithm behind the recommendation.

The petition also alleged that the system violates due process rights by taking gender and race into account. The US Supreme Court, however, denied certiorari in June 2017, declining to hear the case. The case certainly raised important concerns, and it continues to be referenced in any serious discussion of an AI-driven courtroom. One needs to ask what sort of cognitive biases are involved when an AI suggests what a judge should do. Some recent research by Joanna Bryson, a computer science professor at the University of Bath, suggests that such biases are indeed possible.

Using a predictive algorithm in cases involving sentencing and verdicts is not equivalent to recommending which film one should watch next, for example. Thus, both the possibilities and the dangers of AI-driven judgments are limitless!

In Steven Spielberg’s 2002 movie ‘Minority Report’, set in Washington DC in 2054, the ‘PreCrime’ police force is capable of predicting future murders using data mining and predictive analytics. However, when an officer of the system is accused of one such future crime, he sets out to prove his innocence! Does this Spielberg film depict the future of the AI-driven courtroom? And is that future dystopian or utopian?

(The writer is Professor of Statistics, Indian Statistical Institute, Kolkata)
