
Summoning the demon: robot arbitrators, arbitration and artificial intelligence

Who received an Alexa for Christmas? It seems that more and more of us are comfortable with, perhaps even reliant upon, artificial intelligence in our day-to-day lives. But how comfortable would we be if robots could be appointed as arbitrators? Although this would involve a significant cultural shift, we are perhaps closer to it than we realise.

Artificial intelligence (AI) is an imprecise term, but it implies the automation of tasks or functions (including decision-making) that would usually be associated with human intelligence. It’s clear that AI tools have already taken hold in some areas of dispute resolution, usually in a bid to increase efficiency. For example, disclosure software is no longer limited to keyword searches: it can use predictive coding and natural language processing to identify relevant documents by their meaning rather than by the particular words they contain. AI tools can also be used to identify and analyse authorities, and to analyse submissions. The brutal truth is that automated tools can deal with large quantities of documents or data more quickly, efficiently and accurately than any human.
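To illustrate the idea behind predictive coding, here is a minimal sketch using the scikit-learn library: a classifier learns from a small seed set of documents that a human reviewer has already marked as relevant or irrelevant, then scores the unreviewed corpus so that the most promising documents can be reviewed first. The documents and labels below are invented for illustration; real systems iterate this train-and-review loop over millions of documents.

```python
# Minimal sketch of predictive coding for disclosure review (illustrative
# only): a classifier learns from documents a human reviewer has already
# labelled, then ranks the unreviewed corpus by predicted relevance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed set labelled by a reviewer (1 = relevant, 0 = irrelevant).
seed_docs = [
    "Board approved the disputed payment on 3 March.",
    "Lunch menu for the staff canteen, week 12.",
    "Email chain discussing breach of the supply agreement.",
    "IT notice: scheduled server maintenance this weekend.",
]
seed_labels = [1, 0, 1, 0]

vectoriser = TfidfVectorizer(ngram_range=(1, 2))
X = vectoriser.fit_transform(seed_docs)
model = LogisticRegression().fit(X, seed_labels)

# Score unreviewed documents; reviewers check the highest-ranked first.
unreviewed = ["Attached invoice relates to the supply agreement dispute."]
scores = model.predict_proba(vectoriser.transform(unreviewed))[:, 1]
print(f"Predicted relevance: {scores[0]:.2f}")
```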

Outside the case management context, AI has taken off as a tool for predicting outcomes. In 2016, researchers at UCL, the University of Pennsylvania and the University of Sheffield developed AI software which analysed the language used in submissions and previous judgments to predict the outcomes of cases before the European Court of Human Rights with 79% accuracy. Unsurprisingly, the legal services market has swiftly embraced AI, with several predictive “solutions” now well established in the litigation context. Confidentiality in arbitration means that such tools (which depend upon the input of data from previous cases) are obviously more difficult to develop and less common in the arbitration context, but at least one provider, Dispute Resolution Data, has partnered with arbitration institutions to produce an arbitration-focused equivalent. Other products allow parties to screen and assess the likely performance of their legal team and counsel. Just as computers are now better than doctors at screening for and predicting skin cancers, it seems that our robotic friends may be able to provide a more accurate assessment of the likely outcome of a case than a mere human.
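The published 2016 study used n-gram features of the case text fed into a support vector machine; the sketch below follows that general shape, though the case texts and outcome labels are invented placeholders rather than real ECtHR data.

```python
# Sketch of text-based outcome prediction in the spirit of the 2016 ECtHR
# study (n-gram features plus a support vector machine). The training
# "cases" and outcomes below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

past_cases = [
    "The applicant alleged degrading treatment during detention...",
    "The applicant complained of the length of civil proceedings...",
]
outcomes = [1, 0]  # 1 = violation found, 0 = no violation

# Pipeline: raw text -> word/phrase n-gram features -> linear SVM.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 3)), LinearSVC())
model.fit(past_cases, outcomes)

new_case = ["The applicant alleged ill-treatment in police custody..."]
print("Predicted outcome:", model.predict(new_case)[0])
```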

The success of AI in predicting outcomes naturally leads one to wonder whether computers could also carry out judicial decision-making. This is already happening in the US: in the infamous Loomis case, a judge relied on an algorithmic risk assessment in determining the appropriate sentence (though, as the Wisconsin Supreme Court subsequently held, the judge used the tool as a check, rather than actually delegating the decision to it). AI is also routinely used in bail hearings in the US to predict the risk of absconding. Could a sophisticated robot decide an arbitration? Should we welcome such a development?

There is actually quite a lot to be said for an artificial arbitrator. Appointments of robots would be less vulnerable to challenge on grounds of conflict of interest or bias. Presumably, also, their decision-making would be less likely to be tainted by the very human weaknesses of prejudice, illogicality or just having a bad day. And there is obvious potential for reducing the time and costs of hearings. Some commentators have highlighted the possibility of combining AI with blockchain, so that disputes under “smart” contracts can be determined by inbuilt software, which would then credit or debit the parties as appropriate: an entirely closed and self-executing process. That has got to be cheaper and more efficient than the vast majority of arbitral proceedings.
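To make that closed loop concrete, here is a toy sketch of a self-executing escrow, written in Python for readability rather than in an on-chain language such as Solidity. The contract terms, decision rule and ledger are all invented for illustration; the point is simply that the “award” is computed and enforced by the same piece of code, with no human arbitrator in the loop.

```python
# Toy sketch of a self-executing "smart contract" dispute process. All terms,
# the decision rule and the ledger are invented placeholders; a real version
# would run on-chain with cryptographically attested facts.
from dataclasses import dataclass

@dataclass
class Escrow:
    buyer: str
    seller: str
    amount: int           # funds locked when the contract was formed
    delivered_by_day: int # agreed deadline for performance

def decide(escrow: Escrow, actual_delivery_day: int) -> str:
    """Inbuilt 'arbitrator': applies the agreed rule to the recorded facts."""
    if actual_delivery_day <= escrow.delivered_by_day:
        return escrow.seller  # performance on time: release funds to seller
    return escrow.buyer       # late delivery: refund the buyer

ledger = {"buyer": 0, "seller": 0}
escrow = Escrow(buyer="buyer", seller="seller", amount=100, delivered_by_day=30)

# The contract renders and enforces its own 'award' in a single step.
winner = decide(escrow, actual_delivery_day=35)
ledger[winner] += escrow.amount
print(ledger)  # {'buyer': 100, 'seller': 0}
```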

But I wonder if parties are ready for this. Although AI is developing fast, it cannot, as yet, reflect the nuanced judgment of the facts that might (and perhaps should) influence a decision. Maybe that is a good thing, as “nuanced judgment” shades at some stage into “illogical human bias”, but it may not always be easy for parties to reconcile themselves to such decisions. More generally, it is difficult to see at present how robots could determine disputes that turn on factual judgment, for example, which of two witnesses to believe. A failure to give sufficient weight to the factual details of a case might also constitute a breach of the right to be heard.

This points to a further significant drawback of automated decision-making: the reasons for the decision are not transparent. Software developers are reluctant to disclose the algorithms and code that produce the output, so (unlike a reasoned arbitration award) the reasons for a decision will not be articulated.

Decision by robot must also be inherently conservative, with the associated risk of perpetuating trends and stifling development. Because outputs are based on analysis of existing data, the law loses its ability to change or develop in response to changes in human thinking, and existing biases and assumptions are replicated and perpetuated. In short, it takes a human to think outside the box.

Quite apart from these practical and policy objections, though, the possibility of decision by robot raises some difficult philosophical issues. Our arbitration law assumes that decisions are made by a human exercising a “judicial” function. At some level, it seems, we assume that there is an inherent value in being heard by a fellow human who is subject to duties of fairness and respect. Can we ever reconcile ourselves to the automated application of undisclosed algorithms as a substitute for this, however great the efficiency savings may be? Or should we accept that, if parties wish to adopt this form of dispute resolution, that is a valid choice, and amend our arbitration legislation to reflect it? There is no clear answer. When Elon Musk described AI as “summoning the demon” he had in mind weightier matters than arbitration, but there are nevertheless serious issues for the arbitration community to grapple with in the coming years.
