Why the use of technology in arbitrators’ selection process – although fostered – must still be handled carefully

Daniel Becker & Ricardo Dalmaso Marques

The use of advanced technology in arbitration proceedings has become a source of concern for many practitioners in the arbitral community. Right from the start, therefore, it is worth highlighting that the widespread notion that human beings will soon be replaced by robot lawyer look-alikes is a misconception: such a scenario is still far from feasible, or even desirable, in many ways. In fact, even though technology has been mercilessly disrupting every industry, lawyers have historically tended to worry disproportionately about the impact of these changes on their practice. And despite the breadth of the mash-up between technology and the law, here we intend to address specifically the benefits and perils of using artificial intelligence in the process of selecting arbitrators.

For starters, arbitration is a legal practice in which a good part of the tasks involved are bespoke. Therefore, except with respect to document review and forensics, automation cannot impact it to the same extent it does other legal activities, such as case law research and simple document assembly. Yet, a lot has been said recently about the use of technology in the arbitrator-appointment process, which has been frequently mentioned in many fora, with particular emphasis on the case many are making to expand transparency in international and domestic arbitration,[1] including:

  • Catherine Rogers’s project “Arbitrator Intelligence”, which aims to create a database of arbitrators, coupled with a reputation evaluation system built from feedback given by participants. It is intended to help reduce the information gap in the arbitrator appointment and selection process, as well as the lack of diversity in arbitration,[2] by providing and ensuring well-deserved recognition to high-quality arbitrators who may not be being taken into consideration.[3]
  • The “Global Arbitration Review Arbitrator Research Tool” (GAR ART), which provides insights regarding arbitrators’ procedural preferences and practices.[4]

And going further, practitioners who are also technology enthusiasts seem eager to apply artificial intelligence – in particular, learning algorithms such as machine learning and deep learning – to automate the selection or appointment of arbitrators. This is of significant relevance considering that arbitrators are still selected today as they were in the 20th century: through basic web browsing, arbitral institutions’ lists, business cards, phone calls, and disclosures seldom made to the extent and depth they should be.

Several positive aspects of using AI in this process can therefore be identified. Gathering information on a potential arbitrator – including the information disclosed by arbitral institutions in their attempts to increase transparency in the arbitration system – and analyzing it, in particular, would be much easier and less time-consuming. It should also help guarantee – or at least increase the chances – that the arbitrator is impartial and independent, and that she or he has a good reputation under the criteria that matter to the parties (including availability!), which is something the duty of disclosure may never fully accomplish.[5]

Above all, with AI enabling human beings to make better-informed choices, it would be possible to break out of the classic loop of arbitrator selection, reducing conflicts of interest and improving diversity in the field. This means parties would have access to a wider pool of potential arbitrators actually considered for the position, based on metrics and meticulous criteria rather than mere personal beliefs – an issue that several initiatives, such as the “Equal Representation in Arbitration Pledge” and “ArbitralWomen”, have been striving to address.

Thus, technology may assist the parties in selecting arbitrators by allowing them to (finally) obtain reliable data on arbitrators’ patterns, social and professional networks, and previous and current appointments and performance. If properly used, although it may not entirely solve the diversity issue for now, it could become an efficient weapon in the fight for a stronger and more legitimate process of selecting and evaluating arbitrators. At the end of the day, the selection of arbitrators will remain a matter of choice by the parties, the arbitral institution and other arbitrators, as the case may be; nevertheless, that decision may be progressively based on reliable data about aspects and qualities of arbitrators that once went unnoticed, were unreachable, or at least lacked proper appreciation and reflection at this crucial moment of the arbitral proceedings (and even during their course).

Nonetheless, however good a premise this may be, we must not blindly adhere to a general faith in the solidity and correctness of so-called analytics or data science.[6] Because it is informed by datasets, artificial intelligence will necessarily reflect the society that produced that data, and may inherit both societal biases and the particular biases of those who fed it. In data science, we are liable to encode our prejudices and biases, whether explicitly and intentionally or not.

A worrying – yet revealing – example is the use of algorithms by the US and Canadian criminal justice systems. Several states employ software to evaluate the likelihood that defendants will reoffend.[7] The nonprofit newsroom ProPublica carried out an investigation called “Machine Bias”, contending that the algorithms used to predict defendants’ recidivism were biased against African Americans, tainting the reliability of the predictions.[8] The algorithm is alleged to have made serious mistakes with black and white defendants at roughly the same rate – but in opposite directions: black defendants were disproportionately mislabeled as high risk, while white defendants were disproportionately mislabeled as low risk.[9]

Now, imagine using such algorithms to appoint arbitrators. The definition of a suitable or good arbitrator is not objective.[10] Artificial intelligence may treat a well-ranked arbitrator as one who has good availability, renders awards quickly, et cetera. The algorithm may therefore conclude, for example, that it makes more sense to appoint men over pregnant women on temporary leave. This (unforgivable and mistaken) bias could lower the score and rank of potential female arbitrators[11] – and let us never forget that today’s most influential arbitrator is a woman (Gabrielle Kaufmann-Kohler)![12]

Thus, we maintain that technology can definitely provide more objective criteria for identifying prospects for the position, improving the process of selecting the arbitral tribunal’s members. However, it should be used cautiously, for its data input depends on human activity, which may “teach” the algorithm to act in a biased manner. Although proper data may significantly improve the process, automated decisions are far from bias-free, which in turn reveals a need for the arbitration community to address them carefully, mindful of the possible backlash (biased appointments). This is why data such as gender, age and ethnicity must not be included in these datasets (or should at least be handled carefully) – at least for now – in order to prevent unwanted outcomes from automated processes.[13]
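To make the mechanism concrete, the proxy effect described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration – the fields, weights and candidate profiles are all our own assumptions, not any real arbitrator-ranking system:

```python
# Hypothetical sketch of a naive "availability" metric for ranking
# arbitrators. All field names and weights are illustrative assumptions.

def naive_score(candidate):
    """Score an arbitrator candidate on output volume and availability alone."""
    score = 10.0 * candidate["avg_awards_per_year"]
    # Penalizing any past leave looks "neutral", but parental leave turns
    # this term into a proxy for gender: the metric silently downgrades
    # candidates who took such leave.
    score -= 5.0 * candidate["months_on_leave_last_5y"]
    return score

candidates = [
    {"name": "Arbitrator A", "avg_awards_per_year": 4, "months_on_leave_last_5y": 0},
    # Identical professional output, but recently on parental leave:
    {"name": "Arbitrator B", "avg_awards_per_year": 4, "months_on_leave_last_5y": 6},
]

ranking = sorted(candidates, key=naive_score, reverse=True)
# No "gender" field appears anywhere in the data, yet the ranking still
# penalizes the candidate who took parental leave: merely dropping the
# sensitive attribute does not remove the bias carried by its proxies.
```

Note the design point: even with gender excluded from the dataset, correlated attributes can reintroduce the very bias the exclusion was meant to prevent – which is why the paragraph above insists that such data be handled carefully rather than merely deleted.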

A century ago, criminal anthropologists used to claim that a typical criminal displayed certain facial and social patterns. Technology may resurrect similar understandings in some cases. Recently, a study carried out at Stanford University indicated that an algorithm could tell whether a person was homosexual simply by scanning his or her facial biometric data on social networks. The research helped shed light on a very important issue regarding artificial intelligence: it does not matter whether it is right; it matters whether we believe it.[14]


[1] ROGERS, Catherine A. Transparency in International Commercial Arbitration, 54 U. Kan. L. Rev. 1301 (2006).

[2] DALMASO MARQUES, Ricardo. To Diversify or Not to Diversify. Report on the Session ‘Who Are the Arbitrators’. In: VAN DEN BERG, Albert Jan (Org.). ICCA Congress Series 18 – Legitimacy: Myths, Realities, Challenges. 1st ed. The Hague: Kluwer Law International, 2015, v. 18, p. 579-588.

[3] DALMASO MARQUES, Ricardo. O Dever de Revelação do Árbitro. Almedina, São Paulo, 2018.

[4] PAISLEY, Kathleen; SUSSMAN, Edna. Artificial Intelligence Challenges and Opportunities for International Arbitration. New York Dispute Resolution Lawyer Volume 11 No. 1 (Spring 2018).

[5] Ibid.

[6] O’NEIL, Cathy. The era of blind faith in big data must end. TED Ideas. Available at: https://www.ted.com/talks/cathy_o_neil_the_era_of_blind_faith_in_big_data_must_end/transcript. Accessed on April 7, 2019.

[7] EPIC. Algorithms in the Criminal Justice System. Electronic Privacy Information Center. Available at: https://epic.org/algorithmic-transparency/crim-justice/. Accessed on April 7, 2019.

[8] ANGWIN, Julia et al. How we analyzed the COMPAS recidivism algorithm. ProPublica. Available at: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm. Accessed on April 7, 2019.

[9] BECKER, Daniel; FERRARI, Isabela. Start spreading the news: NYC regulará seus algoritmos. JOTA. Available at: https://www.jota.info/opiniao-e-analise/artigos/start-spreading-the-news-nyc-regulara-seus-algoritmos-08012018. Accessed on April 7, 2019.

[10] BAROCAS, Solon; SELBST, Andrew D. Big data’s disparate impact. California Law Review, 671 (2016).

[11] FERRARI, Isabela; BECKER, Daniel; WOLKART, Erik Navarro. Arbitrium ex machina: panorama, riscos e a necessidade de regulação das decisões informadas por algoritmos. Revista dos Tribunais, vol. 995, September, 2018.

[12] Global Arbitration Review. Who is the most influential arbitrator in the world?. January 16, 2016. Available at: https://globalarbitrationreview.com/article/1035051/who-is-the-most-influential-arbitrator-in-the-world. Accessed on April 7, 2019.

[13] HUTSON, Matthew. Even artificial intelligence can acquire biases against race and gender. Science Magazine. Available at: http://www.sciencemag.org/news/2017/04/even-artificial-intelligence-can-acquire-biases-against-race-and-gender. Accessed on April 7, 2019.

[14] VINCENT, James. The invention of AI ‘gaydar’ could be the start of something much worse. The Verge. Available at: https://www.theverge.com/2017/9/21/16332760/ai-sexuality-gaydar-photo-physiognomy. Accessed on April 7, 2019.
