Article by Lucy Ryder

The answer is absolutely not. Human justice should be determined by human beings. As advanced as artificial means such as the mind-boggling Artificial Intelligence of ‘Smartsettle One’ are, they do not have the capacity to administer justice. Justice is more than a set of binary protocols that can be ‘processed’: it is the basis of human civilisation and humanity.

So… a short essay! And yet… maybe a closer look at the current ‘human’ system might subvert that first sentence.

The use of AI to determine legal disputes would involve sacrificing ‘the positive aspects of humanity’ (Rasnačs, 2018), such as empathy and emotion, in exchange for unassailable logic. Dzintars Rasnačs began the 2018 CEPEJ conference with a reference to the use of AI in recent years. For example, in the 1970s, software was created to play chess, a game with its origins in the 6th century. Rasnačs acknowledged that, over time, this software has developed the capacity to beat the majority of human players, jesting with the room about how few people could win above the intermediate level. Chess is a game of pure logic and strategy, and so AI software can boast this mastery, as the process of learning involves observing patterns and rules in an isolated context.

Although the principles of logic used in chess are necessary in Law, the human complexity underpinning the Law cannot be understood without the emotion derived from the experience of living. In his essay ‘The Place of Logic in the Law’, Cohen (1916) comments that ‘you cannot construct a building merely out of the rules of architecture’. Whilst Aristotle’s deductive reasoning can be replicated by AI, this forms only the skeleton of what is essential when determining a legal dispute. The ‘construction materials’ are an arsenal of contextual understanding and empathy: concepts somewhat inherent to humanity, yet foreign to AI. The use of AI to settle minor monetary disputes has been undeniably successful. However, extending this to more serious forms of Alternative Dispute Resolution, or to criminal or human rights proceedings, would eliminate a strong element of justice within the legal process, with Jack Freeman (Arbitration Partner at Allen & Overy) commenting that ‘the mediation process is, inherently, a human one’ (Beioley, 2019).

It must be acknowledged that the use of AI has the potential to improve the efficiency of dispute resolution. In November 2017, a project at Cambridge University entitled ‘Case Cruncher Alpha’ predicted the results of 775 financial ombudsman cases with 86.6% accuracy, whereas a panel of seasoned lawyers achieved 66.3% (Rogers and South, 2018). Additionally, in 2005, Ashley and Gordon successfully used AI software to complete the judgement process for prisoners’ parole hearings (Contissa, 2018). Applied to cases on a mass scale, the financial and professional relief would be significant. However, the cost of applying such endeavours on a global scale, regrettably, outweighs the benefits. The replacement of humanity in determining legal disputes would create a cold and inflexible form of justice, which would not be within the borders of ethical practice.

The first principle of the Ethical Charter is the ‘respect for fundamental rights and freedoms’ (CEPEJ, 2018, p. 7). A ‘fundamental right’ set out in the European Convention on Human Rights is Article 6, the right to a fair trial. In establishing this right, the Convention states that everyone is entitled to a hearing before ‘an independent and impartial tribunal established by law’ (Council of Europe, 2010, p. 6). Principally, it cannot be just to present before an applicant or respondent a tribunal which is devoid of the human empathy or emotion by which we define independent thought. Whilst Article 6 would arguably be enhanced by the efficiency of artificial proceedings, the pure logic utilised could not effectively compare to the compounded essence of human reasoning. To introduce such a change would be a rejection of the humanity inextricably linked with the Law, which could disadvantage the innocent across a variety of contexts.

This also connects to the fourth principle of the Ethical Charter: ‘the principle of transparency, impartiality and fairness’ (CEPEJ, 2018, p. 7). As the complexity of independent human thought cannot be replicated by AI, it cannot be as ‘fair’ as a human judge with the ability to consider context outside of logical reasoning. Brian Doyle, President of Employment Tribunals in England and Wales, comments that ‘the best judges bring empathy’ and ‘understand how a person is reacting in a process that is alien to most’ (Law in Action Podcast, 2020). Therefore, in order to determine a legal dispute fairly and ethically, the human method is essential.

That isn’t to say that the human method is infallible. The process of determining legal disputes can often be tainted by prejudice and personal opinion. For example, in ‘Clash of Norms: Judicial Leniency on Defendant Birthdays’, Chen and Arnaud (2020) suggest that a judge’s sentencing may be influenced by something as arbitrary as the defendant’s birthday. Despite this, completely replacing imperfect human contribution with artificial means is not the answer. It is instead possible to find a balance that does not dismiss the ‘positive aspects of humanity’ (Rasnačs, 2018) from the judiciary.

Rasnačs ended his speech with the comment: ‘I would like to see AI as the youngest brother and not the oldest’ (Rasnačs, 2018). Rather than replacing the humanity within the legal system, AI can act as an assistant, meaning that the experience of living is not yet subordinate to pure logic. Daniel Chen suggests that AI could be used to predict judges’ decisions and ‘nudge’ them to improve the fairness of their sentencing, a marriage of sorts between AI and human reasoning (Chen, A., 2019). This is what we need: maintaining the humanity, and slight irrationality, of the human method, whilst utilising the logic of AI to improve the accuracy of dispute resolution.
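To make the ‘youngest brother’ role concrete, consider a minimal sketch of the nudge idea. This is not Daniel Chen’s actual system: the data are synthetic, and every feature name, rate and threshold below is invented purely for illustration. The point is only the shape of the safeguard: if a simple model can predict rulings from legally irrelevant details (the hour of the hearing, the defendant’s birthday), the case is flagged for the judge to reconsider.

```python
# Hypothetical sketch of a judicial "nudge", on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Legally irrelevant features (invented): hour of the hearing, and
# whether it falls on the defendant's birthday (cf. Chen and Arnaud, 2020).
hour = rng.integers(9, 17, size=n)
birthday = rng.integers(0, 2, size=n)

# Synthetic rulings that, for illustration only, partly depend on them.
p_lenient = 0.25 + 0.45 * birthday - 0.015 * (hour - 9)
lenient = rng.random(n) < p_lenient

X = np.column_stack([hour, birthday])
X_train, X_test, y_train, y_test = train_test_split(X, lenient, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))

# If extraneous features alone beat the base rate by a clear margin,
# prompt the judge to take a second look; the human still decides.
base_rate = max(y_train.mean(), 1 - y_train.mean())
if accuracy > base_rate + 0.05:  # hypothetical threshold
    print(f"Nudge: rulings are {accuracy:.0%} predictable from extraneous features.")
```

Crucially, the output here is a prompt, not a verdict: the model never decides anything, it only warns the human who does.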

It is also the faults that the ‘negative aspects of humanity’ (Rasnačs, 2018) have introduced into legal data that limit the use of AI in the judiciary. From Harper Lee’s ‘To Kill a Mockingbird’ to the Stephen Lawrence case, it is evident that the legal system is deeply flawed, an argument often reflected in the crime statistics. At the CEPEJ conference, Giuseppe Contissa explained that AI functions by observing patterns within ‘Big Data’ sets (Contissa, 2018). The ‘Big Data’ for the representation of Black and minority ethnic (BME) groups in the English prison system demonstrates concerning disproportionality. For example, 51% of the men in young offender institutions identify as being from a BME background (Grierson, 2019). This overrepresentation is often viewed as a consequence of institutional racism and targeted policing, with Isaac and Lum (2016) suggesting that ‘drug crime is everywhere, but the police only find it where they’re looking’ (Howarth, 2018).
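The mechanics of this problem can be shown in a few lines of code. What follows is a deliberately simplified sketch on synthetic data, not a model of any real justice system: the group labels, offending rates and policing rates are all invented. It shows that when two groups offend at an identical rate but one is policed more heavily, a model trained on the recorded outcomes assigns that group a higher ‘risk’, faithfully learning the bias in the records rather than the truth about the world.

```python
# Hypothetical sketch: a pattern-learner reproduces bias baked into its data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)    # 0 = majority, 1 = minority (invented labels)
offended = rng.random(n) < 0.10       # true offending rate: identical for both groups

# Biased recording: offences by group 1 are far more likely to be observed,
# so the *recorded* data show a disproportionality the real world lacks.
detection_rate = np.where(group == 1, 0.9, 0.3)
recorded = offended & (rng.random(n) < detection_rate)

# A model trained on the records learns the skew, not the equal true rates.
model = LogisticRegression().fit(group.reshape(-1, 1), recorded)
risk = model.predict_proba(np.array([[0], [1]]))[:, 1]
print(f"Learned 'risk': group 0 = {risk[0]:.1%}, group 1 = {risk[1]:.1%}")
```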

The prejudice of the Criminal Justice System is embedded in the crime statistics in the form of wrongful convictions and overrepresentation that cannot be extracted. The second principle of the Ethical Charter is ‘the principle of non-discrimination’ (CEPEJ, 2018, p. 7). In order to make the use of AI feasible, crime statistics that do not reflect the biases within policing and the judiciary would have to be produced, a practical impossibility at this moment in time. Perhaps the use of AI could work, were it not for humanity’s discriminatory indiscretions throughout history. In reality, AI would express an unintentional bias too strong to be compatible with ethical standards.

This issue is worsened by the dark figure of crime: ‘a mass of unknown and unrecorded offences’ (Oxford Dictionary of Sociology, 2015). The dark figure often includes more sensitive incidents, such as domestic abuse, which Professor Enrique Gracia (2004) deems the ‘iceberg’ of domestic violence. If AI were used to determine all legal disputes, the data it would be fed would be inaccurate: overwhelmed by biased statistics and missing the figures that are arguably the most necessary. In the name of improving the ‘essential’ logic of the Law, a blind and inherently biased tribunal would be created, one which would never have the opportunity to distance itself from ‘Big Data’ filled with inaccuracies. Whereas a human judge has the ability to understand wider context, even if they sometimes elect not to use it, AI is entirely dependent on general statistics. Can the pain of an individual domestic abuse victim truly be conveyed through such data?

Of course not. Without understanding the emotion and wider context of such cases, courts cannot be the ‘great levellers’ that Atticus Finch declares them to be (Lee, 2010). Moreover, the concept of introducing AI into the mediation of legal disputes suggests that society has forgotten the importance of human interaction. In the legal sector, such a mistake cannot be made without grave consequences for those whom the system seeks to protect. It must be understood that the legal system is only as advanced as its weakest, most discriminatory areas, meaning that the rectification of human error should be prioritised before another element blighted by discrimination is introduced to the judgement process.

Additionally, the practical implementation of AI as a method of determining legal disputes raises several issues. The final principle of the Ethical Charter is ‘the principle of being under user control’ (CEPEJ, 2018, p. 7). This requires less explanation than its sibling principles: it simply suggests that humanity is not prepared to sacrifice its control over AI in the judiciary. It reflects an inherent human distrust of AI, as it provides for several occasions on which the artificial judgement process could be subject to human intervention. By stating that ‘professionals in the justice system, should, at any moment be able to review the judicial decisions’ (CEPEJ, 2018, p. 12), the Charter asserts that, despite the success of Smartsettle One, humanity is not yet ready to remove its influence from the determination of legal disputes.

This is also evident outside of the legal profession, with 68% of a sample cohort stating that they would trust a human more than AI to administer a bank loan (PEGA, 2019, p. 4). The fact that human legal professionals would still be necessary to monitor the decisions made by AI eliminates its potential to improve efficiency, and thus removes its purpose. Therefore, not only would the use of AI to determine legal disputes be unethical in theory; in practice it could only function within a perfect and orderly society, which ours most certainly is not. Moreover, an institution as essential as the legal system must be under the control of a trusted and respected entity, which AI, evidently, is not.

It is clear that the use of AI to determine legal disputes would invite more unethical practice into the judiciary and would subvert human justice. At the CEPEJ conference, Stéphane Leyenberger stated that ‘it is no longer a question of whether we want or don’t want AI’ (Leyenberger, 2018). This is entirely true; technology is advancing rapidly. However, we do have a choice in how it is used. The preservation of humanity within the legal system must not be overlooked. It is not yet time for the complete destruction of centuries’ worth of human effort and compassion within the Law, or of the high standards to which those within the legal profession hold themselves. So no, of course not. Artificial means such as AI should not be used to determine legal disputes in place of the human method and all that has created it.


Bibliography

Barnett, J. and Treleaven, P. (2018) ‘Algorithmic Dispute Resolution – The Automation of Professional Dispute Resolution Using AI and Blockchain Technologies’. The Computer Journal, 61 (3), pp. 399-408 [Online]. Available at: https://doi.org/10.1093/comjnl/bxx103 (Accessed: 12th March 2020)

Beioley, K. (2019) Robots and AI threaten to mediate disputes better than lawyers [Online]. Available at: https://www.ft.com/content/187525d2-9e6e-11e9-9c06-a4640c9feebb (Accessed: 15th March 2020)

Chen, A. (2019) ‘How artificial intelligence can help us make judges less biased’, The Verge, 17th January [Online]. Available at: https://www.theverge.com/2019/1/17/18186674/daniel-chen-machine-learning-rule-of-law-economics-psychology-judicial-system-policy (Accessed: 20th March 2020)

Chen, D.L. and Arnaud, P. (2020) Clash of Norms: Judicial Leniency on Defendant Birthdays [Online]. Available at: https://dx.doi.org/10.2139/ssrn.3203624 (Accessed: 20th March 2020)

Cohen, M.R. (1916) ‘The Place of Logic in the Law’. Harvard Law Review, 29 (6), pp. 622-639 [Online]. Available at: https://www.jstor.org/stable/1326498 (Accessed: 15th March 2020)

Council of Europe. (2010) European Convention on Human Rights [Online]. Available at: https://www.echr.coe.int/Documents/Convention_ENG.pdf (Accessed: 12th March 2020)

Courts and Tribunals Judiciary. (Copyright 2020) History of the Judiciary [Online]. Available at: https://www.judiciary.uk/about-the-judiciary/history-of-the-judiciary/ (Accessed: 15th March 2020)

Editors of Encyclopaedia Britannica. (Copyright 2020) Greek Law [Online]. Available at: https://www.britannica.com/topic/Scottish-law (Accessed: 14th March 2020)

European Commission for the Efficiency of Justice [CEPEJ], Council of Europe. (Copyright 2020) Council of Europe adopts first Ethical Charter on the use of Artificial Intelligence in judicial systems [Online]. Available at: https://www.coe.int/en/web/cepej/cepej-european-ethical-charter-on-the-use-of-artificial-intelligence-ai-in-judicial-systems-and-their-environment (Accessed: 11th March 2020)

European Commission for the Efficiency of Justice [CEPEJ], Council of Europe. (2018) European Ethical Charter for the Use of Artificial Intelligence in Judicial Systems and their Environment [Online]. Available at: https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c (Accessed: 11th March 2020)

European Commission for the Efficiency of Justice [CEPEJ], Council of Europe (includes Rasnačs, Contissa and Leyenberger). (2018) Justice of the future: predictive justice and artificial intelligence [Online]. Available at: https://www.coe.int/en/web/cepej/justice-of-the-future-predictive-justice-and-artificial-intelligence (Accessed: 11th March 2020)

European Court of Human Rights [ECtHR], Council of Europe. (2020) Rules of Court [Online]. Available at: https://www.echr.coe.int/Documents/Rules_Court_ENG.pdf (Accessed: 15th March 2020)

Gomes, P.G., Sampaio, E.F. and Seixas, J.J. (2019) Artificial Intelligence and the Judicial Ruling [Online]. Available at: http://www.ejtn.eu/PageFiles/17916/TEAM%20PORTUGAL%20I%20TH%202019%20D.pdf (Accessed: 12th March 2020)

Gracia, E. (2004) ‘Unreported cases of domestic violence against women: towards an epidemiology of social silence, tolerance, and inhibition’. Journal of Epidemiology & Community Health, 58 (7), pp. 536-537 [Online]. Available at: http://dx.doi.org/10.1136/jech.2003.019604 (Accessed: 19th March 2020)

Gregoire, C. (2014) ‘A Field Guide to Anti-Technology Movements Past and Present’, Huffington Post, 17th January [Online]. Available at: https://www.huffpost.com/entry/life-without-technology-t_n_4561571 (Accessed: 15th March 2020)

Grierson, J. (2019) ‘More than half of young people in jail are of a BME background’, The Guardian, 29th January [Online]. Available at: https://www.theguardian.com/society/2019/jan/29/more-than-half-young-people-jail-are-of-bme-background (Accessed: 20th March 2020)

Harris, D.J., O’Boyle, M. and Warbrick, C. (1995) Law of the European Convention on Human Rights. London: Butterworths.

Howarth, E. (2018) Overrepresentation in Criminal Justice Systems [Online]. Available at: https://blogs.lse.ac.uk/lseupr/2018/01/25/overrepresentation-in-criminal-justice-systems/ (Accessed: 20th March 2020)

IBM Research. (Copyright 2020) Many AI systems are trained using biased data [Online]. Available at: https://www.research.ibm.com/5-in-5/ai-and-bias/ (Accessed: 20th March 2020)

International Monetary Fund. (2019) Report for Selected Countries and Subjects [Online]. Available at: https://www.imf.org/external/pubs/ft/weo/2019/02/weodata/index.aspx (Accessed: 16th March 2020)

Law in Action, British Broadcasting Corporation [BBC]. (2020) Workplace Law [Online]. Available at: https://www.bbc.co.uk/sounds/play/m000gbjs (Accessed: 25th March 2020)

Lee, H. (2010) To Kill a Mockingbird. 50th Edition. London: Arrow Books.

Long, C. (2019) What is wrong with bribery and how, if at all, should we deal with it? [Online]. Available at: https://www.trin.cam.ac.uk/wp-content/uploads/UK1-Long-Christopher-1.pdf (Accessed: 5th March 2020)

Maia, A. (2017) Elementary My Dear Watson! [Online]. Available at: http://mediationblog.kluwerarbitration.com/2017/08/08/elementary-dear-watson/ (Accessed: 10th March 2020)

Mania, K. (2015) ‘Online dispute resolution: The future of justice’. International Comparative Jurisprudence, 1 (1), pp. 76-86 [Online]. Available at: https://doi.org/10.1016/j.icj.2015.10.006 (Accessed: 12th March 2020)

PEGA. (2019) AI and Empathy [Online]. Available at: https://www.pega.com/system/files/resources/2019-11/pega-ai-empathy-study.pdf (Accessed: 11th March 2020)

Rogers, A. and South, J. (2018) What Might Artificial Intelligence Mean For Alternative Dispute Resolution? [Online]. Available at: http://mediationblog.kluwerarbitration.com/2018/08/30/might-artificial-intelligence-mean-alternative-dispute-resolution/ (Accessed: 10th March 2020)

Scott, J. (2015) Oxford Dictionary of Sociology [Online]. Available at: https://www.oxfordreference.com/view/10.1093/acref/9780199683581.001.0001/acref-9780199683581-e-2530 (Accessed: 1st April 2020)

Teng, J. (2019) What is wrong with bribery and how, if at all, should we deal with it? [Online]. Available at: https://www.trin.cam.ac.uk/wp-content/uploads/INT1-Teng-Jonathan-1.pdf (Accessed: 5th March 2020)

The Law Society. (Copyright 2020) What is Lawtech? [Online]. Available at: https://www.lawsociety.org.uk/support-services/lawtech/what-is-lawtech/ (Accessed: 11th March 2020)
