Traditionally, legal research and judicial decision-making have been the work of legally certified, skilled humans. Artificial intelligence is beginning to support and enhance these processes with text analysis tools, case comparison, document review at scale, and case outcome prediction, among other things. Where are the technical and ethical boundaries of machine-enhanced judicial decision-making? And how long until large language models interpret and explain laws and societal norms better than legally certified, skilled humans do?
tl;dr
This paper examines the evolving role of Artificial Intelligence (AI) technology in the field of law, specifically focusing on legal research and decision-making. AI has emerged as a transformative tool in various industries, and the legal profession is no exception. The paper explores the potential benefits of AI technology in legal research, such as enhanced efficiency and comprehensive results. It also highlights the role of AI in document analysis, predictive analytics, and legal decision-making, emphasizing the need for human oversight. However, the paper also acknowledges the challenges and ethical considerations associated with AI implementation, including transparency, bias, and privacy concerns. By understanding these dynamics, the legal profession can leverage AI technology effectively while ensuring responsible and ethical use.
Make sure to read the full paper titled The Role of AI Technology for Legal Research and Decision Making by Md Shahin Kabir and Mohammad Nazmul Alam at https://www.researchgate.net/publication/372790308_The_Role_of_AI_Technology_for_Legal_Research_and_Decision_Making

I want to limit this post to the two most interesting facets of this paper: (1) machine learning as a means to conduct legal research and (2) expert systems to execute judicial decisions.
The first part refers to the umbrella term machine learning, which in the legal profession mostly comes down to predictive or statistical analysis. In other words, ML is a method to ingest vast amounts of legal and regulatory language and to analyze, classify, and label it against a set of signals. For example, think about every law and court decision concerning defamation ever handed down. Train a model on that corpus, then deploy it against incoming text to flag (legally) critical language. Of course, this is a simplified example, but perhaps not as far-fetched as it seems.
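To make that concrete, here is a minimal sketch of the classification step, assuming a labeled corpus of passages tagged "critical" (potentially defamatory language) versus "routine". The tiny inline dataset, labels, and model choice are illustrative assumptions on my part, not something the paper prescribes.

```python
# A minimal sketch: TF-IDF features plus a linear classifier, the simplest
# version of "labeling text against a set of signals". The data is a toy
# stand-in for a real corpus of laws and court decisions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

passages = [
    "The defendant published false statements damaging the plaintiff's reputation.",
    "The article accused the plaintiff of fraud without any factual basis.",
    "The parties agree to renew the lease for a term of twelve months.",
    "Payment is due within thirty days of the invoice date.",
]
labels = ["critical", "critical", "routine", "routine"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(passages, labels)

# Deploy against incoming text and flag passages for human review.
incoming = ["The post falsely claimed the plaintiff had been convicted of theft."]
print(model.predict(incoming))        # e.g. ['critical']
print(model.predict_proba(incoming))  # confidence scores per label
```

A production system would train on millions of documents with far richer features, but the shape of the pipeline (ingest, vectorize, classify, flag) stays the same.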
The second part refers to the creation of decision support systems, which – as far as we understand the authors' intent here – are designed to build on the aforementioned ML work, tailored to the situation at hand and, ideally, executed autonomously. Such systems help humans identify potential legal risks and shorten the time required to review an entire, complex case. If configured and deployed accurately, these decision support systems could become automated ticket systems upholding the rule of law. That is a big if.
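One way to picture such an "automated ticket system" is a thin rule layer on top of the classifier: hard, human-written rules and confidence thresholds decide what gets handled autonomously and what escalates to a person. The rules, thresholds, and routing labels below are hypothetical; the paper does not specify a concrete design.

```python
# A hypothetical decision support layer. Autonomous handling is limited to
# clear-cut, low-risk cases; everything uncertain or high-risk goes to a human.
from dataclasses import dataclass, field

@dataclass
class Assessment:
    case_id: str
    risk_score: float                       # e.g. P("critical") from the ML model
    matched_rules: list = field(default_factory=list)  # explicit rules that fired

def route(a: Assessment) -> str:
    if a.matched_rules:
        # Hard rules always override statistical scores.
        return "escalate-to-human"
    if a.risk_score >= 0.8:
        return "escalate-to-human"
    if a.risk_score <= 0.2:
        return "close-automatically"
    # The murky middle: neither clearly safe nor clearly risky.
    return "queue-for-review"

print(route(Assessment("case-001", 0.91)))                     # escalate-to-human
print(route(Assessment("case-002", 0.05)))                     # close-automatically
print(route(Assessment("case-003", 0.45, ["minor-involved"]))) # escalate-to-human
```

The design choice doing the work here is that autonomy is the exception, not the default: the system only "executes" decisions it can make with high confidence and no rule violations, which is exactly where the "big if" lives.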
One of the challenges for this legal technology is algorithmic hallucination – or, simply put, a rogue response. These appear to occur without warning or noticeable correlation. They are system errors that can magnify identity or cultural biases, raising ethical questions and questions of liability for machine mistakes. It further raises questions of accountability and the longevity of agreed-upon social norms. Will a democratic society allow its norms, judicial review, and decision-making to be delegated to algorithms?
For some reason, this paper is labeled August 2023 when in fact it was first published in 2018. I only discovered this after I started writing. ROSS Intelligence, one of the early AI-powered legal research companies, has been out of business since 2021. Their farewell post “Enough” illustrates another challenging aspect of AI, legal research, and decision-making: access.