Ensuring AI Accountability in Judicial Proceedings: An Actor–Network Theory Perspective
Abstract
Artificial Intelligence (AI) is increasingly incorporated into judicial proceedings, from predictive algorithms for sentencing and risk assessment to AI-powered case-management tools. As AI reshapes the legal landscape, questions of accountability become more pressing. This paper adopts an Actor–Network Theory (ANT) framework to explore the roles that human and non-human actors, such as judges, lawyers, AI systems, and legal institutions, play in establishing accountability in AI-driven legal processes. Through this lens, we examine the dynamics among these actors and the implications of AI's role in legal decision-making. The study identifies key challenges surrounding AI accountability in judicial proceedings, highlights the need for transparent and responsible AI development, and proposes pathways for integrating AI tools ethically and equitably into judicial processes.
Keywords
AI Accountability, Judicial Proceedings, Actor–Network Theory
License
Copyright (c) 2025 Song Yun-ah

This work is licensed under a Creative Commons Attribution 4.0 International License.