Impacts #7

30 Sep 2024

Law and AI

The rapid rise of Artificial Intelligence (AI) has become one of the defining technological developments of the twenty-first century, with the potential to profoundly impact all sectors of society, including the legal sector. The AI research conducted across the Max Planck Law network includes the following broad areas of inquiry: the transparency and fairness of AI in judicial reasoning and decision-making, the ethical implications and accountability of AI systems, the risks posed by biased or inaccurate AI-generated information, national and international regulatory approaches to AI, and the broader impact of AI on the rule of law and democracy. This research aims to critically evaluate AI’s potential and limitations, ensuring its use is both safe and equitable across various legal contexts, while addressing the ethical and legal challenges it presents.

Fellow Group

The Max Planck Fellow Group ‘Algorithmic Profiling and Automated Decision-Making in Criminal Justice’, jointly led by Institute Director Professor Dr Tatjana Hörnle and Professor Dr Sabine Gless, is one of the most focused research projects on law and AI. The Group’s current work tackles issues arising from the use of automated algorithms in criminal law, including the ethical implications of predicting criminal behaviour, AI’s role in reducing judicial discretion in sentencing, and the question of whether AI can be held criminally responsible, independent of its human programmers. The wide range of topics explored by the Group places its work in direct conversation with the leading literature on law and AI.

Book Publication

Professor Gless, together with Professor Whalen-Bridge (National University of Singapore), recently published a book titled Human-Robot Interaction in Law and Its Narratives: Legal Blame, Responsibility, and Criminal Law with Cambridge University Press (open access). The book addresses how legal systems are unprepared for the challenges posed by robots, analysing issues in substantive law, procedural law, and legal narratives. It explores topics such as criminal liability in human-robot interactions, the possibility of robots serving as witnesses in court, and evidentiary concerns related to robot-generated data. The book also examines privacy, reliability, and the need for new legal frameworks to better understand and regulate human-robot interactions.

Lawcast

In episodes six and seven of the Max Planck Lawcast, titled ‘Artificial Intelligence in Crime Control and Criminal Justice’ (Spotify | Apple | SSRN), Linus Ensel, a member of the Max Planck Fellow Group mentioned above, along with Christian Thönnes and guests, explored the definitions, limits, and possibilities of ‘predictive justice’ and ‘predictive policing’. Predictive justice uses AI to assess flight risk and the likelihood of recidivism and to guide sentencing by analysing past jurisprudence, aiming for fairness through consistent reasoning. Predictive policing uses algorithms to analyse data in order to prevent crime. The Lawcast raises concerns about the difficulty of creating transparent, unbiased AI and warns that such systems could expand state surveillance and coercion, undermine the rule of law, reinforce discrimination, and erode democracy.

Blaming Algorithms?

Regarding fairness and transparency, Nina Grgić-Hlača, Research Fellow and one of the organizers of the Max Planck Law | Tech | Society Initiative, has devoted a series of papers to algorithmic fairness and the interaction between algorithms and the law. In a paper co-authored with Gabriel Lima (Max Planck Law | Tech | Society) and others, titled ‘The Conflict Between Explainable and Accountable Decision-Making Algorithms’, the authors critique the use of Explainable AI (XAI) in high-stakes decisions, such as health care enrolment and hiring. They question whether XAI can effectively solve the responsibility issues posed by autonomous AI systems, arguing that post-hoc explanations may obscure developers’ accountability by positioning algorithms as blameworthy agents. They also warn that XAI could lead to the misattribution of responsibility to vulnerable stakeholders, such as patients or users, owing to a false perception of control over these systems. The paper concludes with recommendations for addressing this tension between explainability and accountability in AI-driven decision-making.

UN Resolution

On the subject of regulation, Annika Knauer (Research Fellow) has written an EJIL:Talk! blog article, ‘The First United Nations General Assembly Resolution on Artificial Intelligence’, which closely examines what is now Resolution 78/265 (21 March 2024). The resolution seeks to ensure that AI’s global use is consistent with human rights, but diverging interests among states complicate international regulation. The US focuses on fostering AI innovation, while the EU emphasizes data privacy and regulation. African states struggle with AI accessibility, as biased data from the Global North often renders the technology ineffective for local needs. Resolution 78/265, though non-binding, calls for cooperation to close the digital divide and promotes AI use aligned with human rights law. The resolution notably stresses inclusive access to AI for developing countries and marks a symbolic but important first step towards balancing AI’s benefits with global ethical and legal standards.

Position Statement

Another important take on AI regulation was provided by Dr Daria Kim, Professor Dr Josef Drexl, Professor Dr Dr hc Reto Hilty, and Peter R Slowinski, who, in their position statement ‘Artificial Intelligence Systems as Inventors?’, critically assess the 2021 decision by the Federal Court of Australia which recognized that an AI system, DABUS, could be an inventor under Australian patent law. The authors argue that this ruling was based on unverified assumptions about AI’s technical capabilities and overlooked key legal issues, such as the consequences of granting inventorship to entities without legal capacity. They stress that recognizing AI systems as inventors requires careful analysis of both factual and legal questions, which the decision failed to address. Their analysis offers insights relevant to any jurisdiction grappling with AI inventorship under patent law.

Emergence in AI

More recently, in her Perspectives article ‘What is Emerging in Artificial Intelligence Systems’, Dr Kim examines the concept of emergence in AI and how it is often misunderstood or mystified. She critiques the tendency to anthropomorphize AI by attributing human-like qualities such as intelligence, creativity, and agency to machine learning models. She argues that these perceived emergent behaviours in AI, while intriguing, are in fact only weakly emergent and fully explainable by the system’s underlying processes, such as neural networks. Rather than possessing autonomy or agency, AI functions are the result of interactions between lower-level components and mathematical rules. Dr Kim stresses the need to demystify these functions and to focus on understanding the technical foundations of AI, which is crucial for addressing its legal, ethical, and societal implications.

Legal History

Turning to legal history, Dr Anselm Küsters (Affiliate Researcher) has an article in ‘Die Funzel’ titled ‘Artificial Intelligence, Law and Legal History’, which is based on a lecture he regularly gives at the MPI-Frankfurt summer school. It explores AI’s growing influence on legal scholarship, particularly in generating and interpreting historical documents. Küsters emphasizes the need for legal scholars and historians to adapt their methods to ensure authenticity in the age of AI.

More recently, Dr Rômulo da Silva Ehalt, in a blog post for ‘Legal History Insights’, highlights the challenges posed by AI-generated fake images and documents. As AI becomes more capable of producing realistic historical materials, the risk of disinformation increases. To address this, Dr da Silva Ehalt advocates stronger verification processes, the digitization of archives, and enhanced training for historians in document authentication.

Concluding Remarks

The Max Planck Law network’s research highlights the diverse challenges and opportunities that AI presents to the legal field. From transparency and fairness in decision-making to privacy, bias, and regulatory concerns, AI is reshaping legal practice. This ongoing research contributes to a safer and more equitable integration of AI into the legal landscape.
