
AI's Role in Law: Drawing Clear Boundaries for Democratic Society

Writer: Petko Getov



This article examines the relationship between legal work and artificial intelligence from the perspective of a legal professional working in AI governance. It explores what impact generative AI tools have on the legal profession and what part, if any, of judicial work should be done by machines. It looks at the different aspects of legal work that GenAI could influence and asks what limitations should be imposed on AI in law. I argue that law should remain an exclusively human domain when it comes to core legal functions - drafting, enacting, interpretation, and judgment. While legal professionals may leverage AI tools, their use must be carefully bounded.

Technical Limitations of AI in Legal Understanding

LLMs fundamentally do not understand the context and the underlying social, political, and economic reasons behind laws, judgments, and precedents. Laws and regulations are created on the basis of political platforms - minimum wages, employee protection, and data privacy are examples of "social state" policies grounded in specific values. While LLMs are trained on vast amounts of data, they cannot comprehend how policies are formed and take hold among people. Humans can make connections between seemingly unrelated topics in a way machines cannot. AI relies mostly on statistical analysis of data, whereas humans draw upon a complex, contextual, "natural" understanding of how society functions to make their decisions or recommendations.

 

This human ability is particularly relevant in law, because legal work requires a detailed understanding of language, politics, philosophy, ethics, and many other spheres of life. Good lawyers have a reasonable chance of getting the consequences of complex situations right, while AI is nowhere near capable enough to be trusted with such judgements. My own experience with the major GenAI tools is that they behave like a rookie paralegal who knows everything but cannot connect any of that knowledge to real-world issues. And that is a limitation that does not look like it will be solved any time soon.

 

Verification of AI's legal reasoning presents another significant challenge. Legal questions may have different answers within the same jurisdiction depending on small changes in the facts, let alone across multiple jurisdictions. Recent cases demonstrate these limitations - for instance, ChatGPT's poor performance in a straightforward trademark matter shows that even basic legal reasoning remains beyond AI's current capabilities. In a court case in the Netherlands, it proved inconsistent in its reasoning and could not interpret the law in a way that was anywhere near good enough for a real-life situation before a judge. Source: https://www.scottishlegal.com/articles/dr-barry-scannell-chatgpt-lawyer-falls-short-of-success-in-trademark-case

 

Democratic and Accountability Framework

The transparency and accountability implications of AI in law are profound. While humans make mistakes in creating laws, they can be held accountable both politically and legally. Democratic society enables discussions on important topics that expose the motives behind politicians' decisions. As AI systems become more complex, their decision-making becomes increasingly opaque. Since machines cannot (yet) be held accountable in the same way humans can, delegating important decision-making to artificial intelligence would undermine democratic principles.

Furthermore, considering how a few enormous companies have taken control of AI technology, allowing AI into core legal functions would effectively place our legal system in their hands - a shift that threatens democratic principles and risks creating an authoritarian framework. Especially now, seeing how X and Meta approach the issue of "freedom of speech", it becomes abundantly clear that decisions critical for society are not made by representatives of the people; they are made by private actors with no regard for democracy or the rule of law.

 

The Human Nature of Law

Law, as part of the humanities, is fundamentally about understanding the human condition - something only biological beings can truly comprehend. Legal professionals develop crucial skills and insights through their research and practice that no AI system can replicate. Just as calculator dependence has diminished arithmetic skills, over-reliance on AI in legal work would be detrimental to future generations of lawyers, but with far more serious consequences for society. Legal work is not only like the movies, where a brilliant lawyer argues a case spectacularly in front of a judge. The majority of legal work is done outside of courtrooms, and it is important to understand that it is a crucial part of a democratic society.

Outsourcing critical legal decisions to a non-human entity means outsourcing one of the most critical pillars of democracy - namely people's rights and freedoms - to machines that, as we have established, have no real understanding of the world. They have no concept of fairness, equity, or equality coded into them. Furthermore, leaving important decisions to AI systems means that we are essentially handing over the judicial system to the companies that create those systems, opening a huge hole in our democratic oversight. AI systems may also produce biased or unclear answers, due to their overreliance on training data and the (very) possible lack of explainability in their decision-making.

 

Defining Clear Boundaries for AI in Legal Work

The role of AI in legal work should follow a clear hierarchical structure (a rough sketch of how these tiers might be encoded in practice follows the list):

  1. Pure Administrative Tasks (Full AI Utilization Permitted):

    • Document filing and organization

    • Basic text editing and formatting

    • Calendar management

    • Simple template filling

    • Repository maintenance

  2. Hybrid Tasks (AI as Transparent Assistant):

    • Legal research combining search and preliminary analysis

    • Initial contract review for standard terms

    • Template creation for complex legal documents

    • Case law database management

      For these tasks, AI must provide clear documentation of its methodology, sources, and reasoning. Legal professionals retain oversight and decision-making authority, with AI serving as a tool for initial analysis rather than final decisions.

  3. Pure Legal Work (Exclusively Human Domain):

    • Legal strategy development

    • Final interpretation of laws and precedents

    • Judicial decision-making

    • Complex negotiation

    • Legislative drafting
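
To make these tiers more concrete, here is a minimal, purely illustrative sketch of how a firm's internal tooling might encode such a policy. It is written in Python; the task names, tier labels, and the check_ai_use function are my own assumptions made for illustration, not an existing standard or product.

```python
from enum import Enum


class AIBoundary(Enum):
    """Permission tiers mirroring the hierarchy above (illustrative only)."""
    FULL_AI = "pure administrative task - full AI utilisation permitted"
    ASSISTED = "hybrid task - AI as transparent assistant, human decides"
    HUMAN_ONLY = "pure legal work - exclusively human domain"


# Hypothetical mapping of task types to tiers; a real policy would be far
# more granular and would be maintained by the firm's governance function.
TASK_POLICY = {
    "document_filing": AIBoundary.FULL_AI,
    "calendar_management": AIBoundary.FULL_AI,
    "legal_research": AIBoundary.ASSISTED,
    "contract_review_standard_terms": AIBoundary.ASSISTED,
    "judicial_decision": AIBoundary.HUMAN_ONLY,
    "legislative_drafting": AIBoundary.HUMAN_ONLY,
}


def check_ai_use(task: str, sources_documented: bool = False) -> str:
    """Return guidance on whether AI may be used for a given task."""
    # Anything not explicitly listed defaults to the human-only tier.
    tier = TASK_POLICY.get(task, AIBoundary.HUMAN_ONLY)
    if tier is AIBoundary.HUMAN_ONLY:
        return f"'{task}': AI use not permitted - reserved for human judgment."
    if tier is AIBoundary.ASSISTED and not sources_documented:
        return f"'{task}': rejected - AI must document methodology and sources first."
    return f"'{task}': AI use permitted ({tier.value})."


if __name__ == "__main__":
    print(check_ai_use("document_filing"))
    print(check_ai_use("legal_research"))                        # blocked until documented
    print(check_ai_use("legal_research", sources_documented=True))
    print(check_ai_use("judicial_decision"))                     # always human-only
```

The point of the sketch is the default: anything not explicitly classified falls back to the human-only tier, which mirrors the precautionary stance argued for here.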

 

These boundaries would help legal professionals take advantage of new technologies and use them to become better at their work, while vital democratic and judicial principles are still upheld and accountability for decision-making and algorithmic bias is addressed. And on the topic of bias - while both humans and AI systems exhibit biases, human biases can be addressed through training, conscious effort, and professional ethics frameworks. Human decision-making processes can be scrutinized and challenged through established legal and professional channels - something not possible with AI systems. Moreover, automation bias could lead lawyers to rely too heavily on AI suggestions, potentially missing novel legal arguments or interpretations that might better serve justice.

 

Conclusion

The debate around AI in legal systems isn't just about technological capabilities - it's about the future of democratic governance itself. The limitations of AI in legal contexts extend far beyond technical constraints to touch the very foundations of how we create and maintain just societies. If we allow AI to penetrate core legal functions, we risk creating a dangerous precedent where complex human decisions are delegated to unaccountable systems.

Instead of asking how we can integrate AI into legal decision-making, we should be asking how we can leverage AI to support and enhance human legal expertise while maintaining clear boundaries. This means developing explicit frameworks that define where AI assistance ends and human judgment must prevail. The legal profession has an opportunity - and responsibility - to lead by example in showing how emerging technologies can be embraced thoughtfully without compromising the essentially human nature of professional judgment.

The future of law doesn't lie in AI replacement, but in human professionals who understand both the potential and limitations of AI tools - and who have the wisdom to keep them in their proper place.

