
WELCOME
Your guide to the ethical, legal and regulatory implications of AI
Blog Posts
- AI's Role in Law: Drawing Clear Boundaries for Democratic Society
This article examines the relationship between legal work and artificial intelligence from the perspective of a legal professional who works in AI governance. It explores what impact generative AI tools specifically have on the legal profession and what part, if any, of judicial work should be done by machines. It focuses on the different aspects of legal work that GenAI could influence and asks what limitations should be imposed on AI in law. I argue that law should remain an exclusively human domain when it comes to core legal functions: drafting, enacting, interpretation, and judgment. While legal professionals may leverage AI tools, their use must be carefully bounded.

Technical Limitations of AI in Legal Understanding

LLMs fundamentally do not understand the context and the underlying social, political, and economic reasons behind laws, judgments, and precedents. Laws and regulations are created on the basis of political platforms: minimum wages, employee protection, and data privacy are examples of "social state" policies grounded in specific values. While LLMs are trained on vast amounts of data, they cannot comprehend how policies are formed among people. Humans can make connections between seemingly unrelated topics in a way machines cannot. AI relies mostly on statistical analysis of data, whereas humans draw on a complex, contextual, "natural" understanding of how society functions when making decisions or recommendations. This human ability is particularly relevant in law, because legal work involves a detailed understanding of language, politics, philosophy, ethics, and many other spheres of life. Good lawyers have a reasonable chance of getting the consequences of complex situations right, while AI is nowhere near capable enough to be trusted with such judgments. My own experience with the major GenAI tools is that they resemble a rookie paralegal who knows everything but cannot connect that knowledge to real-world issues. That is a limitation that does not look likely to be solved any time soon.

Verification of AI's legal reasoning presents another significant challenge. Legal questions may have different answers in the same jurisdiction depending on small changes in the facts, let alone across multiple jurisdictions. Recent cases demonstrate these limitations. For instance, ChatGPT's poor performance in a straightforward trademark matter before a Dutch court shows that even basic legal reasoning remains beyond AI's current capabilities: it was inconsistent in its reasoning and could not interpret the law anywhere near well enough for a real-life situation before a judge. Source: https://www.scottishlegal.com/articles/dr-barry-scannell-chatgpt-lawyer-falls-short-of-success-in-trademark-case

Democratic and Accountability Framework

The transparency and accountability implications of AI in law are profound. While humans make mistakes in creating laws, they can be held accountable both politically and legally. Democratic society enables discussion of important topics and exposes the motives behind politicians' decisions. As AI systems become more complex, their decision-making becomes increasingly opaque. Since machines cannot (yet) be held accountable in the way humans can, delegating important decision-making to artificial intelligence would undermine democratic principles.
Furthermore, considering how a few enormous companies have taken control of AI technology, allowing AI into core legal functions would effectively place our legal system in their hands, a shift that threatens democratic principles and risks creating an authoritarian framework. Especially now, seeing how X and Meta are approaching the issue of "freedom of speech", it becomes abundantly clear that decisions critical for society are not made by representatives of the people; they are made by private actors with no regard for democracy or the rule of law.

The Human Nature of Law

Law, as part of the humanities, is fundamentally about understanding the human condition, something only biological beings can truly comprehend. Legal professionals develop crucial skills and insights through their research and practice that no AI system can replicate. Just as calculator dependence has diminished arithmetic skills, over-reliance on AI in legal work would be detrimental to future generations of lawyers, and with far more serious consequences for society. Legal work is not only what the movies show, where a brilliant lawyer argues a case spectacularly in front of a judge. The majority of legal work is done outside of courtrooms, and it is important to understand that it is a crucial part of a democratic society. Outsourcing critical legal decisions to a non-human entity means outsourcing one of the most critical pillars of democracy, namely people's rights and freedoms, to machines that, as established above, have no real understanding of the world. They have no concept of fairness, equity, or equality coded into them. Furthermore, leaving important decisions to AI systems means we are effectively handing over the judicial system to the companies creating those systems, opening a huge hole in our democratic oversight. AI systems may produce biased or unclear answers, owing to their reliance on training data and the (very) possible lack of explainability in their decision-making.

Defining Clear Boundaries for AI in Legal Work

The role of AI in legal work should follow a clear hierarchical structure; a minimal code sketch of this tiering follows the lists below.

Pure Administrative Tasks (Full AI Utilization Permitted):
- Document filing and organization
- Basic text editing and formatting
- Calendar management
- Simple template filling
- Repository maintenance

Hybrid Tasks (AI as Transparent Assistant):
- Legal research combining search and preliminary analysis
- Initial contract review for standard terms
- Template creation for complex legal documents
- Case law database management

For these tasks, AI must provide clear documentation of its methodology, sources, and reasoning. Legal professionals retain oversight and decision-making authority, with AI serving as a tool for initial analysis rather than final decisions.

Pure Legal Work (Exclusively Human Domain):
- Legal strategy development
- Final interpretation of laws and precedents
- Judicial decision-making
- Complex negotiation
- Legislative drafting
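A minimal sketch of how such a tiered policy could be encoded, assuming a hypothetical three-level permission model; the task names and data structure are illustrative only, not an existing standard or an implementation proposed by this article:

```python
from enum import Enum

class AIUse(Enum):
    FULL = "full AI utilization permitted"
    ASSIST = "AI as transparent assistant; human keeps authority"
    NONE = "exclusively human domain"

# Illustrative policy table mirroring the three tiers above;
# the task names are hypothetical examples, not a formal taxonomy.
POLICY = {
    "document_filing": AIUse.FULL,
    "calendar_management": AIUse.FULL,
    "legal_research": AIUse.ASSIST,
    "initial_contract_review": AIUse.ASSIST,
    "judicial_decision": AIUse.NONE,
    "legislative_drafting": AIUse.NONE,
}

def ai_permission(task: str) -> AIUse:
    # Unknown tasks default to the most restrictive tier.
    return POLICY.get(task, AIUse.NONE)

for task in ("calendar_management", "legal_research", "judicial_decision"):
    print(f"{task}: {ai_permission(task).value}")
```

Defaulting unknown tasks to the human-only tier keeps the failure mode conservative, which matches the precautionary stance argued here.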
These boundaries would help legal professionals take advantage of the new technologies and use them to become better at their work, while vital democratic and judicial principles are still upheld and both the accountability of decision-making and algorithmic bias are addressed. And on the topic of bias: while both humans and AI systems exhibit biases, human biases can be addressed through training, conscious effort, and professional ethics frameworks. Human decision-making processes can be scrutinized and challenged through established legal and professional channels, something not possible with AI systems. Moreover, automation bias could lead lawyers to rely too heavily on AI suggestions, potentially missing novel legal arguments or interpretations that might better serve justice.

Conclusion

The debate around AI in legal systems isn't just about technological capabilities; it's about the future of democratic governance itself. The limitations of AI in legal contexts extend far beyond technical constraints to touch the very foundations of how we create and maintain just societies. If we allow AI to penetrate core legal functions, we risk creating a dangerous precedent where complex human decisions are delegated to unaccountable systems. Instead of asking how we can integrate AI into legal decision-making, we should be asking how we can leverage AI to support and enhance human legal expertise while maintaining clear boundaries. This means developing explicit frameworks that define where AI assistance ends and human judgment must prevail. The legal profession has an opportunity, and a responsibility, to lead by example in showing how emerging technologies can be embraced thoughtfully without compromising the essentially human nature of professional judgment. The future of law doesn't lie in AI replacement, but in human professionals who understand both the potential and the limitations of AI tools, and who have the wisdom to keep them in their proper place.
- Social Scoring 2
In an age where artificial intelligence and data-driven governance shape our daily lives, social scoring systems have emerged as one of the most debated innovations. These systems use algorithms to assign individuals scores based on their behavior, aiming to incentivize actions that align with societal goals such as trustworthiness and cooperation. While proponents highlight their potential to foster community trust and economic development, critics caution against their capacity to erode privacy, amplify inequalities, and institutionalize surveillance.

Social scoring systems rely on vast amounts of behavioral data, which are processed by algorithms to generate scores. These scores, often used to assess trustworthiness, influence access to resources, opportunities, and social standing. Experiments have shown that such systems can boost trust and wealth generation within communities, particularly when their mechanisms are transparent. However, transparency has its downsides. Individuals with lower scores frequently face ostracization and reduced opportunities, exposing the discriminatory potential of these systems. Transparency, while fostering procedural fairness, paradoxically enables reputational discrimination by giving others the tools to rationalize punitive actions against low-scoring individuals.

The ethical dilemmas of social scoring extend far beyond transparency. These systems operate in a delicate space where privacy and agency are often compromised. Their reliance on extensive data collection creates a thin line between governance and surveillance. In jurisdictions with weak regulatory frameworks, this surveillance risks becoming a normalized aspect of everyday life, leading to diminished public trust and stifled dissent. Moreover, the opaque decision-making processes in many social scoring implementations exacerbate the problem. When individuals cannot contest or understand how their scores are calculated, the lack of accountability undermines fairness and reinforces systemic biases, particularly against marginalized groups.

Efforts to regulate such systems are underway, most notably with the European Union’s AI Act. This legislative framework explicitly prohibits social scoring practices that result in unjustified disparities, classifying them as posing “unacceptable risks”. The Act introduces the concept of “contextual integrity”, requiring scores to be used strictly within the domains for which they were generated; a toy sketch of that requirement follows this paragraph. However, while these measures are a step in the right direction, enforcing them remains a significant challenge. Operational gaps persist, leaving room for exploitation and misuse, especially when private entities employ these systems for purposes beyond their original intent.
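To make the contextual-integrity requirement concrete, here is a minimal sketch, assuming a hypothetical score record tagged with the domain it was generated for; the names and the enforcement style are illustrative, not drawn from the Act's text:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Score:
    subject_id: str
    value: float
    domain: str  # the context the score was generated for

def use_score(score: Score, requested_domain: str) -> float:
    # Allow a score to be read only in its original domain:
    # a toy version of the contextual-integrity idea.
    if requested_domain != score.domain:
        raise PermissionError(
            f"score for {score.subject_id!r} was generated for "
            f"{score.domain!r} and may not be reused in {requested_domain!r}"
        )
    return score.value

# A credit score may inform lending, but not, say, housing decisions.
credit = Score(subject_id="alice", value=0.82, domain="lending")
print(use_score(credit, "lending"))   # OK
# use_score(credit, "housing")        # raises PermissionError
```

Tagging each score with its originating domain turns cross-domain reuse into an explicit, auditable failure rather than a silent default.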
The paradox of transparency in social scoring systems lies in its ability to simultaneously foster and harm. On one hand, transparency increases trust within communities and makes systems appear more legitimate. On the other hand, it also sharpens the social stratification inherent in these mechanisms. Individuals with higher scores are systematically trusted more, while those with lower scores endure social and economic penalties. This dynamic risks perpetuating inequities rather than addressing them.

The societal implications of social scoring extend beyond individual experiences. Communities subjected to transparent scoring systems often show reduced inequality and increased collective wealth. However, these aggregate benefits do not erase the harm experienced by low-scoring individuals, who often find themselves locked out of opportunities. The broader societal stratification created by these systems mirrors historical patterns of exclusion, now amplified by the precision of algorithmic governance.

As these systems become more prevalent, their regulation must evolve to ensure fairness and inclusivity. While the EU AI Act provides a solid foundation, global frameworks are essential to address disparities in regulation and enforcement. Moreover, the design of social scoring systems must prioritize protecting the rights of vulnerable populations, ensuring that the potential benefits do not come at the cost of individual dignity.

The development and implementation of social scoring systems bring us to a critical crossroads. The question is not merely whether we can build such systems but whether we should, and under what conditions. Striking the right balance between innovation and ethics will determine whether social scoring systems become tools for societal empowerment or mechanisms of control. As we shape these technologies, we must ensure they align with the principles of fairness, accountability, and human rights, keeping the well-being of all individuals at their core.
- Follow up to Petko Getov's Bias article.
In one of his articles, Petko Getov provides an overview of what ethics in AI is. Linked here: To follow up on his point, I found this interesting example of how training data sets can solidify biases that are very difficult to overcome. I will use two examples: watches and people writing. As you can see below, these are AI-generated watches. All of them show 10:10, although every prompt requested that 12:02 be displayed. Why is that? Roughly 99% of the watches on retail websites are shown with the time set to 10:10, as it is the most presentable arrangement and displays the beauty of the design. So once an image generator has been trained, a very strong association has been established: a watch has to display 10:10. How can we solve this problem? To reiterate the point made in the article linked above, human-centricity: there must be a human in the loop, the impact of the system on the end user must be prioritized, and the system must generate value for humans. Another example is a person writing: generated images almost always show them writing with their right hand. Again, most images of people writing show right-handed people, because that is the statistical distribution of humans; most are right-handed. Now, these are obvious problems, but the same effect extrapolates to various fields of AI application: credit scoring, social scoring, and so on. Bias is part of human nature, and it is reflected in the web, because the web is a mirror of our society. A toy sketch of this data-skew effect follows below. AI is amazing, but it needs to serve us for good!
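To illustrate the mechanism, here is a toy sketch with made-up proportions (the 99/0.5/0.5 split below is illustrative, not measured data): a generator that simply follows its training distribution keeps producing the majority pattern no matter what the prompt asks for.

```python
import random
from collections import Counter

random.seed(0)

# Toy training set: 99% of watch "images" are labeled 10:10,
# mirroring the product-photo convention described above.
training_times = ["10:10"] * 990 + ["12:02"] * 5 + ["03:45"] * 5

def naive_generator(prompt_time: str, data: list[str]) -> str:
    # Deliberately ignores the prompt and samples the training
    # distribution; this is the failure mode the watch example shows.
    return random.choice(data)

outputs = Counter(naive_generator("12:02", training_times) for _ in range(1000))
print(outputs.most_common(3))  # "10:10" dominates despite the 12:02 prompt
```

Real image generators are far more complex than this, but the pull toward over-represented training patterns is the same basic effect.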
Other Pages
- About | EthicsandlawinAI
Our Team. We are Petko Getov and Yordan Vasilev, AI ethics and governance professionals. Yordan Vasilev: Tech Legal Professional. Petko Getov: Tech and AI Lawyer.