
WELCOME
Your guide to the ethical, legal and regulatory implications of AI
- AI without ethics stifles innovation
Innovation can be enhanced by ethics and law

While legal instruments like the EU AI Act, US and Chinese regulations, as well as governance frameworks like the NIST AI Risk Management Framework, focus on managing risks from existing AI systems, they often operate reactively, addressing problems after technologies have been deployed. This article proposes a fundamental shift: integrating ethics into the earliest stages of AI development rather than merely imposing restrictions afterward. By examining historical lessons from pharmaceutical, nuclear, and biotechnology governance, I argue that AI systems developed without foundational ethical frameworks cannot truly benefit society. More importantly, I propose concrete mechanisms to transform governance from a protective measure into an innovative force that rewards ethical foresight.

Many governance frameworks, and now also laws, already exist, and all of them focus on managing the risk from AI systems in one way or another: the EU AI Act is a product safety regulation, the NIST AI framework provides guidelines for risk management, and the White House Executive Order has a similar objective. Their idea is protection. It is a noble idea, and it is indeed absolutely necessary. But it deals with the risks post factum. Most of these frameworks work under the assumption that, say, ChatGPT is already there, it is being used, and we need to somehow frame it.

I would like to shift to a different perspective - a more philosophical and ethical one, but with very practical consequences: do we consider why we create AI and how it is ethically aligned with our values? I have not seen this question asked in most frameworks, because they are meant to be practical and help businesses grow their already created products. But what if we ask those questions BEFORE a product is created? What if we introduce a basic requirement that an AI use case must be ethically vetted before its creation, before any data is used for training, before parameters and weights are introduced?

Olivia Gambelin has already talked about this in her course "AI Ethics" (which I highly recommend): ethics can indeed be used as an innovation tool AT THE START OF the creation of a new technology. She gives the example of the app Signal, which follows a privacy-by-design concept, so before the product was created, it already included the ethical consideration that other similar communication services like WhatsApp are lacking. And that got me thinking: shouldn't all companies actually do that? Should we encourage our innovators to be ethical to begin with - not only scare them with fines when they misbehave, but reward them when they do well from the start?

My point is that we need to explore far more ethical and philosophical discussions about AI. Think about it: what if regulation was not there only to protect, but to incentivize companies to include ethical and philosophical considerations from the start? For example, introduce a basic new step in the AI Act that requires companies that create or use AI systems to look not only at the efficiency gains of their business model, but at the consequences the systems might have for society. That could include an "Ethics Review Board" within the AI Office that has real regulatory power and that can require companies to perform a "social impact assessment" of their product at a very early stage of development.
A company could show that the results of such an assessment amount to a socially desirable and positive outcome (e.g. privacy by design like Signal; a company reveals its datasets, proving their variety and showing how bias is tackled at the first steps of development; a chatbot discloses all the copyrighted material used and provides evidence that it has licensed all those materials). The reward for such behavior could be a certification of excellence by the EU, certifying that this product is of the highest quality and is recommended for usage over similar products that are merely compliant with the strictest requirements but have not done anything on the ethical side.

This is not unheard of - the pharma industry, biotechnology and nuclear technology have all undergone a similar development in the past. All of these industries and technologies started off without much or any specific regulation; then either a huge disaster was caused by them (e.g. Hiroshima and Nagasaki for nuclear technology; the Elixir Sulfanilamide disaster of 1937 for pharma) or, in the case of biotechnology, questions were raised early enough that social and ethical concerns were taken into account as part of its development. After such events, governance efforts started, and now all three of these examples are highly regulated industries from which society benefits (e.g. better healthcare and medicine; higher-quality research in biotech; nuclear disarmament). All these examples show a pattern in how transformative technology is normally governed: there is an initial phase of rapid and uncontrolled development, then often enough disaster strikes, and then there is reactive regulation, which develops into a mature governance process. I argue that for AI we need to be proactive, and so regulations like the EU AI Act are actually a good thing in principle, even if the details may not yet be that great.

The Asilomar Conference of 1975 provides a compelling precedent for proactive ethical governance. When scientists first developed the ability to splice DNA between organisms, they didn't wait for a disaster before establishing guidelines. Instead, leading researchers voluntarily paused their work and established a framework for safe development. This didn't slow progress - it enabled it by building public trust and clear guidelines. Following the Asilomar model, companies developing AI systems could (1) establish clear ethical review processes before beginning development; (2) create transparent documentation of ethical considerations; (3) engage multiple stakeholders in review processes; and (4) define clear boundaries for development. Such a process would take into account ethical and social concerns, which show us in a very specific way what the added value of a given AI system is. The question of whether an "intelligent" system actually contributes to a better society or is merely a cash-cow exercise becomes much easier to answer.

The aforementioned Ethics Review Board could be part of the AI Office, within its responsibility for ensuring Trustworthy AI. It should be staffed by a variety of experts from different fields, focusing on the humanities, with the technical experts active in other areas of the AI Office, such as benchmark creation. It would issue binding recommendations that the AI Office would publish and that would provide the requirements for companies to obtain a "certificate of excellence". I assume an adjustment of the AI Act would be needed in order to get all the details right.
However, I believe this is a noteworthy initiative, considering all the factors and the ultimate benefits for society. An initial ethical and philosophical review of our systems would not only be better for the majority of people, it would actually be advantageous for the business itself - it has already been shown that ethics can create a competitive advantage and brand loyalty towards a product (Signal vs. WhatsApp is one example: after WhatsApp released its updated T&Cs a few years ago, Signal gained millions of new users within days due to its stringent approach to data privacy). Considering ethical issues at the start also helps a lot with compliance, as regulations are merely the foundation upon which an ethical product is built. If you, as a company, think about risks and issues from the start, it is highly likely that you will easily be compliant with any laws, since they deal with exactly that - risks. If, on top of that, you go beyond mere compliance and create a product that not only fulfills the basic requirements but enhances the protection of rights and combines that with a good user experience and added value, that would mean lower compliance costs in the future thanks to the robust ethical framework, enhanced trust in the product, and future-proofing against evolving regulations.

What all this shows is that innovation isn't limited to technology itself - it extends to how we govern and regulate any new and impactful systems like AI. A thoughtful, balanced approach to AI policy will not only protect citizens more effectively but create opportunities for sustainable growth and competitive advantage. For practitioners and policymakers alike, the path forward is clear: ethical considerations must become the foundation of AI development, not an afterthought. As the AI governance landscape evolves, those organizations that embrace proactive ethical frameworks today will not only avoid regulatory headaches tomorrow but will build more trustworthy, valuable products that stand the test of time. The question we must ask ourselves is not whether we can afford to integrate ethics from the start, but whether we can afford not to.
- AI's Role in Law: Drawing Clear Boundaries for Democratic Society
This article deals with the relationship between legal work and artificial intelligence from the perspective of a legal professional working in AI governance. It aims to explore what impact generative AI tools specifically have on the legal profession and what part of judicial work, if any, should be done by machines. It focuses on the different aspects of legal work that GenAI could influence and asks what limitations should be imposed on AI in law. I argue that law should remain an exclusively human domain when it comes to core legal functions - drafting, enacting, interpretation, and judgment. While legal professionals may leverage AI tools, their use must be carefully bounded.

Technical Limitations of AI in Legal Understanding

LLMs fundamentally don't understand the context and the underlying social, political, and economic reasons for laws, judgments, and precedents. Laws and regulations are created based on political platforms - minimum wages, employee protection, and data privacy are examples of "social state" politics based on specific values. While LLMs are trained on vast amounts of data, they cannot comprehend how policies are created and formed among people. Humans have the ability to make connections between seemingly unrelated topics in a way machines cannot. AI relies mostly on statistical analysis of data, whereas humans draw upon a complex contextual and "natural" understanding of how society functions to make their decisions or recommendations. This human ability is particularly relevant in law, because law involves a detailed understanding of language, politics, philosophy, ethics and many other spheres of life. Good lawyers have a reasonable chance of getting the consequences of complex situations right, while AI is nowhere near capable enough to be trusted with such judgments. My own experience with the major GenAI tools shows me that they are like a rookie paralegal who knows everything but cannot make any connections between that knowledge and real-world issues. And that is a limitation that does not look like it will be solved any time soon.

Verification of AI's legal reasoning presents another significant challenge. Legal questions may have different answers in the same jurisdiction depending on small changes in facts, let alone across multiple jurisdictions. Recent cases demonstrate these limitations - for instance, ChatGPT's poor performance in a straightforward trademark matter shows that even basic legal reasoning remains beyond AI's current capabilities. In a court case in the Netherlands, it proved inconsistent in its reasoning and could not interpret the law in a way that was anywhere near good enough for a real-life situation before a judge. Source: https://www.scottishlegal.com/articles/dr-barry-scannell-chatgpt-lawyer-falls-short-of-success-in-trademark-case

Democratic and Accountability Framework

The transparency and accountability implications of AI in law are profound. While humans make mistakes in creating laws, they can be held accountable both politically and legally. Democratic society enables discussions on important topics that expose the motives behind politicians' decisions. As AI systems become more complex, their decision-making becomes increasingly opaque. Since machines cannot (yet) be held accountable in the same way humans can, delegating important decision-making to artificial intelligence would undermine democratic principles.
Furthermore, considering how a few enormous companies have taken control over AI technology, allowing AI into core legal functions would effectively place our legal system in their hands - a shift that threatens democratic principles and risks creating an authoritarian framework. Especially now, seeing how X and Meta are approaching the issue of "freedom of speech", it becomes abundantly clear that decisions critical for society are not made via representatives of the people; they are made by private actors with no regard for democracy or the rule of law.

The Human Nature of Law

Law, as part of the humanities, is fundamentally about understanding the human condition - something only biological beings can truly comprehend. Legal professionals develop crucial skills and insights through their research and practice that no AI system can replicate. Just as calculator dependence has diminished arithmetic skills, over-reliance on AI in legal work would be detrimental to future generations of lawyers, but with far more serious consequences for society. Legal work is not only like the movies, where a brilliant lawyer argues a case spectacularly in front of a judge. The majority of legal work is done outside of courtrooms, and it is important to understand that it is a crucial part of a democratic society. Outsourcing critical legal decisions to a non-human entity means that we would be outsourcing one of the most critical pillars of democracy - namely people's rights and freedoms - to machines that, as we have established already, have no real understanding of the world. They do not have the concepts of fairness, equity and equality coded into them. Furthermore, leaving important decisions to AI systems means that we are essentially handing over the judicial system to the companies creating those systems, creating a huge hole in our democratic oversight. AI systems may produce biased or unclear answers due to their overreliance on training data and the (very) possible lack of explainability in their decision-making.

Defining Clear Boundaries for AI in Legal Work

The role of AI in legal work should follow a clear hierarchical structure (a purely illustrative sketch of how such a tiering could be encoded appears at the end of this article):

Pure Administrative Tasks (Full AI Utilization Permitted):
- Document filing and organization
- Basic text editing and formatting
- Calendar management
- Simple template filling
- Repository maintenance

Hybrid Tasks (AI as Transparent Assistant):
- Legal research combining search and preliminary analysis
- Initial contract review for standard terms
- Template creation for complex legal documents
- Case law database management

For these tasks, AI must provide clear documentation of its methodology, sources, and reasoning. Legal professionals retain oversight and decision-making authority, with AI serving as a tool for initial analysis rather than final decisions.

Pure Legal Work (Exclusively Human Domain):
- Legal strategy development
- Final interpretation of laws and precedents
- Judicial decision-making
- Complex negotiation
- Legislative drafting

These boundaries would help legal professionals take advantage of the new technologies and use them to become better at their work. At the same time, vital democratic and judicial principles would still be upheld, and accountability of decision-making and algorithmic bias would be dealt with. And on the topic of bias: while both humans and AI systems exhibit biases, human biases can be addressed through training, conscious effort, and professional ethics frameworks.
Human decision-making processes can be scrutinized and challenged through established legal and professional channels - something not possible with AI systems. Moreover, automation bias could lead lawyers to rely too heavily on AI suggestions, potentially missing novel legal arguments or interpretations that might better serve justice.

Conclusion

The debate around AI in legal systems isn't just about technological capabilities - it's about the future of democratic governance itself. The limitations of AI in legal contexts extend far beyond technical constraints to touch the very foundations of how we create and maintain just societies. If we allow AI to penetrate core legal functions, we risk creating a dangerous precedent where complex human decisions are delegated to unaccountable systems. Instead of asking how we can integrate AI into legal decision-making, we should be asking how we can leverage AI to support and enhance human legal expertise while maintaining clear boundaries. This means developing explicit frameworks that define where AI assistance ends and human judgment must prevail. The legal profession has an opportunity - and a responsibility - to lead by example in showing how emerging technologies can be embraced thoughtfully without compromising the essentially human nature of professional judgment. The future of law doesn't lie in AI replacement, but in human professionals who understand both the potential and limitations of AI tools - and who have the wisdom to keep them in their proper place.
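As a purely illustrative footnote to the boundary framework above: the three tiers could, in principle, be encoded directly in the tooling a firm uses, so that AI features are gated by task category. The sketch below is hypothetical - the names (AIUse, TASK_BOUNDARIES, check_ai_use) and the task strings are my own illustration mirroring the article's list, not an existing product, standard, or library.

# Hypothetical sketch: encoding the three-tier boundary for AI use in legal work.
from enum import Enum

class AIUse(Enum):
    FULL = "full AI utilization permitted"      # pure administrative tasks
    ASSIST = "AI as transparent assistant"      # hybrid tasks, human retains authority
    NONE = "exclusively human domain"           # pure legal work

TASK_BOUNDARIES = {
    "document filing and organization": AIUse.FULL,
    "calendar management": AIUse.FULL,
    "legal research (search + preliminary analysis)": AIUse.ASSIST,
    "initial contract review for standard terms": AIUse.ASSIST,
    "legal strategy development": AIUse.NONE,
    "judicial decision-making": AIUse.NONE,
    "legislative drafting": AIUse.NONE,
}

def check_ai_use(task: str) -> AIUse:
    # Default to the human-only tier for any task not explicitly classified.
    return TASK_BOUNDARIES.get(task, AIUse.NONE)

if __name__ == "__main__":
    for task in ("calendar management", "judicial decision-making"):
        print(f"{task}: {check_ai_use(task).value}")

The design choice of defaulting unknown tasks to the human-only tier reflects the article's argument: where the boundary is unclear, human judgment should prevail.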
- Social Scoring 2
In an age where artificial intelligence and data-driven governance shape our daily lives, social scoring systems have emerged as one of the most debated innovations. These systems use algorithms to assign individuals scores based on their behavior, aiming to incentivize actions that align with societal goals such as trustworthiness and cooperation. While proponents highlight their potential to foster community trust and economic development, critics caution against their capacity to erode privacy, amplify inequalities, and institutionalize surveillance.

Social scoring systems rely on vast amounts of behavioral data, which are processed by algorithms to generate scores. These scores, often used to assess trustworthiness, influence access to resources, opportunities, and social standing. Experiments have shown that such systems can boost trust and wealth generation within communities, particularly when their mechanisms are transparent. However, transparency has its downsides. Individuals with lower scores frequently face ostracization and reduced opportunities, exposing the discriminatory potential of these systems. Transparency, while fostering procedural fairness, paradoxically enables reputational discrimination by giving others the tools to rationalize punitive actions against low-scoring individuals.

The ethical dilemmas of social scoring extend far beyond transparency. These systems operate in a delicate space where privacy and agency are often compromised. Their reliance on extensive data collection creates a thin line between governance and surveillance. In jurisdictions with weak regulatory frameworks, this surveillance risks becoming a normalized aspect of everyday life, leading to diminished public trust and stifled dissent. Moreover, the opaque decision-making processes in many social scoring implementations exacerbate the problem. When individuals cannot contest or understand how their scores are calculated, the lack of accountability undermines fairness and reinforces systemic biases, particularly against marginalized groups.

Efforts to regulate such systems are underway, most notably with the European Union's AI Act. This legislative framework explicitly prohibits social scoring practices that result in unjustified disparities, classifying them as posing "unacceptable risks". The Act also embodies the idea of "contextual integrity", requiring scores to be used strictly within the domains for which they were generated (a toy sketch of this idea appears at the end of this article). However, while these measures are a step in the right direction, enforcing them remains a significant challenge. Operational gaps persist, leaving room for exploitation and misuse, especially when private entities employ these systems for purposes beyond their original intent.

The paradox of transparency in social scoring systems lies in its ability to simultaneously foster and harm. On the one hand, transparency increases trust within communities and makes systems appear more legitimate. On the other hand, it also sharpens the social stratification inherent in these mechanisms. Individuals with higher scores are systematically trusted more, while those with lower scores endure social and economic penalties. This dynamic risks perpetuating inequities rather than addressing them.

The societal implications of social scoring extend beyond individual experiences. Communities subjected to transparent scoring systems often show reduced inequality and increased collective wealth.
However, these aggregate benefits do not erase the harm experienced by low-scoring individuals, who often find themselves locked out of opportunities. The broader societal stratification created by these systems mirrors historical patterns of exclusion, now amplified by the precision of algorithmic governance.

As these systems become more prevalent, their regulation must evolve to ensure fairness and inclusivity. While the EU AI Act provides a solid foundation, global frameworks are essential to address disparities in regulation and enforcement. Moreover, the design of social scoring systems must prioritize protecting the rights of vulnerable populations, ensuring that the potential benefits do not come at the cost of individual dignity.

The development and implementation of social scoring systems bring us to a critical crossroads. The question is not merely whether we can build such systems, but whether we should - and under what conditions. Striking the right balance between innovation and ethics will determine whether social scoring systems become tools for societal empowerment or mechanisms of control. As we shape these technologies, we must ensure they align with the principles of fairness, accountability, and human rights, keeping the well-being of all individuals at their core.
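For readers who prefer a concrete picture of the "contextual integrity" requirement mentioned above, here is a minimal, purely hypothetical sketch: a score carries the domain it was generated for, and any attempt to reuse it in another domain is refused. The names (ScopedScore, ContextViolation, use_score) are illustrative only and do not correspond to any provision of the AI Act or to an existing library.

# Toy sketch: a score tagged with its originating domain, refused outside that domain.
from dataclasses import dataclass

@dataclass(frozen=True)
class ScopedScore:
    value: float   # e.g. a reliability score between 0 and 1
    domain: str    # the context the score was generated for

class ContextViolation(Exception):
    """Raised when a score is used outside the domain it was generated for."""

def use_score(score: ScopedScore, requesting_domain: str) -> float:
    # Enforce that the score is only consumed within its original context.
    if requesting_domain != score.domain:
        raise ContextViolation(
            f"score generated for '{score.domain}' cannot be used for '{requesting_domain}'"
        )
    return score.value

if __name__ == "__main__":
    credit = ScopedScore(value=0.82, domain="loan underwriting")
    print(use_score(credit, "loan underwriting"))          # allowed: same context
    try:
        use_score(credit, "apartment rental screening")    # refused: different context
    except ContextViolation as err:
        print("blocked:", err)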
- About | EthicsandlawinAI
Our Team. We are Petko Getov and Yordan Vasilev, AI ethics and governance professionals.
- Yordan Vasilev, Tech Legal Professional
- Petko Getov, Tech and AI Lawyer