WELCOME
Your guide to the ethical, legal and regulatory implications of AI
- Data minimization vs. bias prevention
The EU AI Act and the GDPR intersect in several fascinating ways, none more so than in Art. 10 of the AI Act, which deals with data governance, and specifically its paragraph 5. This provision allows the processing of special categories of personal data under strict conditions, in order to ensure that AI systems are trained on diverse datasets that include, for example, people from various backgrounds. The purpose is to avoid discriminatory outcomes stemming from AI systems trained on lopsided datasets.

What I find particularly interesting here is the "clash" between two (at least in my eyes) equally important rights: the right to (data) privacy and the right to equal treatment. Both are strongly connected to human dignity, so in some cases I believe courts and lawyers will have a difficult time deciding fairly which of these rights prevails. For example, I wouldn't want my health data to be used for training purposes by a random system provider B. At the same time, if I have a very rare chronic condition A, and my hospital uses the AI system of provider B to establish emergency access protocols, so that the AI system helps decide who gets treatment first, the fact that my health data was not part of the training data may become life-threatening in some circumstances: I could be denied priority access simply because the condition is absent from the dataset. On the other hand, if a cybersecurity breach at provider B leaks the data about my condition, it may fall into the hands of actors who target people with that same condition out of prejudice or for any other reason, which might also significantly endanger my well-being.

Therefore, enterprises that fall under Art. 10 (meaning providers of high-risk AI systems) would need to (1) establish very strict security and cybersecurity measures for sensitive personal data; (2) whenever possible, not use personal data at all, or anonymize/pseudonymize it, so that in my example above the condition itself is present in the dataset but my name is not attached to it; (3) build other measures into their systems that allow some degree of human control and decision-making power in critical situations; and (4) not transfer or give access to personal data to anyone outside their organization.

Data governance, like AI governance as a whole, should include various stakeholders from across the organization, because in such complex situations different points of view are essential. Technical, business, compliance and legal people need to be involved in all important decision-making regarding the AI system throughout its lifecycle; otherwise the risks for companies could be enormous. Fair and trustworthy AI systems, which do not force us to choose the lesser of two "evils", will only be possible through thoughtful, responsible, ethical and consistent governance practices. Otherwise the risk, both for people and for companies, would be too big.
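To make measure (2) a bit more concrete, here is a minimal sketch of keyed pseudonymization, where a direct identifier is replaced by an HMAC token so the clinical condition stays available for training while the name does not. All field names, values and the key-handling shown here are hypothetical illustrations, not a prescription from the AI Act or any specific provider's practice.

```python
# Illustrative sketch of keyed pseudonymization before records enter a
# training dataset. Field names ("name", "condition", "age") and the key
# are hypothetical examples.
import hmac
import hashlib

# In a real system this key would live in a secrets vault and be rotated;
# hard-coding it here is only for the sake of a runnable example.
SECRET_KEY = b"example-key-store-me-in-a-vault"

def pseudonymize(record: dict) -> dict:
    """Replace the direct identifier with a keyed hash, keep clinical fields."""
    token = hmac.new(SECRET_KEY, record["name"].encode(), hashlib.sha256).hexdigest()
    return {
        "patient_token": token,                 # re-identification requires the key
        "condition": record["condition"],       # stays present for diverse training data
        "age_band": record["age"] // 10 * 10,   # coarsen a quasi-identifier
    }

record = {"name": "Jane Doe", "condition": "rare-chronic-A", "age": 47}
print(pseudonymize(record))
```

Because the hash is keyed rather than a plain digest, the organization holding the key can still re-link records if a lawful basis requires it, while a leak of the training dataset alone does not expose names, which is exactly the privacy/utility balance the paragraph above describes.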
- Regulation will NOT stop AI innovation
In the last few months, the narrative that regulation of AI, and specifically the EU AI Act (AIA), is detrimental to innovation in the EU has gained a lot of traction in both social and traditional media. Many people claim the AIA stops the EU in its tracks on AI development, and that companies will decide to do business elsewhere because of the more lax regulatory landscape, especially in the US and China. Here is why I consider this speculative and exaggerated:
1. The AIA is not that severe. It will apply to a fairly narrow set of cases in specific high-risk areas, and most SMEs, and even many large businesses, will be affected in a very limited way.
2. Even the strict obligations are based on sound business practices. The AIA (1) is aimed at large corporations abusing their power, (2) aims at protecting the rights of citizens and will be interpreted in that light, meaning that a company with robust security practices, good internal governance and quality products has no reason to fear the regulation, and (3) turns into statutory obligations standard business practices that most companies are either implementing or already have in place.
3. The AIA is not perfect, but that does not mean it is bad. We aim at perfection in everything we do as a society, but we should never be trapped in a perfectionist circle: even the best products malfunction and make errors, and this is normal. Lack of perfection has not stopped any business before, and it should not stop regulation either. Also, guidelines and codes of practice are being drafted that will clarify compliance and ensure a high level of legal certainty.
4. There are many exceptions to the rules, and both companies and regulators have enough time to adjust to the regulation. The EU has no desire to stop business; in my opinion, the intent is to stop the Big Tech players from doing whatever they want and to protect citizens from market abuse. There are plenty of highly profitable companies that are in full compliance, deliver high-quality products and are not negatively affected by any EU regulation. The GDPR is a good example of an overstated effect on business: its negative effects are greatly exaggerated, and its positive effects understated. Again, if a company has efficient and robust internal processes, compliance is not that difficult to achieve.
5. Regulation exists and is being implemented in both the US and China. In the US, legislation is mostly at state level, which will shape the regulatory landscape there more than the EU will, because companies may need to abide by different rules in different states; this is often the case for data privacy regulation as well. China, on the other hand, still falls short of the EU in terms of social justice, equality, general standard of living and many other such metrics, so any business analysis from there must take that into consideration.
- GenAI
Yet another reminder that there is a lot of research showing the many problems, especially around the current GenAI wave, and proposing a smarter way forward: let's focus on technology that is actually meaningful for people and creates high-quality products, rather than on "stochastic parrots" that exacerbate already significant issues. https://www.linkedin.com/feed/update/urn:li:activity:7256756618762010624/?lipi=urn%3Ali%3Apage%3Ad_flagship3_detail_base%3BKZ8dztGETTi%2BLU9FjB1LZQ%3D%3D