
Follow-up to Petko Getov's Bias article.

Writer: Yordan Vasilev

In one of his articles, Petko Getov provides an overview of what Ethics in AI is.

Linked here:  

To follow up on his point, I found some interesting examples of how training data sets can solidify biases that are very difficult to overcome.


I will use two examples: watches and people writing.


As you can see below, these are AI-generated watches. All of them show 10:10, even though every prompt asked for 12:02 to be displayed. Why is that? Nearly every watch sold online is photographed showing 10:10, because that time is the most presentable and best displays the beauty of the design. So when the image generator was trained, a very strong association was established: if there is a watch, it has to display 10:10. How can we solve this problem? To reiterate the point made in the article above: Human-Centricity.

There must be a human in the loop, the system's impact on the end user must be prioritized, and the system must generate value for humans.
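To make the mechanism concrete, here is a minimal sketch of the kind of audit a human in the loop might run before training: counting which times actually appear in the captions of watch photos. The hard-coded caption list and the regex are hypothetical stand-ins for real scraped metadata, not part of any actual pipeline. If one value dominates the distribution, the model will learn it as the default regardless of the prompt.

```python
import re
from collections import Counter

# Hypothetical captions of watch product photos; in a real audit these
# would be loaded from the dataset's metadata, not hard-coded.
captions = [
    "Chronograph watch, hands at 10:10, stainless steel",
    "Dress watch showing 10:10 on a leather strap",
    "Dive watch set to 10:10",
    "Vintage field watch displaying 07:42",
    "Minimalist watch, time reads 10:10",
]

TIME_PATTERN = re.compile(r"\b(\d{1,2}:\d{2})\b")

def displayed_times(texts):
    """Extract the time shown on each watch from its caption, if present."""
    for text in texts:
        match = TIME_PATTERN.search(text)
        if match:
            yield match.group(1)

# Count how often each time appears across the training captions.
distribution = Counter(displayed_times(captions))
total = sum(distribution.values())

for time_shown, count in distribution.most_common():
    print(f"{time_shown}: {count}/{total} ({count / total:.0%})")
```

On this toy list the output shows 10:10 accounting for four out of five captions, which is exactly the kind of skew that turns a photography convention into a model's hard-wired default.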




Another example is images of people writing: the person always writes with their right hand. Again, most images of people writing show right-handed writers, because that matches the statistical distribution of humans – most of us are right-handed.



Now these are obvious problems.


But this can be extrapolated to many fields where AI is applied: credit scoring, social scoring, and so on. Bias is part of human nature, and it is reflected on the web, because the web is a mirror of our society. AI is amazing, but it needs to serve us for good!
