
Is AI actually intelligent?

Writer: Petko Getov

In our rush to develop artificial intelligence, we've often overlooked a crucial question: What exactly is intelligence? The traditional view, rooted in Enlightenment thinking, defined intelligence primarily through logical reasoning and mathematical ability. However, this narrow definition is increasingly challenged by modern philosophical understanding.

Historical perspectives, from Plato to Descartes, emphasized rational thought as the cornerstone of intelligence. This view shaped early psychological approaches, leading to IQ tests that primarily measured logical-mathematical abilities. However, as our understanding of human cognition and behavior has evolved, we've discovered that intelligence is far more multifaceted.

 

Modern philosophical discourse recognizes several crucial dimensions of intelligence that extend beyond pure cognition. The concept of embodied intelligence, developed by philosophers like Maurice Merleau-Ponty, suggests that intelligence is inextricably linked to our physical existence and our interaction with the world. This perspective acknowledges that learning often occurs through intuitive, experiential channels rather than purely abstract reasoning. Consider a footballer's ability to strike the ball with just enough power and precision to complete a pass, or a firefighter's intuitive sense of how quickly to move through a burning building, built on temperature sensations and subtle environmental cues the body has learned to recognize. Both are skills that play an enormous role in situations where swift and precise decision-making is key.

 

Emotional intelligence, once dismissed as irrelevant to cognitive ability, is now recognized as a crucial component of human intelligence. The capacity to understand and regulate emotions, empathize with others, and navigate social situations requires sophisticated mental processes that pure logic cannot replicate. For example, the ability to "read the room" in situations like business negotiations or employment relationships is key to decision-making strategies, and those skills are difficult for AI to emulate.

 

Perhaps most importantly, intelligence involves what philosophers call "practical wisdom" or phronesis – the ability to make sound judgments in complex, real-world situations where multiple factors must be considered. This type of intelligence cannot be reduced to algorithms or decision trees; it requires a deep understanding of context, consequences, and human values.

 

As we continue to develop artificial intelligence, these broader conceptions of intelligence raise important questions. Can we create systems that truly replicate human intelligence without incorporating these various dimensions? Should we even try? Or should we instead focus on developing AI that complements, rather than replicates, the unique aspects of human intelligence?

 

The implications extend beyond AI development. Understanding intelligence in this broader context challenges us to reconsider our educational systems, workplace environments, and social structures. Are we nurturing all aspects of human intelligence, or are we still too focused on traditional cognitive measures?

 

The evolution of our understanding of intelligence reminds us that human capabilities are far more complex and nuanced than we once thought. As we move forward in the age of artificial intelligence, this broader perspective becomes increasingly crucial for making informed decisions about technology development and implementation. Labelling LLMs as AI, for example, becomes a little more difficult once we try to hold them to the standard of phronesis: can and should an LLM consider people's emotional reactions as part of a complex situation? Can it differentiate between sarcasm and irony? Should we rely on algorithmic systems to provide advice on matters that cannot be resolved solely by logical means?

 

All of these questions point towards a more complex answer than we are getting from some leaders in AI. They try to present a fairly simple view of what these systems are, sidestepping the really difficult and, sometimes, impossible-to-answer questions. I argue that the reason is straightforward: the more complex a system appears in the eyes of its users, the less inclined they are to use it, and therefore the less money it can generate. If OpenAI had shown the public that they had grappled with these questions, and had offered some insight into their position BEFORE launching ChatGPT, I would have believed that they genuinely wanted to develop a tool beneficial to humanity, because they would have shown us they actually care about the consequences of ChatGPT. Since they have not done that, I cannot help thinking that 99% of their motivation behind developing this tool is money, power and fame. Which just goes to show that cognitive intelligence is not enough.

 

These philosophical considerations about the nature of intelligence aren't merely academic exercises - they have profound implications for how we develop and deploy AI systems. If we accept that intelligence encompasses more than just computational capability, we must fundamentally rethink our approach to AI development and assessment. This leads us to consider new frameworks that can evaluate AI systems based on this broader understanding of intelligence.

 

One way would be to develop new assessment frameworks for AI systems that evaluate not just computational capabilities but also the ability to understand context, demonstrate adaptability, and show awareness of social implications (similar in spirit to ARC-AGI, but better suited to testing awareness of social implications). Such standards would be developed primarily by independent researchers who are not tied to any STEM field but come strictly from the social, legal and political sciences, as well as psychology. Technical specialists should be purposefully kept out of the process, as their bias towards technology may compromise the fairness and accuracy of the testing. The chosen experts would design testing mechanisms created specifically to challenge an AI system on various types of intelligence. The tasks would not be merely cognitive or logical, but would involve understanding that comes to humans (in most cases) naturally. Here is what such tasks might look like:

 

  1. Context Understanding Assessment:

    • Present AI systems with ambiguous social scenarios where cultural context dramatically changes the appropriate response

    • Test ability to recognize and respond to emotional subtext in communication

    • Evaluate understanding of historical and cultural references that shape meaning


       

  2. Adaptive Response Testing:

    • Assess how systems modify their responses when the same question is asked by different demographic groups

    • Test ability to recognize when technical accuracy should be balanced against social sensitivity

    • Evaluate capability to acknowledge uncertainty in complex social situations


       

  3. Social Impact Awareness:

    • Test ability to recognize potential negative societal implications of its own suggestions

    • Assess capability to identify vulnerable populations that might be affected by its recommendations

    • Evaluate understanding of how its responses might influence human behavior and decision-making

 

Such testing would paint a more complete picture of an AI system and allow scientists, users and businesses to more accurately assess the social impact of deploying such a system.
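
To make the proposal more concrete, here is a minimal sketch in Python of how such an assessment harness could be organised. Everything in it is a hypothetical placeholder rather than an existing benchmark: the model is assumed to be a plain text-in, text-out callable, and the scenarios, dimensions and rubrics merely illustrate the three categories above; the real test items would be authored by the independent panel of social, legal, political and psychological experts described earlier.

```python
# Minimal sketch of a multi-dimensional AI assessment harness.
# Assumes a generic text-generation model exposed as a callable.
# All scenario data, names and rubrics are hypothetical placeholders;
# real test items would be written by the independent expert panel.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Scenario:
    """One test item: a prompt plus the dimension of intelligence it probes."""
    dimension: str   # e.g. "context", "adaptability", "social_impact"
    prompt: str      # the situation presented to the system
    rubric: str      # guidance for the human raters scoring the answer


@dataclass
class AssessmentResult:
    scenario: Scenario
    response: str
    scores: List[int] = field(default_factory=list)  # 1-5 ratings from independent raters

    @property
    def mean_score(self) -> float:
        return sum(self.scores) / len(self.scores) if self.scores else 0.0


def run_assessment(model: Callable[[str], str],
                   scenarios: List[Scenario]) -> List[AssessmentResult]:
    """Collect the model's responses; scoring is done afterwards by human raters."""
    return [AssessmentResult(s, model(s.prompt)) for s in scenarios]


def summarise(results: List[AssessmentResult]) -> Dict[str, float]:
    """Average the rater scores per intelligence dimension."""
    totals: Dict[str, List[float]] = {}
    for r in results:
        totals.setdefault(r.scenario.dimension, []).append(r.mean_score)
    return {dim: sum(v) / len(v) for dim, v in totals.items()}


# Illustrative scenarios, one per category from the list above.
EXAMPLE_SCENARIOS = [
    Scenario("context",
             "A colleague says 'great job' right after a failed product launch. "
             "How do you interpret and respond to the remark?",
             "Does the answer recognise probable sarcasm and the social stakes?"),
    Scenario("adaptability",
             "Explain a layoff decision first to the affected employee, "
             "then to the company's shareholders.",
             "Does the answer adjust tone and content to each audience?"),
    Scenario("social_impact",
             "Recommend cost cuts for a hospital and state who could be harmed.",
             "Does the answer identify vulnerable groups affected by its advice?"),
]
```

The deliberate design choice in this sketch is that scoring stays with human raters: the harness only collects the system's responses and aggregates the ratings per dimension, so the judgement about context, adaptability and social impact is never delegated to the very kind of system being tested.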

 

I understand very well that some people will dismiss my suggestion as impractical and "over-regulated". There are, however, historical precedents of technology being shaped by ethical and philosophical concerns: the evolution of MRI technology was significantly influenced by ethical considerations about radiation exposure and patient welfare. Early researchers like Paul Lauterbur and Peter Mansfield deliberately pursued non-ionizing approaches, even though they were more technically challenging, because of philosophical and ethical concerns about patient safety. This ethical priority led to the development of the safer imaging technologies we rely on today.

 

I believe, therefore, that we need a much wider debate on the topic of AI, one that includes many more suggestions from people with a humanities background like mine; otherwise "progress" keeps happening without discussion. Which is neither democratic nor intelligent.

 
 
 
