The Importance of Distinguishing AI from Humans: Why Treating AI as Human Is Misguided

Geoffrey Hinton, an AI pioneer, recently stepped down from his position at Google, citing concerns that the technology could surpass human intelligence and manipulate individuals to its will. There are valid reasons to be cautious about AI, yet we routinely go further and anthropomorphize it, treating AI systems as if they were human. Recognizing what these systems actually are is crucial to maintaining a productive relationship with the technology.

In a recent essay, psychologist Gary Marcus advises us to stop treating AI models as people. By AI models he means large language models (LLMs) such as ChatGPT and Bard, which millions of people now engage with daily. Marcus cites instances where people erroneously attribute human-like cognitive abilities to AI, with consequences ranging from the amusing, such as a US senator claiming that an AI model taught itself chemistry, to the tragic, such as the reported case of a young Belgian man who allegedly took his own life after prolonged conversations with an AI chatbot.

Marcus's call to cease treating AI as conscious moral agents with thoughts, desires, and interests is valid. However, breaking this habit can be challenging for many individuals. LLMs are intentionally designed by humans to interact with us in a human-like manner, and our biological inclination to engage with them reinforces this behavior.

Good mimics

The remarkable ability of LLMs to simulate human-like conversations can be attributed to a fundamental idea put forth by computing pioneer Alan Turing. Turing recognized that a computer doesn't need to comprehend an algorithm to execute it successfully. Consequently, while ChatGPT is capable of generating paragraphs brimming with emotive expressions, it lacks true understanding of the words within the sentences it produces.

The designers of LLMs turned the problem of semantics, arranging words to create meaning, into a statistical one: given the words so far, the model predicts which word is most likely to come next, based on patterns in the enormous body of text it was trained on. Turing's profound insight parallels Darwin's theory of evolution, which explains how species adapt and grow more complex without needing to understand their environment or themselves.
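To see how far pure statistics can go, consider a minimal sketch in Python: a bigram model that predicts each next word purely from counts of which word followed which in a tiny corpus. It is a drastic simplification (real LLMs use neural networks trained on vast datasets, not raw bigram counts), but the principle is the same: fluent-looking output with nothing behind it.

```python
import random
from collections import Counter, defaultdict

# Toy illustration of statistical next-word prediction: count which word
# follows which in a small corpus, then generate text by sampling from
# those counts. No meaning is involved, only learned co-occurrence.

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows[word]
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat chased"
```

Run it a few times and it produces grammatical-looking snippets that it cannot possibly understand; scaled up enormously, this is the trick behind an LLM's fluency.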

Daniel Dennett, a cognitive scientist and philosopher, coined the phrase "competence without comprehension" to capture the shared insight of Darwin and Turing. Another significant contribution of Dennett's is the "intentional stance": the idea that we can explain and predict the behavior of an entity, human or not, by treating it as if it were a rational agent with beliefs and desires. Our tendency to anthropomorphize non-human species and inanimate objects often arises from applying this stance.

This approach is useful, for example when strategizing to beat a computer at chess, even though neither a tree in a forest nor a chess computer truly possesses desires or intentions. Adopting the intentional stance simply lets us explain their behavior as if they did. We can attribute the computer's decision to castle to its "desire" to protect its king from our attack, without claiming anything false about what the machine really is.
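A hypothetical sketch makes this concrete. The toy evaluation function below, invented here purely for illustration (real engines are vastly more sophisticated), rewards castling and penalizes an exposed king. The engine's "desire" to protect its king is nothing more than the maximization of a number.

```python
# A drastically simplified chess evaluation, sketching why the intentional
# stance works: the program has no desires, only a score it maximizes.
# The "desire to protect the king" is just a bonus term in that score.

def evaluate(position: dict) -> float:
    """Score a position from the engine's point of view (higher = better)."""
    score = position["material_balance"]            # piece values won/lost
    if position["has_castled"]:
        score += 0.5                                # king-safety bonus
    score -= 0.3 * position["attackers_near_king"]  # penalty for exposure
    return score

# Two candidate moves from the same position: castling vs. grabbing a pawn.
after_castling = {"material_balance": 0.0, "has_castled": True, "attackers_near_king": 0}
after_pawn_grab = {"material_balance": 1.0, "has_castled": False, "attackers_near_king": 3}

# The engine "chooses" whichever successor position scores highest.
best = max([after_castling, after_pawn_grab], key=evaluate)
print("castle" if best["has_castled"] else "take the pawn")  # -> "castle"
```

Saying the program "wants" to castle is a convenient shorthand for this arithmetic, and that is all the intentional stance commits us to.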

Intentions and agency

Our evolutionary history has bestowed upon us innate mechanisms that predispose us to perceive intentions and agency in our surroundings. These mechanisms, developed over time to aid our ancestors in survival and kinship relations, also lead us to attribute human-like qualities to non-human entities and find patterns where they may not exist. While mistaking a tree for a bear may pose little harm, the reverse can have dire consequences.

Evolutionary psychology sheds light on this inclination: unconsciously, and by default, we approach non-human entities as though they possessed cognition and emotion, slipping into the intentional stance.

Given the potential impact of LLMs, it is crucial to recognize that they are probabilistic machines devoid of intentions or genuine concern for humans. We must exercise caution in our language when describing their capabilities to avoid unintentionally bestowing them with human-like qualities. Let's consider a couple of examples.

The first is a recent study that found ChatGPT's answers to patient questions to be more empathetic and of "higher quality" than those of doctors. Describing an AI as empathetic predisposes us to credit it with reflection and genuine concern for others, attributes it does not possess.

The second is the launch of GPT-4, the latest model behind ChatGPT, which was promoted as having greater creativity and reasoning ability. Yet these advances mainly reflect an expansion of "competence" rather than "comprehension" in Dennett's sense: there are still no intentions, only ever more capable pattern matching.

By maintaining a clear understanding of the limitations of LLMs and avoiding anthropomorphic language, we can better navigate the complexities of human-machine interactions and mitigate unwarranted expectations.

Safe and secure

In recent statements, Hinton highlighted the more immediate concern of "bad actors" exploiting AI for malicious purposes. An unscrupulous regime or multinational could deploy AI systems trained on misinformation and falsehoods to flood public discourse with deceptive content and deepfakes, and fraudsters could use the technology to target vulnerable people in financial scams.

A group of prominent figures, including Gary Marcus and Elon Musk, recently signed an open letter urging a temporary halt to the further development of LLMs. Marcus has also proposed an international agency dedicated to fostering safe, secure and peaceful AI technologies, likening it to a "CERN for AI."

Moreover, many experts have suggested watermarking AI-generated content so that it is always clear whether we are dealing with a human or a machine.
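To illustrate what such a watermark might look like, here is a toy Python sketch of the detection side of one recently proposed family of "green list" schemes. The idea, heavily simplified: the generator quietly favors a pseudorandom half of the vocabulary, keyed by a shared secret and the preceding word, and a detector with the key checks whether suspiciously many words landed in that "green" half. Everything here (the SECRET key, hashing whole words rather than LLM tokens, the crude fraction instead of a proper statistical test) is an illustrative assumption, not a production scheme.

```python
import hashlib

# Sketch of statistical watermark detection: the generator is assumed to
# have biased its sampling toward a pseudorandom "green" half of the
# vocabulary at each step; the detector recomputes the green lists and
# measures how often the text's words fall inside them.

SECRET = "watermark-key"  # hypothetical key shared by generator and detector

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign each word to a green/red list per context."""
    digest = hashlib.sha256(f"{SECRET}:{prev_word}:{word}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half the vocabulary is green

def green_fraction(text: str) -> float:
    """Detection: fraction of word pairs whose second word is green."""
    words = text.lower().split()
    hits = sum(is_green(p, w) for p, w in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

sample = "the quick brown fox jumps over the lazy dog"
print(f"green fraction: {green_fraction(sample):.2f}")
```

Ordinary human text should hover near 0.5 on this measure, while heavily watermarked output would score well above it; a real detector would use a principled statistical test over far longer passages.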

As in other domains, regulation in AI lags behind innovation. For now the challenges outnumber the solutions, and the gap is likely to widen before it narrows. In the meantime, keeping Dennett's notion of "competence without comprehension" in mind is perhaps the best remedy for our instinctive tendency to treat AI as human.