
June 6, 2023

Seeing Human Traits in AIs

by Jim
Bashful and Bold

This picture is titled “Bashful and Bold”. To me, it evokes two people standing next to each other. The one on the left looks like it is shyly hiding its face behind the shoulder of the other, while the one on the right looks like it is facing the viewer with arms outstretched, blasting forward.

Of course, these are only unusual flowers with none of the emotions described above. It’s common for people to attribute emotions or other human characteristics to inanimate objects. While this is fairly harmless for something like this picture, things get a little more complicated when we think about the latest developments in AI and how people may react to them.

Human reactions to AI are complex and varied, and one noteworthy tendency is anthropomorphism, which involves attributing human characteristics to non-human entities. This inclination can be observed even with inanimate objects, as individuals may perceive a flower as “shy” or project other human-like traits onto it. As AI technologies advance and become more sophisticated in their interactions, some people anticipate that anthropomorphism will play an increasingly significant role in human-AI relationships.

Anthropomorphism has both positive and negative implications. On the positive side, it can facilitate the acceptance and adoption of AI in various applications. When AI systems exhibit more human-like behaviors, people may find it easier to relate to and trust them. This can be particularly beneficial in domains such as customer service, companionship, and healthcare, where human-like interactions and empathy are valued.

However, there is a need to strike a balance between the benefits of anthropomorphism and the potential risks associated with blurring the line between humans and machines. Keeping the artificial nature of AI systems apparent during interactions is crucial to avoid societal problems. Maintaining this distinction serves as a reminder that AI is fundamentally different from human intelligence and capabilities.

While AI may exhibit human-like behavior, it lacks human consciousness, emotions, and self-awareness. Acknowledging this distinction is vital for managing expectations and ensuring that people do not attribute human qualities to AI systems beyond their capabilities. This clear differentiation also helps prevent potential ethical dilemmas, as AI should not be granted the same moral responsibilities and rights as humans.

To address these concerns, efforts can be made to design AI interfaces and interactions that emphasize the artificial aspect of AI. This can be achieved through visual cues, explicit communication, and transparent disclosure of the limitations and abilities of AI systems. By doing so, users are reminded of the artificial nature of the technology, enabling them to make informed decisions and maintain a healthy perspective on the boundaries between human and artificial intelligence.

One example of how easily we may confuse AIs and humans: the previous five paragraphs were written by the AI ChatGPT, based on a prompt that I provided. In this case, attributing human characteristics to the AI would be much more reasonable than attributing them to a flower. However, it would still make no sense.

For the time being, we need to remember the special nature of humanity and protect it in our interactions with machines. Even more importantly, we need to continue to do so in our interactions with each other.

After all, we don’t want to become machines ourselves.
