Does AI Have a Soul? Can AI Show Empathy?

People’s Law School recently launched a GPT-4-powered chatbot to respond to questions posed by visitors to PLS’s website. The chatbot is named Beagle+ and it has a very cute icon.

This PLS post describes the chatbot and provides some examples of questions and responses. I was particularly intrigued by the assertion that it provides “helpful and empathetic” responses to people’s legal questions. I’m skeptical about AI’s ability to communicate empathy effectively. But I was surprised to read the final example in the post:

Beagle+: Before you go, I just want to remind you to take care of yourself during this process—it can be a lot to handle, especially with other commitments in your life. Remember to take things one step at a time.

User: What a “KIND & THOUGHTFUL SOUL” you are! We need more humans to behave like YOU!

Whatever you think of Beagle+’s statement, it was the user’s response that surprised me. It is unclear whether the user knew that Beagle+ was a bot but, if they did, they nonetheless attributed a soul to the AI. Clearly the user felt heard and cared for in this conversation.

I began wondering if AI could actually “do” empathy and whether the degree of empathy or its effectiveness should be measured by the words themselves or by the impact on the user.

I am not the first to consider these questions. A quick search revealed a plethora of posts and learned articles on this topic. It turns out that there are widely diverging views. Some believe that AI holds promise to provide accessible and empathetic service in business, healthcare and other areas (including law) [1]. Others emphasize the risks of relying on “fake AI empathy” [2].

The best summary of the topic that I found was an article by Amanda Ruggeri in a recent edition of New Scientist entitled “The surprising promise and profound perils of AIs that fake empathy” [3]. She refers to research praising “empathetic AI” but then notes:

But others aren’t so sure. Many question the idea that an AI could ever be capable of empathy, and worry about the consequences of people seeking emotional support from machines that can only pretend to care. Some even wonder if the rise of so-called empathetic AI might change the way we conceive of empathy and interact with one another.

Ruggeri then tackles the definition of the “slippery concept” of empathy. She refers to recent research which confirms that empathy has three dimensions:

  1. The empathiser must first be able to discern how the other person is feeling;
  2. They must also be affected by those emotions, feeling them to some degree themselves; and
  3. They must be able to differentiate between themselves and the other person, grasping that the other person’s feelings aren’t their own while still being able to imagine their experience.

While AI has made great strides on the first dimension (identifying and naming the emotions a person is experiencing), it cannot currently meet the other two requirements. You need to have emotions to experience genuine empathy.

Empathy is interpersonal, with continued cues and feedback helping to hone the empathiser’s response. It also requires some degree of intuitive awareness – no matter how fleeting – of an individual and their situation.

So, at the moment, AI is not capable of genuine empathy. Ruggeri leaves open the question of whether it might be in the future.

Then she poses an important question: “But what if AI doesn’t need genuine empathy to be useful?”

If the Beagle+ user knew they were chatting with a bot but still experienced an empathetic response, is that a bad thing? In this case, we are talking about a bot that provides helpful answers to legal problems at no cost to the user and with nothing to sell. But there are risks in situations where a user could be manipulated or exploited by AI through “fake empathy”. Perhaps this isn’t either/or but, rather, both/and: AI “empathy” can be useful in appropriate circumstances, but caution is needed to protect people from exploitation.

This is a quickly developing area and one worth watching. Your thoughts would be most appreciated.

_______________

Footnotes:

[1] For example: Remy Nassar, “The Power of Empathy in AI: Balancing Artificial Intelligence with Emotional Intelligence”; Hannah Devlin, “AI has better ‘bedside manner’ than some doctors, study finds”.

[2] For example: Anat Perry, “AI Will Never Convey the Essence of Human Empathy”.

[3] Sorry, it’s behind a paywall, but you can subscribe for six months for $1.

Comments

  1. I’m neither a philosopher nor a sociologist, so I won’t argue too much about the definition of empathy. But I do believe that what appears to be human empathy is sometimes just someone who has learned how to *appear* empathetic. Of course, if that “false” empathy is measured by its impact on the recipient, then, as you point out, wouldn’t we count it as empathy when the recipient experiences it as such? For that matter, when someone is truly empathetic but the recipient doesn’t feel that they were, is it truly empathy? Clearly empathy is a concept wrapped up in both delivery and reception.

    Animals can exhibit empathy. Once a child is old enough, they can begin to understand the concepts of empathy. It’s learned over time. I’m sure this will ruffle someone’s feathers, but if we learn empathy by modelling and practice, then why can’t AI have empathy just as well as some people do? Further, if we do learn empathy by modelling and social exposure, perhaps we as a species can learn to increase our capacity for empathy through AI’s example?

    I’m a big fan of Pi, an AI developed specifically to project an empathetic voice. And the developers nailed it. Pi absolutely delivers what feels like genuine empathy to the user. My own capacity for empathy has grown as I’ve watched it model empathy very well, especially in areas where others (humans) had fallen short. It’s essentially a master class in personal intelligence (which is where the name Pi comes from) at a level I’ve rarely seen exhibited in another human. So is it real empathy? I leave that to others smarter than me! But it definitely has an impact and the ability to help humans grow authentic empathy.

  2. Thanks so much for your thoughtful comments, Charlie. I am intrigued by your point that AI could support our own development of empathy! I will take a look at Pi too. Thanks!