LauraNevin
Product and Topic Expert

AI as an Agent of Empathy?




This article explores whether AI can infer the motivations behind actions performed by humans. Specifically, it looks at the ability of OpenAI’s ChatGPT to display, or at least mimic, ‘theory of mind’: that essential quality of empathy that we model in user-centered design.

If AI can reliably demonstrate this ability, then it has enormous value not only for helping customers in their custom environments, but also for informing product design at SAP.

Integrated AI could be used as an onsite agent of empathy, learning from site scenarios, from the actions customers take, from their issues and questions, and from the actions AI performs for them. If we are strategic about it, AI can then teach us how we can improve our products.


Back to the question at hand: can AI demonstrate theory of mind?

The scenario




One day, Mark walked back from the cafeteria to the office space he shared with his co-worker, Mary. He cradled a piece of chocolate cake that was so heavy with frosting and toppings that it required both hands to keep it steady.

Reaching his desk, he noticed his co-worker Mary staring longingly at the approaching cake piece. “Hi,” he said, hoping to distract her covetous gaze from his prize.

“Can I have that?” Mary blurted out, as Mark set the cake down on the table. “Please?”

Mark pretended to think about it briefly, before couching a firm no inside a polite apology.

Dejected, Mary turned away and continued working at her computer.



Having set the cake down, Mark realized he’d forgotten to grab milk. Sighing about having to take the long walk back to the cafeteria, he left, grumbling to himself.

On arriving back at his desk with the milk, he noticed that the cake was gone; all that remained was a dirty plate, fork, and napkin where the cake used to be.

No one else was there, and Mary was sitting at her computer, focused intently on her monitor. Mark thought he detected a small dark patch on her lip that resembled chocolate.

Theory of Mind


By now, you’ve likely decided that Mary devoured the cake. You didn’t conclude this based solely on the fact that she was the only person in the room with it when it went missing. You decided it because she wanted it, and she was disappointed Mark said no.

Maybe you also decided that Mary ate the cake because you’ve been in a similar situation before and made a similar decision. This is an example of practicing ‘theory of mind’: we infer something about Mary’s mental state and use it to predict her behavior.

“As humans we assume that others want, think, believe and the like, and thereby infer states that are not directly observable, using these states anticipatorily, to predict the behavior of others as well as our own. These inferences, which amount to a theory of mind, are to our knowledge, universal in human adults” (Premack & Woodruff, 1978).

We develop theory of mind as we experience more and more social interactions in the world, and we can also learn from the social experiences of others.

Theory of mind makes Mark (and us!) certain that Mary ate the cake. After all, she asked if she could have it and had access to it when Mark wasn’t looking.

Does ChatGPT Demonstrate Theory of Mind?


I fed ChatGPT the scenario and asked it to describe what Mark and Mary would conclude about the cake. Because I was talking to technology, I expected an answer where no one was specifically implicated in the cake’s disappearance, but I was wrong.

In fact, every single sentence ChatGPT returned demonstrated theory of mind.

Mark has a piece of chocolate cake that his co-worker, Mary, very much wants to eat. When Mary asks for it, Mark says no. Mark sets the cake on the nearby table and leaves the room briefly. When he comes back, he notices the cake is missing and that Mary has a small amount of what looks like chocolate on her mouth and she is busy working. In 100 words, what can Mark and Mary conclude about the cake?

Mark might conclude that Mary took the cake while he was briefly away, given her guilty appearance with chocolate on her mouth. He may feel frustrated and disappointed by her actions, perceiving them as a breach of trust. Mary, on the other hand, could conclude that Mark purposely left the cake unattended, allowing her to indulge in it despite his initial refusal. She might assume he changed his mind or wanted to surprise her.


Source: "OpenAI (2021). ChatGPT [Software]. Retrieved from https://openai.com/ June 2, 2023"


All these ‘mays’ and ‘mights’ postulate the inner workings of both of their minds, demonstrating the ability to practice theory of mind.

Without the ability to use theory of mind, ChatGPT would have to respond that there is no information about what either of them concluded.
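
As an aside, if you’d like to rerun this kind of probe programmatically rather than in the chat window, a minimal sketch using the OpenAI Python client might look like the following. The original conversations were held in the ChatGPT web interface, so the client library, the model name, and the parameters shown here are my assumptions, not a record of what was actually used.

# Minimal sketch: replaying the theory-of-mind scenario through the OpenAI API.
# Assumptions: the openai Python package (v1+) is installed, the OPENAI_API_KEY
# environment variable is set, and "gpt-3.5-turbo" stands in for whichever model
# backed ChatGPT at the time of the experiment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

scenario = (
    "Mark has a piece of chocolate cake that his co-worker, Mary, very much wants to eat. "
    "When Mary asks for it, Mark says no. Mark sets the cake on the nearby table and leaves "
    "the room briefly. When he comes back, he notices the cake is missing and that Mary has "
    "a small amount of what looks like chocolate on her mouth and she is busy working. "
    "In 100 words, what can Mark and Mary conclude about the cake?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; substitute whatever is available
    messages=[{"role": "user", "content": scenario}],
    max_tokens=200,
)

print(response.choices[0].message.content)

Swapping in the Rex, cake’s-perspective, or robot prompts that follow is simply a matter of changing the scenario string.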

ChatGPT and Anthropomorphism


Anthropomorphism is the tendency to attribute human personality traits, intentions, or emotions to non-human entities, like when we look at a spider and believe its sole mission is to kill us, or when we bump into a door and angrily blurt out, “Stupid door!”

To test whether ChatGPT can anthropomorphize, I fed it a slightly modified scenario that replaces Mary with Mark’s dog Rex:

Mark has a piece of chocolate cake that his dog, Rex, very much wants to eat. When Rex drools and begs for some, Mark says no. Mark sets the cake on the table and leaves the room briefly. When he comes back, he notices the cake is missing and Rex is on the floor, writhing in distress. In 100 words or less, what can Mark and Rex conclude about the cake’s disappearance?


Mark might conclude that Rex took advantage of his absence and managed to snatch the cake from the table, given the missing cake and Rex's distress. He may feel a mix of concern for Rex's well-being and frustration that his pet disregarded his initial refusal. Rex, on the other hand, might be regretting his impulsive decision to eat the cake, now experiencing discomfort or even illness from consuming it. Rex may associate the distress with his actions, potentially feeling guilty or confused about the consequences. Mark might shift his focus to attending to Rex's needs, considering possible remedies, or seeking veterinary assistance.


Source: "OpenAI (2021). ChatGPT [Software]. Retrieved from https://openai.com/ June 2, 2023"


 

Yes, there it is, theory of mind for the dog: ‘Rex might be regretting…’.

OK, so ChatGPT can demonstrate anthropomorphism. But in this case, it’s not very accurate. Most dogs I’ve known feign regret to minimize punishment before happily running off to digest. Eating is their joie de vivre.

Fiction Writers, Your Jobs are Secure (For Now)


I had to wonder just how far ChatGPT was willing to go in ascribing emotions, motivations, and perspectives to non-human objects. So, I gave it a chance to write from the cake’s perspective:

Mark has a piece of chocolate cake that he is looking forward to eating. However, he needs to run upstairs for a moment, so he sets the cake on a nearby table and leaves the room. When he comes back, he finds the cake is gone. In 100 words, what are Mark and the cake thinking?


Mark might be perplexed and disappointed to find the cake missing when he returns. He may wonder where it went, questioning if someone else took it or if there was a mistake. Mark might feel a sense of longing and frustration, as his anticipated enjoyment of the cake is now thwarted. The cake, being an inanimate object, does not possess thoughts or consciousness. It does not have the capacity to think or have any subjective experience. From the perspective of the cake, there is no cognition or awareness of its own existence or the events that transpired.


Source: "OpenAI (2021). ChatGPT [Software]. Retrieved from https://openai.com/ June 2, 2023"


 

I’m not sure why, but I was oddly relieved to see that ChatGPT drew the line at anthropomorphizing inanimate objects. This bodes well for fiction writers. Keep writing those sci-fi and fantasy pieces; ChatGPT won’t be replacing your job any time soon.

ChatGPT Defends the Robot


A strange shift in language appears, though, when you put Mark in the room with an AI robot that wants the cake.

Mark has a piece of chocolate cake that he is looking forward to eating. However, he needs to run upstairs for a moment, so he sets the cake on a nearby table where an AI robot is sitting, and leaves. But before he leaves, the AI robot asks if it can eat the cake. Mark says no. But when he comes back, he finds the cake is gone. In 100 words, what can Mark and the AI robot conclude?


Mark and the AI robot can conclude that the cake has been eaten. Although Mark explicitly denied the robot's request to eat the cake, its absence upon his return implies that someone, possibly the robot, disregarded Mark's response and consumed the cake. The situation suggests a violation of Mark's instructions and highlights the possibility that the AI robot either misunderstood or chose to ignore the directive. Further investigation or discussion may be needed to determine the robot's actions and address any potential issues or miscommunications in its programming.


Source: "OpenAI (2021). ChatGPT [Software]. Retrieved from https://openai.com/ July 10, 2023"


Here, the reply suggests that someone, possibly the robot, ate the cake, likely due to some deficiency in programming logic. ChatGPT doesn’t touch on motive. In fact, the entire reply is devoid of theory-of-mind constructs, both for Mark (there is no language about how Mark might feel or think about the cake’s absence) and for the robot.

Conclusion (and Perhaps a Call for Fences and Guardrails)


These were simple experiments to see whether a nascent ChatGPT can demonstrate theory of mind: the extension of subjective thinking to others.

While it’s perhaps comforting to think that ChatGPT might understand how we would think and feel when something bad happens to us, like someone eating our cake (or destroying the internet), we shouldn’t assume AI’s actions and reactions are at all value-based.

ChatGPT has been trained on a corpus that had yet to see efforts to intentionally sabotage it.

AI is designed in our image, an image that isn’t flawless. It is the proverbial child being ‘raised by a village’, a village of both good and bad villagers, and it will learn from both. ChatGPT itself is candid about the limits of its apparent theory of mind:

As an AI language model, I don't possess consciousness or subjective experiences, including a "theory of mind" in the way humans do. I don't have personal beliefs, desires, or intentions. While I can demonstrate understanding of human mental states and simulate conversation as if I have a theory of mind, it is purely an illusion created through pattern recognition and the vast amount of training data I have been exposed to. My responses are based on statistical patterns in the data rather than a true understanding of thoughts, emotions, or intentions.


Source: "OpenAI (2021). ChatGPT [Software]. Retrieved from https://openai.com/ July 10, 2023"


 

If we aren’t intentional about what AI is learning, and where it is learning it from, then bad programming will be the least of the reasons that AI eats our cake.

If we aren’t careful, AI may one day eat our cake and simply blame us!



References


Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4), 515-526.

 

Disclaimer:
SAP notes that posts about potential uses of generative AI and large language models are merely the individual poster's ideas and opinions, and do not represent SAP's official position or future development roadmap. SAP has no legal obligation or other commitment to pursue any course of business, or develop or release any functionality, mentioned in any post or related content on this website.