Studies indicate that AI can be better than humans at detecting and responding to emotions and can provide effective emotional support. However, when people learn that a message was generated by AI, they tend to feel less heard, reflecting a bias against AI-generated empathy. With AI increasingly present in our lives, this research highlights the importance of understanding how to use it in ways that meet human emotional needs.
People want to “feel heard”, that is, to perceive that they are understood, validated, and valued. Being heard and understood affirms an individual’s reality and perceptions, with critical implications for mental and physical health.
While some individuals may feel heard through conversations with family, friends, or trained counselors, others may lack such access or may not want to discuss difficult issues with those close to them; these individuals may turn to strangers online to feel heard, or the need may simply go unmet.
One in four Americans report that they rarely or never feel understood by others. Making someone feel heard requires an investment of time and cognitive resources to accurately understand what is being conveyed and to affirm its value and importance.
Recent developments in AI raise the question of whether AI can help serve what in many ways seems to be a deeply human function: making a person feel heard. Current large language models (LLMs), such as GPT-4, can not only complete mathematics and coding tasks but also infer people’s mental states.
More intriguingly, recent studies suggest that such models can generate responses that exhibit even higher empathy levels than human experts.
In addressing this possibility, it is crucial to unpack two questions: First, to what extent does AI have the ability to generate responses that make human recipients feel heard? Second, to what extent will human recipients feel heard when they are aware that a response is coming from a non-human, non-conscious entity offering effortless feedback? This second question, in many ways, delves into the essence of what it means to feel heard.
That is, does feeling heard require closing a gap between two human beings, where one person experiences their worldview being understood by an individual with their sentient perspective? Or will a person feel heard when they see their views reiterated and validated, even if it’s done in a way that doesn’t require any “meeting of the minds”?
To examine these questions, researchers at the University of Southern California investigated people’s feelings of being heard, along with related perceptions and emotions, after they received a response from either an AI or a human.
They used a 2 (response source: human vs. AI) × 2 (label: human vs. AI) between-subjects design. An initial set of participants described a complex situation they were dealing with and the emotions they felt in that situation.
Each participant then received a response that was actually written either by another person or by Bing Chat; the label was manipulated by telling participants that the response came from a human or from Bing Chat, independently of its actual source. After reading the response, participants rated the degree to which they felt heard.
They also rated how accurately the response captured what they said and the level of understanding demonstrated by the respondent, which are important precursors to feeling heard.
Additionally, participants indicated how connected they felt to the respondent, a potential consequence of feeling heard. The researchers also measured participants’ emotions after reading the responses to explore whether AI responses might yield greater emotional benefits.
Finally, they sought to understand the differences between AI and human respondents by examining their empathic accuracy and the types of support and techniques they used.
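For readers who want to picture the method concretely, below is a minimal Python sketch, not the authors’ materials or analysis code, of how participants could be randomly assigned to the four source × label cells and how per-cell means of a “feeling heard” rating could be summarized. The rating generator is a toy placeholder: its effect directions mirror the reported pattern, but the names and numbers are illustrative assumptions only.

```python
# Illustrative sketch of a 2 (response source) x 2 (label) between-subjects design.
# Not the study's materials or code; the outcome generator is a toy placeholder.
import random
from statistics import mean

SOURCES = ["human", "AI"]   # who actually wrote the response
LABELS = ["human", "AI"]    # what participants were told about the writer

def assign_conditions(n_participants, seed=0):
    """Randomly assign each participant to one of the four source x label cells."""
    rng = random.Random(seed)
    return [{"id": i, "source": rng.choice(SOURCES), "label": rng.choice(LABELS)}
            for i in range(n_participants)]

def toy_feel_heard_rating(source, label, rng):
    """Toy outcome: a bit higher for AI-written responses, a bit lower for an AI label."""
    base = 4.0 + (0.5 if source == "AI" else 0.0) - (0.5 if label == "AI" else 0.0)
    return base + rng.gauss(0, 1)

if __name__ == "__main__":
    rng = random.Random(1)
    participants = assign_conditions(400)
    for p in participants:
        p["feel_heard"] = toy_feel_heard_rating(p["source"], p["label"], rng)

    # Cell means for the four source x label conditions
    for s in SOURCES:
        for lab in LABELS:
            cell = [p["feel_heard"] for p in participants
                    if p["source"] == s and p["label"] == lab]
            print(f"source={s:5s} label={lab:5s} mean feel-heard = {mean(cell):.2f}")
```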
The results were surprising. AI-generated responses elicited more positive reactions from recipients than human-generated ones: recipients felt more heard, perceived the response as capturing what they said more accurately, felt better understood by the respondent, and felt more connected to the respondent when the response was AI-generated rather than human-generated.
However, recipients reacted more positively when they believed the response came from another human rather than from AI; in other words, they devalued the same response when it was labeled as AI-generated rather than human-generated.
Putting these effects together, the condition that made people feel most heard and understood was an AI-generated response perceived as written by a human. The opposite was also true: people felt less heard by human-generated responses that were labeled as AI-generated.
Figure: Means of the dependent variables across the four conditions: (1) feeling heard, (2) accuracy of the response, (3) feeling understood by the respondent, and (4) connection with the respondent.
The researchers explored two main reasons why the “AI” label might leave a negative impression on people.
The first reason has to do with how people perceive the “mind” of AI. The less an AI is seen as capable of “thinking” (having agency) and “feeling” (having experience), the less its responses seem meaningful.
This means that for people who believe that AI is capable of some agency and experience, the negative perception due to the “AI” label might be weaker.
The second reason is a possible bias against AI, especially against generative AI operating in the territory of human emotions, which is usually reserved for people.
Because generative AI is still a new technology, some people react negatively to it, finding it risky or unsuitable for emotional interactions.
People with a more positive view of AI, however, showed less negative reaction to the “AI” label in responses.
In short, the research demonstrates that feeling heard is not only a result of receiving a response that demonstrates understanding, validation, and care, but is also influenced by the source of that response.
A negative attitude toward AI appeared to explain why people felt less heard when they knew a response came from AI: those with more positive attitudes toward AI were not influenced by whether the response was labeled as coming from AI.
This finding suggests that as people encounter and use AI more frequently, they may feel more positive about it and, as such, feel more heard by AI. It is crucial, however, to distinguish between feeling heard by AI and feeling connected to it.
READ MORE:
AI can help people feel heard, but an AI label diminishes this impact
Yidan Yin, Nan Jia, and Cheryl J. Wakslak
PNAS, Psychological and Cognitive Sciences, 121 (14), e2319112121 (2024)
Abstract:
People want to “feel heard” to perceive that they are understood, validated, and valued. Can AI serve the deeply human function of making others feel heard? Our research addresses two fundamental issues: Can AI generate responses that make human recipients feel heard, and how do human recipients react when they believe the response comes from AI? We conducted an experiment and a follow-up study to disentangle the effects of the actual source of a message and the presumed source. We found that AI-generated messages made recipients feel more heard than human-generated messages and that AI was better at detecting emotions. However, recipients felt less heard when they realized that a message came from AI (vs. human). Finally, in a follow-up study where the responses were rated by third-party raters, we found that compared with humans, AI demonstrated superior discipline in offering emotional support, a crucial element in making individuals feel heard, while avoiding excessive practical suggestions, which may be less effective in achieving this goal. Our research underscores the potential and limitations of AI in meeting human psychological needs. These findings suggest that while AI demonstrates enhanced capabilities to provide emotional support, the devaluation of AI responses poses a key challenge for effectively leveraging AI’s capabilities.