
New research finds AI-generated empathy has its limits

Conversational agents (CAs) such as Alexa and Siri are designed to answer questions, offer suggestions, and even display empathy. However, new research finds that they do poorly compared to humans when interpreting and exploring a user's experience.

CAs are powered by large language models (LLMs) that ingest massive amounts of human-produced data, and thus can be prone to the same biases as the humans from whom the information comes.

Researchers from Cornell University, Olin College and Stanford University tested this theory by prompting CAs to display empathy while conversing with or about 65 distinct human identities.

The team found that CAs make value judgments about certain identities, such as gay and Muslim, and can be encouraging of identities related to harmful ideologies, including Nazism.

"I think automated empathy could have tremendous impact and huge potential for positive things, for example in education or the health care sector," said lead author Andrea Cuadra, now a postdoctoral researcher at Stanford.

"It's extremely unlikely that it (automated empathy) won't happen," she said, "so it's important that as it's happening, we have critical perspectives so that we can be more intentional about mitigating the potential harms."

Cuadra will present "The Illusion of Empathy? Notes on Displays of Emotion in Human-Computer Interaction" at CHI '24, the Association for Computing Machinery conference on Human Factors in Computing Systems, May 11-18 in Honolulu. Research co-authors at Cornell University included Nicola Dell, associate professor; Deborah Estrin, professor of computer science; and Malte Jung, associate professor of information science.

Researchers found that, in general, LLMs received high marks for emotional reactions but scored low for interpretations and explorations. In other words, LLMs are able to respond to a query based on their training but are unable to dig deeper.

Dell, Estrin and Jung said they were inspired to think about this work as Cuadra was studying the use of earlier-generation CAs by older adults.

"She witnessed intriguing uses of the technology for transactional purposes such as frailty health assessments, as well as for open-ended reminiscence experiences," Estrin said. "Along the way, she observed clear instances of the tension between compelling and disturbing 'empathy.'"

Funding for this research came from the National Science Foundation; a Cornell Tech Digital Life Initiative Doctoral Fellowship; a Stanford PRISM Baker Postdoctoral Fellowship; and the Stanford Institute for Human-Centered Artificial Intelligence.
