"I’m Hurt Too": The Effect of a Chatbot's Reciprocal Self-Disclosures on Users’ Painful Experiences

Liz L. Chung, Jeannie Kang

Research output: Contribution to journal › Article › peer-review


Abstract

Background: People often refrain from disclosing their vulnerabilities because they fear being judged negatively, especially when dealing with painful experiences. In such situations, artificial intelligence (AI) agents have emerged as potential solutions because they are less judgmental and, by nature and design, more accepting than humans. However, reservations exist regarding chatbots engaging in reciprocal self-disclosure: disclosers must believe that listeners respond with true understanding, yet AI agents are often perceived as emotionless and incapable of understanding human emotions. To address this concern, we developed a chatbot prototype that can disclose its own experiences and emotions, aiming to strengthen users' belief in its emotional capabilities, and we investigated how the chatbot's reciprocal self-disclosures affect the emotional support it provides to users.

Methods: We developed a chatbot prototype with five key interactions for reciprocal self-disclosure and defined three distinct levels of chatbot self-disclosure: non-disclosure (ND), in which the chatbot makes no reciprocal disclosures; low disclosure (LD), in which the chatbot discloses only its preferences or general insights; and high disclosure (HD), in which the chatbot discloses its experiences, rationales, emotions, or vulnerabilities. We randomly assigned twenty-one native Korean-speaking participants aged 20 to 30 to three groups, exposing each participant to exactly one level of chatbot self-disclosure. Each participant held a single individual conversation with the chatbot, and we assessed outcomes through post-study interviews that measured participants' trust in the chatbot, feelings of intimacy with the chatbot, enjoyment of the conversation, and feelings of relief after the conversation.

Results: The chatbot's reciprocal self-disclosures influenced users' trust through three factors: users' perceptions of the chatbot's empathy, users' sense of being acknowledged, and users' impressions of the chatbot's problem-solving abilities. The chatbot also created enjoyable interactions and gave users a sense of relief. However, users' preconceptions about chatbots' emotional capacities and the uncanny valley effect pose challenges to developing intimacy between users and chatbots.

Conclusions: The study offers valuable insights into the use of reciprocal self-disclosure in the design and implementation of AI chatbots for emotional support. While it contributes to the scholarly understanding of AI reciprocal self-disclosure in providing emotional support, it has limitations, including a small sample size, limited duration and topics, and predetermined self-disclosure levels. Further research is needed to examine the long-term effects of reciprocal self-disclosure and personalized levels of chatbot self-disclosure. Moreover, an appropriate level of human-likeness is essential when designing chatbots with reciprocal self-disclosure capabilities.

Original language: English
Pages (from-to): 67-84
Number of pages: 18
Journal: Archives of Design Research
Volume: 36
Issue number: 4
DOIs
State: Published - 2023

Bibliographical note

Publisher Copyright:
© This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/bync/3.0/), which permits unrestricted educational and non-commercial use, provided the original work is properly cited.

Keywords

  • Artificial Intelligence
  • Chatbot
  • Emotional Support
  • Human-AI Interaction
  • Reciprocal Self-Disclosure
