To Bot or Not to Bot?: Analysing Mental Health Data Disclosures

Taylor, Debbie, Melvin, Clare ORCID: https://orcid.org/0000-0001-5621-3417, Aung, Min Hane and Asif, Rameez (2024) To Bot or Not to Bot?: Analysing Mental Health Data Disclosures. In: International Conference on Human-Computer Interaction, 2024-06-29 - 2024-07-04, Washington.

PDF (Accepted Version) - Restricted to Repository staff only until 1 June 2025.

Abstract

Disclosure of personal information about wellbeing and mental health is a nuanced situation requiring trust between agents. Current methods for initial mental health assessments are time- and resource-intensive. With increases in demand for mental health services and decreases in funding and staffing levels, this paper explores whether conversational agents can be sufficiently 'trusted' to collect the sensitive data disclosed in an initial mental health assessment, thereby reducing the workload for trained professionals. An initial study identified the desired characteristics of a conversational agent designed for mental health assessment purposes and produced a MoSCoW design framework of desirable features. A second study tested the framework by investigating whether a conversational agent displaying these desirable human-like features could establish sufficient trust to collect data, comparable to completing online forms sent from a mental health service provider or requested through social media. Participants (n = 236; female = 58%, non-binary = 5%, prefer not to say = 1%, age 18–80+ yrs) were recruited from a UK mental health service provider and through social media. Of the participants, 50% (n = 126) engaged with the bespoke conversational agent to disclose sensitive personal information in an initial mental health assessment; the remaining participants provided the information by completing the online forms through social media or from a mental health service provider. Results indicate that a conversational agent can be used to collect sensitive mental health data for initial assessment. Whilst such a tool may not be appropriate for all individuals and demographics, the conversational agent shows promise for reducing the administrative workload of those in the mental health profession, thus increasing resources for treatment and therapy.
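A note on the MoSCoW framework referred to in the abstract: MoSCoW prioritisation sorts requirements into Must have, Should have, Could have, and Won't have (this time) categories. The sketch below is a minimal, assumed representation of such a framework in Python; the abstract does not list the paper's actual features, so every feature name here is a hypothetical placeholder.

    # Minimal sketch of a MoSCoW prioritisation structure (assumed
    # representation; the paper's actual feature list is not given in
    # the abstract, so all feature names are hypothetical placeholders).
    from dataclasses import dataclass, field
    from enum import Enum

    class Priority(Enum):
        MUST = "Must have"
        SHOULD = "Should have"
        COULD = "Could have"
        WONT = "Won't have (this time)"

    @dataclass
    class Feature:
        name: str
        priority: Priority

    @dataclass
    class MoSCoWFramework:
        features: list[Feature] = field(default_factory=list)

        def by_priority(self, priority: Priority) -> list[str]:
            # Return the names of all features at the given priority level.
            return [f.name for f in self.features if f.priority is priority]

    # Hypothetical example entries, for illustration only.
    framework = MoSCoWFramework([
        Feature("empathetic acknowledgement of disclosures", Priority.MUST),
        Feature("clear explanation of data handling and consent", Priority.MUST),
        Feature("human-like conversational turn-taking", Priority.SHOULD),
        Feature("optional small talk before assessment questions", Priority.COULD),
    ])

    print(framework.by_priority(Priority.MUST))

The sketch only illustrates the prioritisation structure a MoSCoW framework captures, not the bespoke conversational agent the study built from it.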

Item Type: Conference or Workshop Item (Paper)
Additional Information: Publisher Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
Uncontrolled Keywords: SDG 3 - Good Health and Well-being
Faculty \ School: Faculty of Social Sciences > School of Psychology
Faculty of Science > School of Computing Sciences
UEA Research Groups: Faculty of Science > Research Groups > Cyber Security Privacy and Trust Laboratory
Faculty of Science > Research Groups > Norwich Epidemiology Centre
Faculty of Medicine and Health Sciences > Research Groups > Norwich Epidemiology Centre
Faculty of Science > Research Groups > Smart Emerging Technologies
Faculty of Science > Research Groups > Colour and Imaging Lab
Faculty of Science > Research Groups > Centre for Photonics and Quantum Science
Depositing User: LivePure Connector
Date Deposited: 03 Dec 2024 01:11
Last Modified: 03 Dec 2024 01:11
URI: https://ueaeprints.uea.ac.uk/id/eprint/97843
DOI: 10.1007/978-3-031-61379-1_7
