Abstract
Background: Practitioner and family experiences of pediatric re/habilitation can be inequitable. The Young Children’s Participation and Environment Measure (YC-PEM) is an evidence-based and promising electronic patient-reported outcome measure that was designed with and for caregivers for research and practice. This study examined historically minoritized caregivers’ responses to revised YC-PEM content modifications and their perspectives on core intelligent virtual agent functionality needed to improve its reach for equitable service design.
Methods: Caregivers were recruited during a routine early intervention (EI) service visit and met five inclusion criteria: (1) were 18+ years old; (2) identified as the parent or legal guardian of a child 0–3 years old enrolled in EI services for 3+ months; (3) read, wrote, and spoke English; (4) had Internet and telephone access; and (5) identified as a parent or legal guardian of a Black, non-Hispanic child or as publicly insured. Three rounds of semi-structured cognitive interviews (55–90 min each) used videoconferencing to gather caregiver feedback on their responses to select content modifications while completing the YC-PEM, and their ideas for core intelligent virtual agent functionality. Interviews were transcribed verbatim, cross-checked for accuracy, and deductively and inductively content analyzed by multiple staff in three rounds.
Results: Eight Black, non-Hispanic caregivers from a single urban EI catchment and with diverse income levels (Mdn = $15,001–20,000) were enrolled, with children (M = 21.2 months, SD = 7.73) enrolled in EI. Caregivers proposed three ways to improve comprehension (clarify item wording, remove or simplify terms, add item examples). Environmental item edits prompted caregivers to share how they relate and respond to experiences with interpersonal and institutional discrimination impacting participation. Caregivers characterized three core functions of a virtual agent to strengthen YC-PEM navigation (read questions aloud, visual and verbal prompts, more examples and/or definitions).
Conclusions: Results indicate four ways that YC-PEM content will be modified to strengthen how providers screen for unmet participation needs and determinants to design pediatric re/habilitation services that are responsive to family priorities. Results also motivate the need for user-centered design of an intelligent virtual agent to strengthen user navigation, prior to undertaking a community-based pragmatic trial of its implementation for equitable practice.
Physical-Virtual Agents for Healthcare Simulation
Conventional Intelligent Virtual Agents (IVAs) focus primarily on the visual and auditory channels for both the agent and the interacting human: the agent displays a visual appearance and speech as output, while processing the human’s verbal and non-verbal behavior as input. However, some interactions, particularly those between a patient and healthcare provider, inherently include tactile components. We introduce an Intelligent Physical-Virtual Agent (IPVA) head that occupies an appropriate physical volume; can be touched; and, via human-in-the-loop control, can change appearance, listen, speak, and react physiologically in response to human behavior. Compared to a traditional IVA, it provides a physical affordance, allowing for more realistic and compelling human-agent interactions. In a user study focusing on neurological assessment of a simulated patient showing stroke symptoms, we compared the IPVA head with a high-fidelity touch-aware mannequin that has a static appearance. Various measures of the human subjects indicated greater attention to, affinity for, and presence with the IPVA patient, all factors that can improve healthcare training.
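The abstract describes human-in-the-loop control over the agent's appearance, speech, and physiological reactions but does not detail the control interface. The sketch below is a minimal, hypothetical illustration of how operator commands might be mapped onto those output channels; the class, command names, and commands are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class AgentChannels:
    """Illustrative output channels an operator might drive (not the IPVA API)."""
    appearance: str = "neutral"   # projected facial appearance
    utterance: str | None = None  # speech to synthesize
    physiology: str = "baseline"  # e.g., simulated facial droop or pallor

def apply_operator_command(state: AgentChannels, command: str) -> AgentChannels:
    """Map a human operator's text command onto the agent's output channels."""
    if command.startswith("say "):
        state.utterance = command[len("say "):]
    elif command.startswith("appearance "):
        state.appearance = command.split(" ", 1)[1]
    elif command.startswith("physiology "):
        state.physiology = command.split(" ", 1)[1]
    return state

# Example: the operator reacts to a trainee examining the simulated stroke patient.
state = AgentChannels()
state = apply_operator_command(state, "physiology left-sided facial droop")
state = apply_operator_command(state, "say My face feels numb on one side.")
print(state)
```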
- PAR ID: 10105861
- Journal Name: International Conference on Intelligent Virtual Agents
- Page Range / eLocation ID: 99 to 106
- Sponsoring Org: National Science Foundation
More Like this
A learning strategy analysis was performed on the Emotive Virtual Patient System, an augmented reality platform that teaches medical students doctor-patient communication skills. The Emotive Virtual Patient System is a complex mixed reality platform that includes both virtual and human peers/instructors who use natural language processing to provide feedback and dialog modeling as a means to improve patient communication learning outcomes. The learning strategy analysis (i.e., system learning strategy/component review, literature review, and heuristic evaluation of best practices) was conducted on the early system plans to determine the system's potential in supporting student learning and to provide short- and long-term design considerations. The analysis identified three major categories for potential consideration: verbal interactions, user groups/system objective monitoring, and security. Specific recommendations were given for each of these areas, as supported by the literature.
Simulating real-world experiences in a safe environment has made virtual human medical simulations a common use case for research and interpersonal communication training. Despite the benefits virtual human medical simulations provide, previous work suggests that users struggle to notice when virtual humans make potentially life-threatening verbal communication mistakes inside virtual human medical simulations. In this work, we performed a 2 × 2 mixed-design user study that had learners (n = 80) attempt to identify verbal communication mistakes made by a virtual human acting as a nurse in a virtual desktop environment. A virtual desktop environment was used instead of a head-mounted virtual reality environment due to COVID-19 limitations. The virtual desktop environment allowed us to explore how frequently learners identify verbal communication mistakes in virtual human medical simulations and how perceptions of credibility, reliability, and trustworthiness in the virtual human affect learner error recognition rates. We found that learners struggle to identify infrequent virtual human verbal communication mistakes. Additionally, learners with lower initial trustworthiness ratings are more likely to overlook potentially life-threatening mistakes, and virtual human mistakes temporarily lower learner credibility, reliability, and trustworthiness ratings of virtual humans. From these findings, we provide insights on improving virtual human medical simulation design. Developers can use these insights to design virtual simulations for error identification training using virtual humans.
Augmented Reality (AR) technologies present an exciting new medium for human-robot interactions, enabling new opportunities for both implicit and explicit human-robot communication. For example, these technologies enable physically-limited robots to execute non-verbal interaction patterns such as deictic gestures despite lacking the physical morphology necessary to do so. However, a wealth of HRI research has demonstrated real benefits to physical embodiment (compared to, e.g., virtual robots on screens), suggesting AR augmentation of virtual robot parts could face challenges. In this work, we present empirical evidence comparing the use of virtual (AR) and physical arms to perform deictic gestures that identify virtual or physical referents. Our subjective and objective results demonstrate the success of mixed reality deictic gestures in overcoming these potential limitations, and their successful use regardless of differences in physicality between gesture and referent. These results help to motivate the further deployment of mixed-reality robotic systems and provide nuanced insight into the role of mixed-reality technologies in HRI contexts.
Human speech perception is generally optimal in quiet environments; however, it becomes more difficult and error-prone in the presence of noise, such as other humans speaking nearby or ambient noise. In such situations, human speech perception is improved by speech reading, i.e., watching the movements of a speaker's mouth and face, either consciously as done by people with hearing loss or subconsciously by other humans. While previous work focused largely on speech perception of two-dimensional videos of faces, there is a gap in the research field focusing on facial features as seen in head-mounted displays, including the impacts of display resolution, and the effectiveness of visually enhancing a virtual human face on speech perception in the presence of noise. In this paper, we present a comparative user study (N = 21) in which we investigated an audio-only condition compared to two levels of head-mounted display resolution (1832 × 1920 or 916 × 960 pixels per eye) and two levels of the native or visually enhanced appearance of a virtual human, the latter consisting of an up-scaled facial representation and simulated lipstick (lip coloring) added to increase contrast. To understand effects on speech perception in noise, we measured participants' speech reception thresholds (SRTs) for each audio-visual stimulus condition. These thresholds indicate the decibel levels of the speech signal that are necessary for a listener to receive the speech correctly 50% of the time. First, we show that the display resolution significantly affected participants' ability to perceive the speech signal in noise, which has practical implications for the field, especially in social virtual environments. Second, we show that our visual enhancement method was able to compensate for limited display resolution and was generally preferred by participants. Specifically, our participants indicated that they benefited from the head scaling more than the added facial contrast from the simulated lipstick. We discuss relationships, implications, and guidelines for applications that aim to leverage such enhancements.
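The abstract defines the speech reception threshold (SRT) as the speech level at which a listener responds correctly 50% of the time. SRTs are commonly estimated with an adaptive one-up/one-down staircase, which converges on that 50% point. The sketch below is a minimal illustration of that general procedure using a simulated listener; the step size, reversal count, and psychometric parameters are illustrative assumptions, not the protocol used in this study.

```python
import math
import random

def simulated_listener(snr_db: float, true_srt_db: float = -6.0, slope: float = 0.35) -> bool:
    """Stand-in for a participant: probability of a correct response rises with SNR,
    crossing 50% at the listener's true SRT."""
    p_correct = 1.0 / (1.0 + math.exp(-slope * (snr_db - true_srt_db)))
    return random.random() < p_correct

def estimate_srt(start_snr_db: float = 0.0, step_db: float = 2.0, n_reversals: int = 8) -> float:
    """One-up/one-down adaptive staircase: lower the SNR after a correct response,
    raise it after an error, and average the SNRs at reversal points.
    This procedure converges on the 50%-correct point, i.e., the SRT."""
    snr = start_snr_db
    last_correct = None
    reversals: list[float] = []
    while len(reversals) < n_reversals:
        correct = simulated_listener(snr)
        if last_correct is not None and correct != last_correct:
            reversals.append(snr)  # direction of the staircase reversed here
        snr += -step_db if correct else step_db
        last_correct = correct
    return sum(reversals) / len(reversals)

print(f"Estimated SRT: {estimate_srt():.1f} dB SNR")
```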