Assessing the Feasibility of Speech-Based Activity Recognition in Dynamic Medical Settings
We describe an experiment conducted with three domain experts to understand how well they can recognize types and performance stages of activities using speech data transcribed from verbal communications during dynamic medical teamwork. The insights gained from this experiment will inform the design of an automatic activity recognition system to alert medical teams to process deviations in real time. We contribute to the literature by (1) characterizing how domain experts perceive the dynamics of activity-related speech, and (2) identifying the challenges associated with system design for speech-based activity recognition in complex team-based work settings.
- Award ID(s): 1763509
- PAR ID: 10118250
- Date Published:
- Journal Name: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems
- Page Range / eLocation ID: LBW0225
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
We describe an analysis of speech during time-critical, team-based medical work and its potential to indicate process delays. We analyzed speech intention and sentence types during 39 trauma resuscitations with delays in one of three major lifesaving interventions (LSIs): intravenous/intraosseous (IV/IO) line insertion, cardiopulmonary resuscitation (CPR), and intubation. We found a significant difference in patterns of speech during delays vs. speech during non-delayed work. The speech intention during CPR delays, however, differed from the other LSIs, suggesting that the context of speech must be considered. These findings will inform the design of a clinical decision support system (CDSS) that will use multiple sensor modalities to alert medical teams to delays in real time. We conclude with design implications and challenges associated with speech-based activity recognition in complex medical processes.
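To illustrate the kind of comparison such an analysis involves, the sketch below contrasts the distribution of speech-intention labels during delayed vs. non-delayed intervals with a chi-square test of independence. The intention labels and counts are hypothetical placeholders, not the study's coding scheme or data.

```python
# Hypothetical illustration: compare speech-intention distributions during
# delayed vs. non-delayed work with a chi-square test of independence.
# The intention labels and counts below are made up for demonstration only.
from scipy.stats import chi2_contingency

intention_labels = ["question", "command", "report", "unrelated"]
counts = [
    [34, 51, 22, 18],  # utterances observed during delayed intervals (placeholder)
    [20, 73, 40, 9],   # utterances observed during non-delayed work (placeholder)
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")
if p_value < 0.05:
    print("Speech-intention distribution differs between delayed and non-delayed work.")
```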
In clinical settings, most automatic recognition systems use visual or sensory data to recognize activities. These systems cannot recognize activities that rely on verbal assessment, lack visual cues, or do not use medical devices. We examined speech-based activity and activity-stage recognition in a clinical domain, making the following contributions. (1) We collected a high-quality dataset representing common activities and activity stages during actual trauma resuscitation events: the initial evaluation and treatment of critically injured patients. (2) We introduced a novel multimodal network based on the audio signal and a set of keywords that does not require a high-performing automatic speech recognition (ASR) engine. (3) We designed novel contextual modules to capture dynamic dependencies in team conversations about activities and stages during a complex workflow. (4) We introduced a data augmentation method that simulates team communication by combining selected utterances and their audio clips, and showed that this method contributed to performance improvement in our data-limited scenario. In offline experiments, our proposed context-aware multimodal model achieved F1-scores of 73.2±0.8% and 78.1±1.1% for activity and activity-stage recognition, respectively. In online experiments, the performance declined by about 10% for both recognition types when using utterance-level segmentation of the ASR output, and by about 15% when we omitted the utterance-level segmentation. Our experiments showed the feasibility of speech-based activity and activity-stage recognition during dynamic clinical events.
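To make the multimodal idea concrete, here is a minimal PyTorch-style sketch of fusing a per-utterance audio encoder with a keyword-presence vector and modeling conversational context with a recurrent layer. This is not the authors' architecture: the feature dimensions, keyword-vector size, and the GRU stand-in for the paper's contextual modules are assumptions for illustration only.

```python
# Sketch (not the paper's network): fuse per-utterance audio features with a
# keyword-presence vector, then model conversational context across utterances.
import torch
import torch.nn as nn

class SpeechActivityClassifier(nn.Module):
    def __init__(self, n_mels=40, n_keywords=50, n_activities=10, hidden=128):
        super().__init__()
        # Audio branch: encode a log-mel spectrogram per utterance.
        self.audio_encoder = nn.GRU(n_mels, hidden, batch_first=True)
        # Keyword branch: multi-hot vector marking which keywords were spotted.
        self.keyword_encoder = nn.Sequential(nn.Linear(n_keywords, hidden), nn.ReLU())
        # Context module: GRU over the sequence of utterances in a conversation.
        self.context = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_activities)

    def forward(self, mel, keywords):
        # mel:      (batch, n_utterances, time, n_mels)
        # keywords: (batch, n_utterances, n_keywords)
        b, u, t, m = mel.shape
        _, h_audio = self.audio_encoder(mel.reshape(b * u, t, m))
        audio_feat = h_audio[-1].reshape(b, u, -1)            # (b, u, hidden)
        kw_feat = self.keyword_encoder(keywords)              # (b, u, hidden)
        fused, _ = self.context(torch.cat([audio_feat, kw_feat], dim=-1))
        return self.classifier(fused)                         # per-utterance activity logits

# Example with random tensors: 2 conversations, 6 utterances each.
model = SpeechActivityClassifier()
logits = model(torch.randn(2, 6, 120, 40), torch.randint(0, 2, (2, 6, 50)).float())
print(logits.shape)  # torch.Size([2, 6, 10])
```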
An important dimension of classroom group dynamics and collaboration is how much each person contributes to the discussion. With the goal of distinguishing teachers' speech from children's speech and measuring how much each student speaks, we investigated how automatic speaker diarization can be built to handle real-world classroom group discussions. We examined key design considerations such as the granularity of speaker assignment, speech enhancement techniques, voice activity detection, and embedding assignment methods to find an effective configuration. The best speaker diarization system we found was based on the ECAPA-TDNN speaker embedding model and used Whisper automatic speech recognition to identify speech segments. The diarization error rate (DER) on challenging, noisy, spontaneous classroom data was around 34%, and the correlation between estimated and human-annotated speaking time per student reached 0.62. The accuracy of distinguishing teachers' speech from children's speech was 69.17%. We evaluated the system for potential accuracy bias across people of different skin tones and genders and found no statistically significant differences across either dimension. Thus, the presented diarization system has the potential to benefit educational research and to provide teachers and students with useful feedback to better understand their classroom dynamics.
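A minimal sketch of such a pipeline appears below: Whisper supplies speech segments, a pretrained ECAPA-TDNN model embeds each segment, and agglomerative clustering assigns speaker labels. It is a simplified illustration under assumed defaults (a placeholder audio path, a known number of speakers), not the evaluated system or its tuned configuration.

```python
# Simplified Whisper + ECAPA-TDNN diarization sketch; the file path and fixed
# speaker count are assumptions, and library import paths may vary by version.
import torch
import torchaudio
import whisper
from speechbrain.pretrained import EncoderClassifier
from sklearn.cluster import AgglomerativeClustering

AUDIO_PATH = "classroom_discussion.wav"   # placeholder path
N_SPEAKERS = 4                            # assumed known for this sketch

# 1. Use Whisper's transcription output to obtain speech segments (start/end times).
asr = whisper.load_model("base")
segments = asr.transcribe(AUDIO_PATH)["segments"]

# 2. Embed each segment with a pretrained ECAPA-TDNN speaker encoder.
encoder = EncoderClassifier.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb")
wav, sr = torchaudio.load(AUDIO_PATH)
wav = torchaudio.functional.resample(wav.mean(dim=0, keepdim=True), sr, 16000)

embeddings = []
with torch.no_grad():
    for seg in segments:
        start, end = int(seg["start"] * 16000), int(seg["end"] * 16000)
        emb = encoder.encode_batch(wav[:, start:end])   # (1, 1, emb_dim)
        embeddings.append(emb.squeeze().cpu().numpy())

# 3. Cluster segment embeddings into speakers.
labels = AgglomerativeClustering(n_clusters=N_SPEAKERS).fit_predict(embeddings)
for seg, spk in zip(segments, labels):
    print(f"{seg['start']:7.2f}-{seg['end']:7.2f}s  speaker {spk}: {seg['text'].strip()}")
```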
We describe an initial analysis of speech during team-based medical scenarios and its potential to indicate process delays in an emergency medical setting. We analyzed the speech of trauma resuscitation teams in cases with delayed intravenous/intraosseous (IV/IO) line placement, a significant contributor to delays during lifesaving interventions. The insights gained from this analysis will inform the design of a clinical decision support system (CDSS) that will use multiple sensor modalities to alert medical teams to errors in real time. We contribute to the literature by determining how the intention and sentence type of each utterance can support real-time, automatic detection of delays during time-critical team activities.
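As one hypothetical example of turning utterance intentions into a real-time signal, the sketch below applies a sliding-window rule that raises an alert when intentions suggestive of stalled work dominate recent speech. The intention labels, window size, and threshold are illustrative assumptions, not the CDSS design described above.

```python
# Illustrative sketch: a sliding-window check over labeled utterances that flags
# a possible IV/IO delay. Labels, window size, and threshold are assumptions.
from collections import deque

DELAY_HINT_INTENTIONS = {"question", "repeated_command", "status_request"}  # hypothetical labels
WINDOW = 10          # number of recent utterances to consider
THRESHOLD = 0.5      # fraction of delay-hinting intentions that triggers an alert

recent = deque(maxlen=WINDOW)

def on_utterance(intention: str) -> bool:
    """Add one labeled utterance; return True if a delay alert should fire."""
    recent.append(intention)
    if len(recent) < WINDOW:
        return False
    hits = sum(1 for i in recent if i in DELAY_HINT_INTENTIONS)
    return hits / WINDOW >= THRESHOLD

# Example stream of intention labels (hypothetical):
stream = ["report", "command", "question", "status_request", "question",
          "repeated_command", "question", "status_request", "question", "question"]
for label in stream:
    if on_utterance(label):
        print("Possible delay: check IV/IO line placement")
```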

