

Search for: All records

Creators/Authors contains: "Marsic, Ivan"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. We describe an analysis of speech during time-critical, team-based medical work and its potential to indicate process delays. We analyzed speech intention and sentence types during 39 trauma resuscitations with delays in one of three major lifesaving interventions (LSIs): intravenous/intraosseous (IV/IO) line insertion, cardiopulmonary resuscitation (CPR), and intubation. We found a significant difference between patterns of speech during delays and speech during non-delayed work. The speech intention during CPR delays, however, differed from the other LSIs, suggesting that the context of speech must be considered. These findings will inform the design of a clinical decision support system (CDSS) that will use multiple sensor modalities to alert medical teams to delays in real time. We conclude with design implications and challenges associated with speech-based activity recognition in complex medical processes.
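The comparison of speech patterns during delayed vs. non-delayed work described above can be illustrated with a standard chi-square test of independence over a contingency table of speech-intention counts. This is a hedged sketch, not the authors' code, and every count below is made up for illustration.

```python
# Illustrative sketch: chi-square test of independence on hypothetical
# speech-intention counts during delayed vs. non-delayed work.
# All counts are invented; only the statistical recipe is standard.
observed = {
    "delayed":     {"question": 30, "command": 45, "statement": 25},
    "non_delayed": {"question": 50, "command": 20, "statement": 30},
}

intents = ["question", "command", "statement"]
row_totals = {c: sum(observed[c][i] for i in intents) for c in observed}
col_totals = {i: sum(observed[c][i] for c in observed) for i in intents}
grand = sum(row_totals.values())

# Chi-square statistic: sum over cells of (O - E)^2 / E,
# where E = row_total * col_total / grand_total.
chi2 = 0.0
for c in observed:
    for i in intents:
        expected = row_totals[c] * col_totals[i] / grand
        chi2 += (observed[c][i] - expected) ** 2 / expected

print(round(chi2, 2))  # 15.07 for these invented counts
```

A large statistic relative to the chi-square critical value (here, 2 degrees of freedom) would indicate that intention distribution depends on delay status, which is the kind of difference the abstract reports.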
  2. In clinical settings, most automatic recognition systems use visual or sensory data to recognize activities. These systems cannot recognize activities that rely on verbal assessment, lack visual cues, or do not use medical devices. We examined speech-based activity and activity-stage recognition in a clinical domain, making the following contributions. (1) We collected a high-quality dataset representing common activities and activity stages during actual trauma resuscitation events: the initial evaluation and treatment of critically injured patients. (2) We introduced a novel multimodal network based on the audio signal and a set of keywords that does not require a high-performing automatic speech recognition (ASR) engine. (3) We designed novel contextual modules to capture dynamic dependencies in team conversations about activities and stages during a complex workflow. (4) We introduced a data augmentation method, which simulates team communication by combining selected utterances and their audio clips, and showed that this method contributed to performance improvement in our data-limited scenario. In offline experiments, our proposed context-aware multimodal model achieved F1-scores of 73.2±0.8% and 78.1±1.1% for activity and activity-stage recognition, respectively. In online experiments, the performance declined about 10% for both recognition types when using utterance-level segmentation of the ASR output. The performance declined about 15% when we omitted the utterance-level segmentation. Our experiments showed the feasibility of speech-based activity and activity-stage recognition during dynamic clinical events.

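The data augmentation idea in contribution (4) can be sketched as combining randomly selected utterances from the same activity into synthetic training examples. This is an assumed, simplified illustration of the text-side mechanics only (the paper also combines the corresponding audio clips); the utterances and function name below are hypothetical.

```python
# Hedged sketch of utterance-combination data augmentation
# (not the paper's implementation; text side only).
import random

def augment(utterances, n_samples, k=2, seed=0):
    """Build synthetic team-communication samples by joining k
    randomly chosen utterances from one activity."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        picked = rng.sample(utterances, k)   # choose k distinct utterances
        samples.append(" ".join(picked))     # concatenate into one sample
    return samples

# Hypothetical utterances from an 'intubation' activity.
utts = ["tube is in", "check breath sounds", "give sedation", "secure the tube"]
aug = augment(utts, n_samples=3, k=2)
print(len(aug))  # 3 synthetic examples
```

In a data-limited setting like the one the abstract describes, such recombination multiplies the number of distinct training conversations without new annotation effort.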
  3. Designing computerized approaches to support complex teamwork requires an understanding of how activity-related information is relayed among team members. In this paper, we focus on verbal communication and describe a speech-based model that we developed for tracking activity progression during time-critical teamwork. We situated our study in the emergency medical domain of trauma resuscitation and transcribed speech from 104 audio recordings of actual resuscitations. Using the transcripts, we first studied the nature of speech during 34 clinically relevant activities. From this analysis, we identified 11 communicative events across three different stages of activity performance: before, during, and after. For each activity, we created a sequential ordering of the communicative events using the concept of narrative schemas. The final speech-based model emerged by extracting and aggregating generalized aspects of the 34 schemas. We evaluated the model performance by using 17 new transcripts and found that the model reliably recognized an activity stage in 98% of activity-related conversation instances. We conclude by discussing these results, their implications for designing computerized approaches that support complex teamwork, and their generalizability to other safety-critical domains.
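The core mapping described above, communicative events ordered into before/during/after stages, can be sketched as a small lookup plus a progression tracker. This is an assumed structure for illustration, not the published model; the event names below are hypothetical.

```python
# Minimal sketch (assumed, not the published model): map communicative
# events to activity stages and track how far an activity has progressed.
STAGE_OF_EVENT = {
    "order": "before", "acknowledgment": "before",
    "status_report": "during", "coordination": "during",
    "completion_report": "after", "confirmation": "after",
}
STAGE_ORDER = ["before", "during", "after"]

def track(events):
    """Return the furthest stage reached, ignoring unrecognized events."""
    furthest = -1
    for e in events:
        stage = STAGE_OF_EVENT.get(e)
        if stage is not None:
            furthest = max(furthest, STAGE_ORDER.index(stage))
    return STAGE_ORDER[furthest] if furthest >= 0 else None

print(track(["order", "status_report"]))  # during
```

Aggregating one such schema per activity, as the abstract describes, yields a model that can assign an incoming conversation snippet to an activity stage.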
  4. We propose TubeR: a simple solution for spatio-temporal video action detection. Different from existing methods that depend on either an offline actor detector or hand-designed actor-positional hypotheses like proposals or anchors, we propose to directly detect an action tubelet in a video by simultaneously performing action localization and recognition from a single representation. TubeR learns a set of tubelet queries and utilizes a tubelet-attention module to model the dynamic spatio-temporal nature of a video clip, which effectively reinforces the model capacity compared to using actor-positional hypotheses in the spatio-temporal space. For videos containing transitional states or scene changes, we propose a context-aware classification head that utilizes short-term and long-term context to strengthen action classification, and an action switch regression head for detecting the precise temporal action extent. TubeR directly produces action tubelets with variable lengths and even maintains good results for long video clips. TubeR outperforms the previous state of the art on the commonly used action detection datasets AVA, UCF101-24, and JHMDB51-21. Code will be available on GluonCV (https://cv.gluon.ai/).
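The query-based idea behind TubeR, a fixed set of learned tubelet queries cross-attending over a clip's spatio-temporal features, can be illustrated with a toy single-head attention computation. Shapes, names, and random inputs below are illustrative assumptions, not the actual model.

```python
# Toy sketch of tubelet-query cross-attention (illustrative only,
# not TubeR itself): each query pools clip features into one embedding
# from which box/class heads could then predict an action tubelet.
import numpy as np

rng = np.random.default_rng(0)
T, N, D, Q = 8, 49, 32, 4    # frames, tokens per frame, feature dim, queries

features = rng.normal(size=(T * N, D))   # flattened spatio-temporal tokens
queries = rng.normal(size=(Q, D))        # "learned" tubelet queries (random here)

def cross_attention(q, kv):
    scores = q @ kv.T / np.sqrt(kv.shape[1])         # (Q, T*N) scaled dot products
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over all tokens
    return weights @ kv                              # (Q, D) tubelet embeddings

emb = cross_attention(queries, features)
print(emb.shape)  # (4, 32)
```

Because the queries attend over all frames jointly rather than per-frame proposals, each output embedding can summarize an action across time, which is the intuition for producing variable-length tubelets from a single representation.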
  5. We describe an initial analysis of speech during team-based medical scenarios and its potential to indicate process delays in an emergency medical setting. We analyzed the speech of trauma resuscitation teams in cases with delayed intravenous/intraosseous (IV/IO) line placement, a significant contributor to delays during life-saving interventions. The insights gained from this analysis will inform the design of a clinical decision support system (CDSS) that will use multiple sensor modalities to alert medical teams to errors in real time. We contribute to the literature by determining how the intention and sentence type of each spoken line can support real-time, automatic detection of delays during time-critical team activities.