
Title: Real-Time Context-Aware Detection of Unsafe Events in Robot-Assisted Surgery
Cyber-physical systems for robotic surgery have enabled minimally invasive procedures with increased precision and shorter hospitalization. However, with the increasing complexity and connectivity of software and the major involvement of human operators in supervising surgical robots, significant challenges remain in ensuring patient safety. This paper presents a safety monitoring system that, given knowledge of the surgical task being performed by the surgeon, can detect safety-critical events in real time. Our approach integrates a surgical gesture classifier, which infers the operational context from the robot's time-series kinematics data, with a library of erroneous-gesture classifiers that, given a surgical gesture, can detect unsafe events. Our experiments using data from two surgical platforms show that the proposed system can detect unsafe events caused by accidental or malicious faults within an average reaction time window of 1,693 milliseconds with an F1 score of 0.88, and human errors within an average reaction time window of 57 milliseconds with an F1 score of 0.76.
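The two-stage design described above can be sketched as follows: a gesture classifier infers the operational context from a kinematics window, and a gesture-specific detector from the error library then flags unsafe events. The gesture names, thresholds, and detector rules below are illustrative placeholders, not the paper's trained classifiers.

```python
import numpy as np

def classify_gesture(window):
    """Toy context inference: label the window by its mean tool speed.
    (Gesture labels here are hypothetical.)"""
    speed = np.linalg.norm(np.diff(window, axis=0), axis=1).mean()
    return "needle_insertion" if speed < 0.5 else "needle_transfer"

# Library of per-gesture error detectors (thresholds are made up).
ERROR_DETECTORS = {
    "needle_insertion": lambda w: np.abs(w).max() > 2.0,                   # tip out of bounds
    "needle_transfer": lambda w: np.abs(np.diff(w, axis=0)).max() > 1.0,   # jerky motion
}

def monitor(window):
    """Return the inferred gesture and whether the window looks unsafe."""
    gesture = classify_gesture(window)
    return gesture, bool(ERROR_DETECTORS[gesture](window))
```

For example, `monitor(np.zeros((10, 3)))` classifies a stationary window as the slow gesture and reports it safe; the point of the two stages is that the same motion can be safe in one context and unsafe in another.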
Journal Name: 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)
Page Range or eLocation-ID: 385 to 397
Sponsoring Org: National Science Foundation
More Like this
  1. Robotic-assisted minimally invasive surgery (MIS) has enabled procedures with increased precision and dexterity, but surgical robots are still open loop and require surgeons to work with a tele-operation console providing only limited visual feedback. In this setting, mechanical failures, software faults, or human errors might lead to adverse events resulting in patient complications or fatalities. We argue that impending adverse events could be detected and mitigated by applying context-specific safety constraints on the motions of the robot. We present a context-aware safety monitoring system which segments a surgical task into subtasks using kinematics data and monitors safety constraints specific to each subtask. To test our hypothesis about context specificity of safety constraints, we analyze recorded demonstrations of dry-lab surgical tasks collected from the JIGSAWS database as well as from experiments we conducted on a Raven II surgical robot. Analysis of the trajectory data shows that each subtask of a given surgical procedure has consistent safety constraints across multiple demonstrations by different subjects. Our preliminary results show that violations of these safety constraints lead to unsafe events, and there is often sufficient time between the constraint violation and the safety-critical event to allow for a corrective action.
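The context-specific constraint idea above can be sketched as a per-subtask check: each subtask carries its own envelope, and a violation is flagged as soon as the tool leaves the envelope of the subtask currently being executed. The subtask names and bounds below are hypothetical, not constraints learned from the JIGSAWS or Raven II data.

```python
import numpy as np

# Hypothetical per-subtask workspace bounds (max distance from task origin).
SUBTASK_BOUNDS = {"reach": 3.0, "grasp": 1.0}

def first_violation(positions, subtasks):
    """Return the index of the first sample whose position violates the
    bound of its annotated subtask, or None if the trajectory is safe."""
    for i, (p, s) in enumerate(zip(positions, subtasks)):
        if np.linalg.norm(p) > SUBTASK_BOUNDS[s]:
            return i
    return None
```

Detecting the violation at this point, rather than at the adverse event itself, is what leaves time for a corrective action.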
  2. Opioid use disorder is a medical condition with major social and economic consequences. While ubiquitous physiological sensing technologies have been widely adopted and extensively used to monitor day-to-day activities and deliver targeted interventions to improve human health, the use of these technologies to detect drug use in natural environments has been largely underexplored. The long-term goal of our work is to develop a mobile technology system that can identify high-risk opioid-related events (i.e., development of tolerance in the setting of prescription opioid use, return-to-use events in the setting of opioid use disorder) and deploy just-in-time interventions to mitigate the risk of overdose morbidity and mortality. In the current paper, we take an initial step by asking a crucial question: Can opioid use be detected using physiological signals obtained from a wrist-mounted sensor? Thirty-six individuals who were admitted to the hospital for an acute painful condition and received opioid analgesics as part of their clinical care were enrolled. Subjects wore a noninvasive wrist sensor during this time (1-14 days) that continuously measured physiological signals (heart rate, skin temperature, accelerometry, electrodermal activity, and interbeat interval). We collected a total of 2070 hours (≈ 86 days) of physiological data and observed a total of 339 opioid administrations. Our results are encouraging and show that using a Channel-Temporal Attention TCN (CTA-TCN) model, we can detect an opioid administration in a time-window with an F1-score of 0.80, a specificity of 0.77, sensitivity of 0.80, and an AUC of 0.77. We also predict the exact moment of administration in this time-window with a normalized mean absolute error of 8.6% and R2 coefficient of 0.85.
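The abstract reports moment-of-administration error as a normalized mean absolute error. One plausible reading of that metric (our assumption, not the authors' code) is the average |predicted − true| offset divided by the window length, expressed as a fraction:

```python
def normalized_mae(predicted, true, window_len):
    """Mean absolute error between predicted and true event moments,
    normalized by the detection window length (illustrative definition)."""
    errors = [abs(p - t) for p, t in zip(predicted, true)]
    return sum(errors) / len(errors) / window_len
```

Under this reading, predictions off by an average of 2 samples within a 100-sample window give a normalized MAE of 2%.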
  3. Medical Cyber-physical Systems (MCPS) are vulnerable to accidental or malicious faults that can target their controllers and cause safety hazards and harm to patients. This paper proposes a combined model and data-driven approach for designing context-aware monitors that can detect early signs of hazards and mitigate them in MCPS. We present a framework for formal specification of unsafe system context using Signal Temporal Logic (STL) combined with an optimization method for patient-specific refinement of STL formulas based on real or simulated faulty data from the closed-loop system for the generation of monitor logic. We evaluate our approach in simulation using two state-of-the-art closed-loop Artificial Pancreas Systems (APS). The results show the context-aware monitor achieves up to 1.4 times increase in average hazard prediction accuracy (F1-score) over several baseline monitors, reduces false-positive and false-negative rates, and enables hazard mitigation with a 54% success rate while decreasing the average risk for patients.
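A minimal sketch of what monitoring an STL safety property over a finite trace looks like: for G (x > θ) ("x always stays above θ"), the quantitative robustness is min over t of x[t] − θ, where a positive value means the property holds with margin and a negative value means it is violated. The glucose-style numbers in the usage note are illustrative.

```python
def always_gt_robustness(trace, theta):
    """Robustness of the STL formula G (x > theta) over a finite trace."""
    return min(x - theta for x in trace)

def eventually_gt_robustness(trace, theta):
    """Robustness of F (x > theta): one sample above theta suffices."""
    return max(x - theta for x in trace)
```

For example, `always_gt_robustness([70, 80, 90], 65)` returns 5 (satisfied with margin 5). Patient-specific refinement as described above would then amount to tuning θ so that the robustness sign cleanly separates safe from faulty traces.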
  4. The development and validation of computational models to detect daily human behaviors (e.g., eating, smoking, brushing) using wearable devices requires labeled data collected from the natural field environment, with tight time synchronization of the micro-behaviors (e.g., start/end times of hand-to-mouth gestures during a smoking puff or an eating gesture) and the associated labels. Video data is increasingly being used for such label collection. Unfortunately, wearable devices and video cameras with independent (and drifting) clocks make tight time synchronization challenging. To address this issue, we present the Window Induced Shift Estimation method for Synchronization (SyncWISE) approach. We demonstrate the feasibility and effectiveness of our method by synchronizing the timestamps of a wearable camera and wearable accelerometer from 163 videos representing 45.2 hours of data from 21 participants enrolled in a real-world smoking cessation study. Our approach shows significant improvement over the state-of-the-art, even in the presence of high data loss, achieving 90% synchronization accuracy given a synchronization tolerance of 700 milliseconds. Our method also achieves state-of-the-art synchronization performance on the CMU-MMAC dataset.
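The underlying idea of estimating a clock offset between two drifting streams can be sketched with windowed cross-correlation: slide one signal against the other and pick the lag with the highest centered correlation. This is a generic illustration of shift estimation, not the SyncWISE algorithm itself; the search range and scoring are assumptions of this sketch.

```python
import numpy as np

def estimate_shift(ref, other, max_shift):
    """Return the integer lag s (in samples) maximizing the centered
    correlation between ref[t] and other[t + s]."""
    ref, other = np.asarray(ref, float), np.asarray(other, float)
    best_s, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        # Align ref[t] with other[t + s], truncating to the overlap.
        if s >= 0:
            a, b = ref[:len(ref) - s], other[s:]
        else:
            a, b = ref[-s:], other[:len(other) + s]
        n = min(len(a), len(b))
        a, b = a[:n], b[:n]
        score = float(np.dot(a - a.mean(), b - b.mean()))
        if score > best_score:
            best_s, best_score = s, score
    return best_s
```

In practice the lag in samples would be converted to milliseconds using the sensor's sampling rate, and repeated over many windows to track clock drift over time.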
  5. Despite significant developments in the design of surgical robots and automated techniques for objective evaluation of surgical skills, there are still challenges in ensuring safety in robot-assisted minimally-invasive surgery (RMIS). This paper presents a runtime monitoring system for the detection of executional errors during surgical tasks through the analysis of kinematic data. The proposed system incorporates dual Siamese neural networks and knowledge of surgical context, including surgical tasks and gestures, their distributional similarities, and common error modes, to learn the differences between normal and erroneous surgical trajectories from small training datasets. We evaluate the performance of the error detection using Siamese networks compared to single CNN and LSTM networks trained with different levels of contextual knowledge and training data, using the dry-lab demonstrations of the Suturing and Needle Passing tasks from the JIGSAWS dataset. Our results show that gesture-specific, task-nonspecific Siamese networks obtain micro F1 scores of 0.94 (Siamese-CNN) and 0.95 (Siamese-LSTM), and perform better than single CNN (0.86) and LSTM (0.87) networks. These Siamese networks also outperform gesture-nonspecific, task-specific Siamese-CNN and Siamese-LSTM models for Suturing and Needle Passing.
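The Siamese idea above reduces to a distance test in an embedding space: embed two trajectories and call the query erroneous when its embedding lies far from a known-normal reference. In this toy sketch a hand-rolled feature vector stands in for the trained CNN/LSTM twin networks, and the margin is made up.

```python
import numpy as np

def embed(traj):
    """Stand-in embedding: summary statistics of a 1-D trajectory.
    (A real Siamese monitor would use a learned network here.)"""
    traj = np.asarray(traj, dtype=float)
    return np.array([traj.mean(), traj.std(), np.abs(np.diff(traj)).max()])

def is_erroneous(query, reference, margin=1.0):
    """Distance-based decision in embedding space (margin is illustrative)."""
    return bool(np.linalg.norm(embed(query) - embed(reference)) > margin)
```

The appeal of this formulation for small datasets, as the abstract notes, is that the network only has to learn a similarity function rather than a full per-class decision boundary.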