
This content will become publicly available on June 1, 2023

Title: Determining the Significant Kinematic Features for Characterizing Stress during Surgical Tasks Using Spatial Attention
It has been shown that intraoperative stress can have a negative effect on surgeons' skills during laparoscopic procedures. For novice surgeons, stressful conditions can lead to significantly higher velocity, acceleration, and jerk of the surgical instrument tips, resulting in faster but less smooth movements. However, it is still not clear which of these kinematic features (velocity, acceleration, or jerk) is the best marker for distinguishing normal from stressed conditions. Therefore, to find the kinematic feature most affected by intraoperative stress, we implemented a spatial attention-based Long Short-Term Memory (LSTM) classifier. In a prior IRB-approved experiment, we collected data from medical students performing an extended peg transfer task; participants were randomized into a control group and a group performing the task under external psychological stressors. In our prior work, we obtained "representative" normal or stressed movements from this dataset using kinematic data as the input. In this study, a spatial attention mechanism is used to describe the contribution of each kinematic feature to the classification of normal/stressed movements. We tested our classifier under Leave-One-User-Out (LOUO) cross-validation, and it reached an overall accuracy of 77.11% for classifying "representative" normal and stressed movements using kinematic features as the input. More importantly, we also studied the spatial attention extracted from the proposed classifier.
Velocity and acceleration on both sides had significantly higher attention for classifying a normal movement ([Formula: see text]); velocity ([Formula: see text]) and jerk ([Formula: see text]) of the nondominant hand had significantly higher attention for classifying a stressed movement. It is worth noting that the attention on nondominant-hand jerk showed the largest increase when moving from describing normal movements to describing stressed movements ([Formula: see text]). In general, we found that the jerk of the nondominant hand can characterize the stressed movements of novice surgeons more effectively.
Journal Name: Journal of Medical Robotics Research
Sponsoring Org: National Science Foundation
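The Leave-One-User-Out (LOUO) protocol used in the study above can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and toy dataset are hypothetical.

```python
def louo_splits(samples):
    """Leave-One-User-Out cross-validation: each fold holds out all
    movements from one user for testing and trains on everyone else,
    so the classifier is always evaluated on an unseen user."""
    users = sorted({user for user, _, _ in samples})
    for held_out in users:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test

# Toy records: (user, kinematic feature vector, label: 0 = normal, 1 = stressed)
data = [
    ("u1", [0.20, 0.10, 0.30], 0), ("u1", [0.90, 0.80, 1.20], 1),
    ("u2", [0.30, 0.20, 0.40], 0), ("u2", [1.10, 0.90, 1.50], 1),
    ("u3", [0.25, 0.15, 0.35], 0),
]

for user, train, test in louo_splits(data):
    print(f"hold out {user}: train={len(train)}, test={len(test)}")
```

In the study itself, each fold would train the spatial-attention LSTM on the training split and score the held-out user; averaging over folds yields the reported overall accuracy.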
More Like this
  1. Within the pycnocline, where diapycnal mixing is suppressed, both the vertical movement (uplift) of isopycnal surfaces and upward motion along sloping isopycnals supply nutrients to the euphotic layer, but the relative importance of each of these mechanisms is unknown. We present a method for decomposing vertical velocity w into two components in a Lagrangian frame: vertical velocity along sloping isopycnal surfaces [Formula: see text] and the adiabatic vertical velocity of isopycnal surfaces [Formula: see text]. We show that [Formula: see text], where [Formula: see text] is the isopycnal slope and [Formula: see text] is the geometric aspect ratio of the flow, and that [Formula: see text] accounts for 10%–25% of the total vertical velocity w for isopycnal slopes representative of the midlatitude pycnocline. We perform the decomposition of w in a process study model of a midlatitude eddying flow field generated with a range of isopycnal slopes. A spectral decomposition of the velocity components shows that while [Formula: see text] is the largest contributor to vertical velocity, [Formula: see text] is of comparable magnitude at horizontal scales less than about 10 km, that is, at submesoscales. Increasing the horizontal grid resolution of models is known to increase vertical velocity; this increase is disproportionately due to better resolution of [Formula: see text], as is shown here by comparing 1- and 4-km resolution model runs. Along-isopycnal vertical transport can be an important contributor to the vertical flux of tracers, including oxygen, nutrients, and chlorophyll, although we find weak covariance between vertical velocity and nutrient anomaly in our model.

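The velocity decomposition described above lost its expressions to `[Formula: see text]` placeholders; schematically, and with illustrative symbols that are not necessarily the authors' notation, it has the form:

```latex
w = w_{\mathrm{sl}} + w_{\mathrm{adia}}
```

where \(w_{\mathrm{sl}}\) is the vertical velocity along sloping isopycnal surfaces and \(w_{\mathrm{adia}}\) is the adiabatic vertical velocity of the isopycnal surfaces themselves. Per the abstract, the share of \(w\) carried by \(w_{\mathrm{sl}}\) (10%–25% at midlatitude pycnocline slopes) is set by the isopycnal slope relative to the geometric aspect ratio of the flow.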
  2. A reliable neural-machine interface is essential for humans to intuitively interact with advanced robotic hands in an unconstrained environment. Existing neural decoding approaches utilize either discrete hand gesture-based pattern recognition or continuous force decoding with one finger at a time. We developed a neural decoding technique that allowed continuous and concurrent prediction of forces of different fingers based on spinal motoneuron firing information. High-density skin-surface electromyogram (HD-EMG) signals of the finger extensor muscle were recorded while human participants produced isometric flexion forces in a dexterous manner (i.e., produced varying forces using either a single finger or multiple fingers concurrently). Motoneuron firing information was extracted from the EMG signals using a blind source separation technique, and each identified neuron was further classified as being associated with a given finger. The forces of individual fingers were then predicted concurrently by utilizing the corresponding motoneuron pool firing frequency of individual fingers. Compared with conventional approaches, our technique led to better prediction performance, i.e., a higher correlation ([Formula: see text] versus [Formula: see text]), a lower prediction error ([Formula: see text]% MVC versus [Formula: see text]% MVC), and a higher accuracy in finger state (rest/active) prediction ([Formula: see text]% versus [Formula: see text]%). Our decoding method demonstrated the possibility of classifying motoneurons for different fingers, which significantly alleviated the cross-talk issue of EMG recordings from neighboring hand muscles, and allowed the decoding of finger forces individually and concurrently. The outcomes offered a robust neural-machine interface that could allow users to intuitively control robotic hands in a dexterous manner.
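The final decoding step described above (per-finger force from motoneuron pool firing frequency) can be sketched as follows. The gain, offset, and spike timestamps are made-up illustrations; in the study, the firing information comes from blind source separation of HD-EMG and per-finger neuron classification, not from hand-written data.

```python
def pool_firing_rate(spike_trains, window_s):
    """Mean firing frequency (Hz) across a motoneuron pool: total spikes
    divided by (number of neurons * window length)."""
    if not spike_trains:
        return 0.0
    return sum(len(train) for train in spike_trains) / (len(spike_trains) * window_s)

# Hypothetical per-finger pools of spike timestamps within a 1 s window.
pools = {
    "index":  [[0.1, 0.3, 0.5, 0.9], [0.2, 0.6]],
    "middle": [[0.4], [0.7, 0.8]],
}

# Illustrative linear rate-to-force mapping (gain/offset values are invented).
GAIN, OFFSET = 2.5, 0.0
forces = {f: GAIN * pool_firing_rate(p, 1.0) + OFFSET for f, p in pools.items()}
print(forces)  # concurrent force estimates, one per finger
```

Because each finger has its own pool, the forces are decoded individually yet concurrently, which is the cross-talk advantage the abstract highlights.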
  3. The endoscopic camera of a surgical robot provides surgeons with a magnified 3D view of the surgical field, but repositioning it increases mental workload and operation time. Poor camera placement contributes to safety-critical events when surgical tools move out of the view of the camera. This paper presents a proof of concept of an autonomous camera system for the Raven II surgical robot that aims to reduce surgeon workload and improve safety by providing an optimal view of the workspace showing all objects of interest. This system uses transfer learning to localize and classify objects of interest within the view of a stereoscopic camera. The positions and centroid of the objects are estimated and a set of control rules determines the movement of the camera towards a more desired view. Our perception module had an accuracy of 61.21% overall for identifying objects of interest and was able to localize both graspers and multiple blocks in the environment. Comparison of the commands proposed by our system with the desired commands from a survey of 13 participants indicates that the autonomous camera system proposes appropriate movements for the tilt and pan of the camera.
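A rule-based pan/tilt decision of the kind described (objects' centroid versus image center) might look like the sketch below. The deadband value and command vocabulary are assumptions for illustration, not the Raven II system's actual control rules.

```python
def camera_command(centroid, frame_size, deadband=0.05):
    """Map the normalized offset of the objects-of-interest centroid from
    the image center to pan/tilt commands; offsets inside the deadband
    map to 'hold' so the camera does not chatter around small errors."""
    cx, cy = centroid
    w, h = frame_size
    dx = cx / w - 0.5   # + means centroid is right of center
    dy = cy / h - 0.5   # + means centroid is below center
    pan = "right" if dx > deadband else "left" if dx < -deadband else "hold"
    tilt = "down" if dy > deadband else "up" if dy < -deadband else "hold"
    return pan, tilt

# Centroid right of and above center in a 640x480 frame:
print(camera_command((400, 120), (640, 480)))  # -> ('right', 'up')
```

Driving the camera until both commands read 'hold' keeps all detected objects centered, which is the "more desired view" objective in the abstract.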
  4. Abstract

    Introduction: Tissue injuries are often associated with abnormal blood flow (BF). The ability to assess BF distributions in injured tissues enables objective evaluation of interventions and holds the potential to improve the acute management of these injuries on the battlefield. Materials and Methods: We have developed a novel speckle contrast diffuse correlation tomography (scDCT) system for noncontact 3D imaging of tissue BF distributions. In scDCT, a galvo mirror was used to remotely project near-infrared point light to different source positions, and an electron-multiplying charge-coupled device was used to detect boundary diffuse speckle contrasts. The normalized boundary data were then inserted into a modified Near-Infrared Fluorescence and Spectral Tomography program for 3D reconstruction of BF distributions. This article reports the first application of scDCT for noncontact 3D imaging of BF distributions in burn wounds. Results: Significantly lower BF values were observed in the burned areas/volumes compared to surrounding normal tissues. Conclusions: The unique noncontact 3D imaging capability makes scDCT applicable for intraoperative assessment of burns/wounds, without risk of infection and without interfering with the sterility of the surgical field. The portable scDCT device holds the potential to be used by surgeons in combat surgical hospitals to improve the acute management of battlefield burn injuries.

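The quantity underlying the "speckle contrasts" above is the standard ratio K = σ/⟨I⟩ computed over a local pixel window: where scatterers move faster, the speckle pattern blurs and K drops. A toy computation, with made-up intensity values (not data from the scDCT system):

```python
from statistics import mean, pstdev

def speckle_contrast(intensities):
    """Local speckle contrast K = sigma / <I> over a pixel window.
    Lower K indicates more motion (faster blood flow) in that region."""
    m = mean(intensities)
    return pstdev(intensities) / m if m else 0.0

static  = [100, 102, 98, 101, 99, 150, 60, 120]  # sharp speckle, little flow
flowing = [100, 101, 99, 100, 100, 102, 98, 100] # blurred speckle, more flow

print(speckle_contrast(static) > speckle_contrast(flowing))  # True
```

In scDCT, such normalized contrast values measured on the tissue boundary are the inputs to the tomographic reconstruction of the 3D flow distribution.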
  5. Objective The aim of this study is to measure drivers’ attention to preview and their velocity and acceleration tracking error to evaluate two- and three-dimensional displays for following a winding roadway. Background Display perturbation techniques and Fourier analysis of steering movements can be used to infer drivers’ spatio-temporal distribution of attention to preview. Fourier analysis of tracking error time histories provides measures of position, velocity, and acceleration error. Method Participants tracked a winding roadway with 1 s of preview in low-fidelity driving simulations. Position and rate-aided vehicle dynamics were paired with top-down and windshield displays of the roadway. Results For both vehicle dynamics, tracking was smoother with the windshield display. This display emphasizes nearer preview positions and has a closer correspondence to the control-theoretic optimal attentional distributions for these tasks than the top-down display. This correspondence is interpreted as a form of stimulus–response compatibility. The position error and attentional signal-to-noise ratios did not differ between the two displays with position control, but with more complex rate-aided control much higher position error and much lower attentional signal-to-noise ratios occurred with the top-down display. Conclusion Display-driven influences on the distribution of attention may facilitate tracking with preview when they are similar to optimal attentional distributions derived from control theory. Application Display perturbation techniques can be used to assess spatially distributed attention to evaluate displays and secondary tasks in the context of driving. This methodology can supplement eye movement measurements to determine what information is guiding drivers’ actions.
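The position/velocity/acceleration error measures described above can be approximated in the time domain, using finite differences in place of the paper's Fourier analysis of tracking-error time histories; the sampling rate and error-signal parameters below are illustrative, not from the study.

```python
import math

def rms(xs):
    """Root-mean-square of a sampled signal."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def diff(xs, dt):
    """Finite-difference derivative of a sampled signal."""
    return [(b - a) / dt for a, b in zip(xs, xs[1:])]

dt = 0.05  # 20 Hz sampling (assumed)
t = [i * dt for i in range(200)]
# Hypothetical lateral position error: slow drift plus a faster wobble.
err = [0.3 * math.sin(2 * math.pi * 0.2 * ti) +
       0.05 * math.sin(2 * math.pi * 2.0 * ti) for ti in t]

vel_err = diff(err, dt)        # velocity tracking error
acc_err = diff(vel_err, dt)    # acceleration tracking error
print(rms(err), rms(vel_err), rms(acc_err))
```

Differentiation amplifies the high-frequency component relative to the drift, which is why velocity and acceleration error emphasize the less smooth aspects of tracking that distinguished the two displays.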