This content will become publicly available on September 10, 2026

Title: A soft robotic device for rapid and self-guided intubation
Endotracheal intubation is a critical medical procedure for protecting a patient’s airway. Current intubation technology requires extensive anatomical knowledge, training, technical skill, and a clear view of the glottic opening. However, all of these may be limited during emergency care for trauma and cardiac arrest outside the hospital, where first-pass failure is nearly 35%. To address this challenge, we designed a soft robotic device to autonomously guide a breathing tube into the trachea with the goal of allowing rapid, repeatable, and safe intubation without the need for extensive training, skill, anatomical knowledge, or a glottic view. During initial device testing with highly trained users in a mannequin and a cadaver, we found a 100% success rate and an average intubation duration of under 8 s. We then conducted a preliminary study comparing the device with video laryngoscopy, in which prehospital medical providers with 5 min of device training intubated cadavers. When using the device, users achieved an 87% first-pass success rate and a 96% overall success rate, requiring an average of 1.1 attempts and 21 s for successful intubation, significantly (P = 0.008) faster than with video laryngoscopy. When using video laryngoscopy, the users achieved a 63% first-pass success rate and a 92% overall success rate, requiring an average of 1.6 attempts and 44 s for successful intubation. This preliminary study offers directions for future clinical studies, the next step in testing a device that could address the critical needs of emergency airway management and help democratize intubation.
Award ID(s): 1944816
PAR ID: 10637379
Author(s) / Creator(s):
Publisher / Repository: AAAS
Date Published:
Journal Name: Science Translational Medicine
Volume: 17
Issue: 815
ISSN: 1946-6234
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. ABSTRACT Medical procedures are an essential part of healthcare delivery, and the acquisition of procedural skills is a critical component of medical education. Unfortunately, procedural skill is not evenly distributed among medical providers. Skills may vary within departments or institutions, and across geographic regions, depending on the provider’s training and ongoing experience. We present a mixed reality real-time communication system to increase access to procedural skill training and to improve remote emergency assistance. Our system allows a remote expert to guide a local operator through a medical procedure. RGBD cameras capture a volumetric view of the local scene, including the patient, the operator, and the medical equipment. The volumetric capture is augmented onto the remote expert’s view to allow the expert to spatially guide the local operator using visual and verbal instructions. We evaluated our mixed reality communication system in a study in which experts teach the ultrasound-guided placement of a central venous catheter (CVC) to students in a simulation setting. The study compares state-of-the-art video communication against our system. The results indicate that our system enhances visual communication and offers new possibilities compared to video teleconference-based training.
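A rough sketch of the volumetric-capture step described above: back-projecting one RGBD frame into a colored 3D point cloud using pinhole camera intrinsics. The camera parameters, image size, and function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rgbd_to_point_cloud(depth_m, rgb, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an Nx3 point cloud with
    per-point RGB colors, using pinhole intrinsics (fx, fy, cx, cy).
    All parameter names and values here are illustrative assumptions."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel grid
    z = depth_m
    x = (u - cx) * z / fx                            # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy                            # Y = (v - cy) * Z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0                         # drop pixels with no depth
    return points[valid], colors[valid]

# Example with synthetic data (a real system would stream frames from an RGBD camera).
depth = np.full((480, 640), 1.5, dtype=np.float32)   # 1.5 m flat "scene"
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
pts, cols = rgbd_to_point_cloud(depth, rgb, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(pts.shape)  # (307200, 3)
```

In a system like the one described, frames of this kind would be captured from each RGBD camera, fused, and rendered into the remote expert's view for spatial guidance.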
  2. Abstract. With TikTok emerging as one of the most popular social media platforms, there is significant potential for science communicators to capitalize on this success and to share their science with a broad, engaged audience. While videos of chemistry and physics experiments are prominent among educational science content on TikTok, videos related to the geosciences are comparatively lacking, as is an analysis of what types of geoscience videos perform well on TikTok. To increase the visibility of the geosciences and geophysics on TikTok and to determine best strategies for geoscience communication on the app, we created a TikTok account called “Terra Explore” (@TerraExplore). The Terra Explore account is a joint effort between science communication specialists at UNAVCO, IRIS (Incorporated Research Institutions for Seismology), and OpenTopography. We produced 48 educational geoscience videos over a 4-month period between October 2021 and February 2022. We evaluated the performance of each video based on its reach, engagement, and average view duration to determine the qualities of a successful video. Our video topics primarily focused on seismology, earthquakes, topography, lidar (light detection and ranging), and GPS (Global Positioning System), in alignment with our organizational missions. Over this time period, our videos garnered over 2 million total views, and our account gained over 12 000 followers. The videos that received the most views received nearly all (∼ 97 %) of their views from the For You page, TikTok's algorithmic recommendation feed. We found that short videos (< 30 s) had a high average view duration, but longer videos (> 60 s) had the highest engagement rates. Lecture-style videos that were approximately 60 s in length had more success in both reach and engagement. Our videos that received the highest number of views featured content that was related to a recent newsworthy event (e.g., an earthquake) or that explained location-based geology of a recognizable area. Our results highlight the algorithm-driven nature of TikTok, which results in a low barrier to entry and success for new science communication creators.
  3. Sepsis is a severe medical illness with over 1.7 million cases reported each year in the United States. Early diagnosis of sepsis is critical for adequate treatment, but it remains a major challenge in healthcare due to the nonspecificity of the initial symptoms and the lack of currently available biomarkers with sufficient specificity or sensitivity for clinical practice. Wearable optical technologies such as photoplethysmography (PPG), which uses optical sensing to measure changes in blood volume in peripheral tissues, enable continuous monitoring. Identifying the modest physiological changes that indicate sepsis can be challenging, since they occur without an overt bodily reaction. Deep learning (DL) models can help close this gap in sepsis diagnosis and intervention. This study analyzes sepsis-related characteristics in PPG signals using a collection of waveform records from both sepsis and control cases. The proposed model consists of five layers: input sequence, long short-term memory (LSTM), fully connected, softmax, and classification. The LSTM layer extracts and filters features from cycles of PPG signals; the features then pass through a fully connected layer to be classified. We tested our LSTM-based model on 915 one-second intervals to identify and classify sepsis severity. Our LSTM-based model accurately detected sepsis (91.30% accuracy in training and 89.74% in testing). Sepsis severity categorization achieved an accuracy of 85.9% in training and 81.4% in testing. Multiple training runs were conducted to validate the model's detection capability. Preliminary results show that a deep learning model using an LSTM layer can detect and categorize sepsis from PPG data, potentially allowing real-time diagnosis and monitoring within a single cycle.
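A minimal sketch of the five-layer architecture described above (input sequence, LSTM, fully connected, softmax, classification), written in PyTorch. The hidden size, number of classes, and the assumed 125 Hz sampling rate are illustrative assumptions, not values reported in the study.

```python
import torch
import torch.nn as nn

class PPGSepsisLSTM(nn.Module):
    """Minimal sketch of an LSTM-based PPG classifier: input sequence ->
    LSTM -> fully connected -> softmax. Hidden size and class count are
    illustrative assumptions."""
    def __init__(self, n_features=1, hidden_size=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, time, features), e.g. one-second PPG windows
        _, (h_n, _) = self.lstm(x)             # final hidden state of the LSTM
        logits = self.fc(h_n[-1])              # fully connected layer
        return torch.softmax(logits, dim=-1)   # class probabilities

# Example: classify a batch of 8 one-second windows sampled at an assumed 125 Hz.
model = PPGSepsisLSTM()
windows = torch.randn(8, 125, 1)
probs = model(windows)
print(probs.shape)  # torch.Size([8, 2])
```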
  4. This dataset was used to determine hydrologic parameters influencing Cape Sable Seaside Sparrow (CSSS) mercury exposure and potential mercury effects on their reproductive success in the Florida Everglades. We collected breast feathers for total mercury determination from juvenile and adult CSSS during (or shortly after) three breeding seasons (March 1 to July 31) and monitored the same individuals' breeding performance (mate status, number of nest attempts, number of successful nest attempts, total productivity of nests, clutch size, total count of eggs, and hatch success). Hydrologic parameters (average water depths, drought length, water recession rate, and hydroperiod) were estimated using the Everglades Depth Estimation Network and in situ depth measurements. Data collection is complete. 
  5. Effective mosquito surveillance and control rely on rapid and accurate identification of mosquito vectors and confounding sympatric species. As adoption of modified mosquito (MM) control techniques has increased, the value of monitoring the success of interventions has gained recognition and has pushed the field away from traditional ‘spray and pray’ approaches. Field evaluation and monitoring of MM control techniques that target specific species require massive volumes of surveillance data involving species-level identifications. However, traditional surveillance methods remain time- and labor-intensive, requiring highly trained, experienced personnel. Health districts often lack the resources needed to collect essential data, and conventional entomological species identification involves a significant learning curve to produce consistent, high-accuracy data. These needs led us to develop MosID: a device that allows for high-accuracy mosquito species identification to enhance the capability and capacity of mosquito surveillance programs. The device features high-resolution optics and enables batch image capture and species identification of mosquito specimens using computer vision. While development is ongoing, we share an update on key metrics of the MosID system. The identification algorithm, tested internally across 16 species, achieved a macro F1-score of 98.4 ± 0.6% on a dataset of known species, unknown species used in training, and species reserved for testing (12 species / 1302 specimens, 12 species / 603 specimens, and 7 species / 222 specimens, respectively). Preliminary user testing showed specimens were processed with MosID at a rate ranging from 181 to 600 specimens per hour. We also discuss other metrics within technical scope, such as mosquito sex and fluorescence detection, that may further support MM programs.
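For reference, the macro F1-score reported above is the unweighted mean of per-class F1 scores, so rare species weigh as much as common ones. The snippet below is only an illustration of the metric, using scikit-learn and made-up species labels; it is not part of the MosID implementation.

```python
# Macro F1: average the per-class F1 scores without weighting by class frequency.
# The species labels here are hypothetical examples.
from sklearn.metrics import f1_score

y_true = ["Ae. aegypti", "Ae. aegypti", "Cx. quinquefasciatus", "An. gambiae", "An. gambiae"]
y_pred = ["Ae. aegypti", "Cx. quinquefasciatus", "Cx. quinquefasciatus", "An. gambiae", "An. gambiae"]

macro_f1 = f1_score(y_true, y_pred, average="macro")
print(f"macro F1 = {macro_f1:.3f}")
```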