-
Rehabilitation from musculoskeletal injuries focuses on reestablishing and monitoring muscle activation patterns to accurately produce force. The aim of this study is to explore the use of a novel low-powered wearable distributed Simultaneous Musculoskeletal Assessment with Real-Time Ultrasound (SMART-US) device to predict force during an isometric squat task. Participants (N = 5) performed maximum isometric squats under two medical imaging techniques: clinical musculoskeletal motion-mode (M-mode) ultrasound on the dominant vastus lateralis, and SMART-US sensors placed on the rectus femoris, vastus lateralis, medial hamstring, and vastus medialis. Ultrasound features were extracted, and a linear ridge regression model was used to predict ground reaction force. The ability of the ultrasound features to predict measured force was tested using either clinical M-mode, a single SMART-US sensor on the vastus lateralis (SMART-US: VL), rectus femoris (SMART-US: RF), medial hamstring (SMART-US: MH), or vastus medialis (SMART-US: VMO), or all four SMART-US sensors together (Distributed SMART-US). Model training showed that the Clinical M-mode and Distributed SMART-US models were both significantly different from the SMART-US: VL, SMART-US: MH, SMART-US: RF, and SMART-US: VMO models (p < 0.05). Model validation showed that the Distributed SMART-US model had an R2 of 0.80 ± 0.04 and was significantly different from SMART-US: VL but not from the Clinical M-mode model. In conclusion, a novel wearable distributed SMART-US system can predict ground reaction force using machine learning, demonstrating the feasibility of wearable ultrasound imaging for ground reaction force estimation.
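The abstract names the model but not its implementation. As a minimal sketch of the general technique, the following fits closed-form linear ridge regression on synthetic data standing in for concatenated ultrasound features from the four sensors; the feature dimensions, noise level, and regularization strength are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^-1 X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

def ridge_predict(X, w):
    return X @ w

rng = np.random.default_rng(0)
n_samples, n_features = 200, 8           # e.g. 2 features x 4 SMART-US sensors (hypothetical)
X = rng.normal(size=(n_samples, n_features))
true_w = rng.normal(size=n_features)
y = X @ true_w + 0.1 * rng.normal(size=n_samples)  # synthetic "force" signal

w = ridge_fit(X, y, alpha=0.1)
pred = ridge_predict(X, w)
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 on synthetic data: {r2:.3f}")
```

The ridge penalty keeps the weights stable when sensor features are correlated, as signals from neighboring quadriceps muscles plausibly are.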
-
Abstract
Objective: The use of electronic health records (EHRs) for clinical risk prediction is on the rise. However, in many practical settings, the limited availability of task-specific EHR data can restrict the application of standard machine learning pipelines. In this study, we investigate the potential of leveraging language models (LMs) as a means to incorporate supplementary domain knowledge for improving the performance of various EHR-based risk prediction tasks.
Methods: We propose two novel LM-based methods, namely “LLaMA2-EHR” and “Sent-e-Med.” Our focus is on utilizing the textual descriptions within structured EHRs to make risk predictions about future diagnoses. We conduct a comprehensive comparison with previous approaches across various data types and sizes.
Results: Experiments across 6 different methods and 3 separate risk prediction tasks reveal that employing LMs to represent structured EHRs, such as diagnostic histories, results in significant performance improvements when evaluated using standard metrics such as area under the receiver operating characteristic (ROC) curve and precision-recall (PR) curve. Additionally, they offer benefits such as few-shot learning, the ability to handle previously unseen medical concepts, and adaptability to various medical vocabularies. However, outcomes can be sensitive to the specific prompt used.
Conclusion: LMs encompass extensive embedded knowledge, making them valuable for the analysis of EHRs in the context of risk prediction. Nevertheless, it is important to exercise caution in their application, as ongoing safety concerns related to LMs persist and require continuous consideration.
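The abstract describes using the textual descriptions inside structured EHRs, but gives no implementation details. As a heavily hedged illustration of the general idea (not the paper's "LLaMA2-EHR" or "Sent-e-Med" methods), the sketch below serializes a coded diagnosis history into a textual prompt; the code-to-description mapping, the target condition, and the prompt template are all illustrative assumptions.

```python
# Hypothetical mapping from diagnosis codes to their textual descriptions.
ICD10_DESCRIPTIONS = {
    "E11.9": "Type 2 diabetes mellitus without complications",
    "I10":   "Essential (primary) hypertension",
    "E78.5": "Hyperlipidemia, unspecified",
}

def serialize_history(codes, target="heart failure"):
    """Turn a coded diagnosis history into a textual risk-prediction prompt.

    Codes without a known description pass through as-is, which is one way a
    text-based representation can cope with previously unseen concepts.
    """
    descriptions = [ICD10_DESCRIPTIONS.get(c, c) for c in codes]
    history = "; ".join(descriptions)
    return (f"Patient history: {history}. "
            f"Question: is this patient at elevated risk of {target}?")

prompt = serialize_history(["E11.9", "I10", "E78.5"])
print(prompt)
```

Because the representation is plain text rather than a fixed code vocabulary, the same pipeline can in principle absorb new codes or a different coding system, though, as the abstract cautions, results may be sensitive to how the prompt is phrased.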
-
Abstract
There have been significant advances in biosignal extraction techniques to drive external biomechatronic devices or to serve as inputs to sophisticated human-machine interfaces. The control signals are typically derived from biological signals such as myoelectric measurements made either on the surface of the skin or subcutaneously, and other biosignal sensing modalities are emerging. With improvements in sensing modalities and control algorithms, it is becoming possible to robustly control the target position of an end-effector, but it remains largely unknown to what extent these improvements can lead to naturalistic, human-like movement. In this paper, we sought to answer this question. We utilized a sensing paradigm called sonomyography, based on continuous ultrasound imaging of forearm muscles. Unlike myoelectric control strategies, which measure electrical activation and use the extracted signals to determine the velocity of an end-effector, sonomyography measures muscle deformation directly with ultrasound and uses the extracted signals to proportionally control the position of an end-effector. Previously, we showed that users were able to accurately and precisely perform a virtual target acquisition task using sonomyography. In this work, we investigate the time course of the control trajectories derived from sonomyography. We show that the trajectories users take to reach virtual targets exhibit the kinematic characteristics typical of biological limbs. Specifically, during a target acquisition task, the velocity profiles followed the minimum-jerk trajectory characteristic of point-to-point arm reaching movements, with similar time to target. In addition, the ultrasound-derived trajectories showed a systematic delay and scaling of peak movement velocity as movement distance increased.
We believe this is the first evaluation of the similarity between the control policies underlying coordinated movements of jointed limbs and those based on position control signals extracted at the level of individual muscles. These results have strong implications for the future development of control paradigms for assistive technologies.
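The minimum-jerk trajectory the abstract compares against has a standard closed form for point-to-point movements: x(t) = x0 + (xf − x0)(10τ³ − 15τ⁴ + 6τ⁵) with τ = t/T, whose velocity profile is a symmetric bell peaking at mid-movement. The sketch below evaluates it on illustrative values (not data from the study).

```python
import numpy as np

def min_jerk(x0, xf, T, t):
    """Minimum-jerk position: x0 + (xf - x0)(10 tau^3 - 15 tau^4 + 6 tau^5)."""
    tau = t / T
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def min_jerk_velocity(x0, xf, T, t):
    """Time derivative of min_jerk: a bell-shaped profile, zero at both ends."""
    tau = t / T
    return (xf - x0) / T * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)

T = 1.0                       # movement duration (illustrative)
t = np.linspace(0.0, T, 1001)
v = min_jerk_velocity(0.0, 1.0, T, t)
# Peak velocity is 15/8 * distance/T, reached exactly at mid-movement.
print(f"peak velocity {v.max():.4f} at t = {t[int(v.argmax())]:.2f} s")
```

The "scaling of peak movement velocity with movement distance" noted in the abstract falls out of this form directly: peak velocity is proportional to (xf − x0)/T.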
-
Chen, Zhuo (Ed.)
Opioid overdoses within the United States continue to rise and have been negatively impacting the social and economic status of the country. In order to effectively allocate resources and identify policy solutions to reduce the number of overdoses, it is important to understand the geographical differences in opioid overdose rates and their causes. In this study, we utilized data on emergency department opioid overdose (EDOOD) visits to explore the county-level spatio-temporal distribution of opioid overdose rates within the state of Virginia and their association with aggregate socio-ecological factors. The analyses were performed using a combination of techniques including Moran’s I and multilevel modeling. Using data from 2016–2021, we found that Virginia counties had notable differences in their EDOOD visit rates with significant neighborhood-level associations: many counties in the southwestern region were consistently identified as hotspots (areas with a higher concentration of EDOOD visits), whereas many counties in the northern region were consistently identified as coldspots (areas with a lower concentration of EDOOD visits). In most Virginia counties, EDOOD visit rates declined from 2017 to 2018. In more recent years (since 2019), the visit rates showed an increasing trend. The multilevel modeling revealed that changes in clinical care factors (i.e., access to care and quality of care) and socio-economic factors (i.e., levels of education, employment, income, family and social support, and community safety) were significantly associated with changes in EDOOD visit rates. The findings from this study have the potential to assist policymakers in proper resource planning, thereby improving health outcomes.
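Global Moran's I, the spatial autocorrelation statistic behind the hotspot/coldspot analysis, is I = (N/ΣW) · Σᵢⱼ Wᵢⱼ zᵢ zⱼ / Σᵢ zᵢ², where z are mean-centered rates and W is a spatial weights matrix. The sketch below computes it on a toy 4×4 grid with rook adjacency; the study itself used county-level rates, so the grid and weights here are purely illustrative.

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I: (N / sum(W)) * sum_ij W_ij z_i z_j / sum_i z_i^2."""
    z = values - values.mean()
    n = len(values)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

def rook_weights(rows, cols):
    """Binary adjacency matrix for a rows x cols grid (shared edges only)."""
    n = rows * cols
    W = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    W[i, rr * cols + cc] = 1.0
    return W

W = rook_weights(4, 4)
clustered = np.array([1.0] * 8 + [0.0] * 8)    # high rates clustered in the top half
checker = np.array([(r + c) % 2 for r in range(4) for c in range(4)], float)
print(f"clustered pattern: I = {morans_i(clustered, W):+.3f}")   # positive
print(f"dispersed pattern: I = {morans_i(checker, W):+.3f}")     # negative
```

Positive I indicates spatial clustering (neighboring areas have similar rates, as with the southwestern-Virginia hotspots); values near zero indicate spatial randomness, and negative values indicate dispersion.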
-
Ultrasound-based sensing of muscle deformation, known as sonomyography, has shown promise for accurately classifying the intended hand grasps of individuals with upper limb loss in offline settings. Building upon this previous work, we present the first demonstration of real-time prosthetic hand control using sonomyography to perform functional tasks. An individual with congenital bilateral limb absence was fitted with sockets containing a low-profile ultrasound transducer placed over forearm muscle tissue in the residual limbs. A classifier was trained using linear discriminant analysis to recognize ultrasound images of muscle contractions for three discrete hand configurations (rest, tripod grasp, index finger point) under a variety of arm positions designed to cover the reachable workspace. A prosthetic hand mounted to the socket was then controlled using this classifier. Using this real-time sonomyographic control, the participant was able to complete three functional tasks that required selecting different hand grasps in order to grasp and move one-inch wooden blocks over a broad range of arm positions. Additionally, these tests were successfully repeated without retraining the classifier across 3 hours of prosthesis use and following simulated donning and doffing of the socket. This study supports the feasibility of using sonomyography to control upper limb prostheses in real-world applications.
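The abstract names linear discriminant analysis as the classifier. As a minimal sketch of that technique (not the paper's implementation), the code below fits LDA with a pooled within-class covariance to synthetic Gaussian feature vectors standing in for flattened ultrasound images, with one class per hand configuration; the feature dimension, cluster separation, and regularization are illustrative assumptions.

```python
import numpy as np

class SimpleLDA:
    """LDA with a shared covariance estimate; discriminant functions are linear in x."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.means = np.array([X[y == k].mean(axis=0) for k in self.classes])
        self.priors = np.array([np.mean(y == k) for k in self.classes])
        d = X.shape[1]
        cov = np.zeros((d, d))
        for k, mu in zip(self.classes, self.means):
            Z = X[y == k] - mu
            cov += Z.T @ Z                      # pooled within-class scatter
        cov = cov / (len(X) - len(self.classes)) + 1e-6 * np.eye(d)  # regularized
        self.cov_inv = np.linalg.inv(cov)
        return self

    def predict(self, X):
        # delta_k(x) = x^T S^-1 mu_k - 0.5 mu_k^T S^-1 mu_k + log(prior_k)
        scores = (X @ self.cov_inv @ self.means.T
                  - 0.5 * np.sum(self.means @ self.cov_inv * self.means, axis=1)
                  + np.log(self.priors))
        return self.classes[np.argmax(scores, axis=1)]

rng = np.random.default_rng(1)
labels = ("rest", "tripod grasp", "index finger point")
centers = rng.normal(scale=3.0, size=(3, 10))   # 10-D feature vectors (hypothetical)
X = np.vstack([c + rng.normal(size=(50, 10)) for c in centers])
y = np.repeat(np.arange(3), 50)

model = SimpleLDA().fit(X, y)
acc = np.mean(model.predict(X) == y)
print(f"training accuracy: {acc:.2f}")
print("first sample classified as:", labels[model.predict(X[:1])[0]])
```

Because the discriminants are linear, prediction is a single matrix multiply per frame, which is consistent with the real-time requirement the abstract describes.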
-
Abstract
Technological advances in multi-articulated prosthetic hands have outpaced the development of methods to intuitively control these devices. In fact, prosthetic users often cite difficulty of use as a key contributing factor for abandoning their prostheses. To overcome the limitations of the currently pervasive myoelectric control strategies, namely unintuitive proportional control of multiple degrees-of-freedom, we propose a novel approach: proprioceptive sonomyographic control. Unlike myoelectric control strategies, which measure electrical activation of muscles and use the extracted signals to determine the velocity of an end-effector, our sonomyography-based strategy measures mechanical muscle deformation directly with ultrasound and uses the extracted signals to proportionally control the position of an end-effector. Therefore, our sonomyography-based control is congruent with a prosthetic user’s innate proprioception of muscle deformation in the residual limb. In this work, we evaluated proprioceptive sonomyographic control with 5 prosthetic users and 5 able-bodied participants in a virtual target achievement and holding task for 5 different hand motions. We observed that with limited training, the performance of prosthetic users was comparable to that of able-bodied participants, and we thus conclude that proprioceptive sonomyographic control is a robust and intuitive prosthetic control strategy.
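The position-versus-velocity distinction the abstract draws can be made concrete with a small sketch: in position control, the instantaneous deformation level itself sets the end-effector position, so relaxing the muscle returns the effector to rest. The signal values and calibration limits below are illustrative assumptions, not measurements from the study.

```python
def proportional_position(signal, rest_level, max_level, pos_min=0.0, pos_max=1.0):
    """Map an instantaneous muscle-deformation signal to an end-effector position.

    The signal is normalized between calibrated rest and maximal-contraction
    levels, clamped to [0, 1], and scaled to the position range. Holding a
    contraction level holds a position; no integration over time is involved,
    unlike velocity-based myoelectric control.
    """
    level = (signal - rest_level) / (max_level - rest_level)
    level = min(max(level, 0.0), 1.0)    # clamp to the calibrated range
    return pos_min + level * (pos_max - pos_min)

# A stream of deformation samples maps directly to positions:
for s in (0.2, 0.6, 0.6, 1.0):           # illustrative signal values
    print(f"signal {s:.1f} -> position {proportional_position(s, 0.2, 1.0):.2f}")
```

Note that holding a mid-level contraction holds a mid-range position, which is what makes the mapping congruent with the user's proprioception of sustained muscle deformation.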