-
This research introduces an approach to automating the segmentation and quantification of nuclei in fluorescent images with deep learning. To overcome inherent challenges such as variations in pixel intensity, noisy boundaries, and overlapping edges, our pipeline integrates the U-Net architecture with state-of-the-art CNN models such as EfficientNet, retaining the efficiency of U-Net while harnessing EfficientNet's stronger feature extraction. Crucially, we train exclusively on high-quality confocal images generated in-house, deliberately avoiding the pitfalls associated with lower-quality publicly available synthetic data. Our training dataset comprises over 3000 nucleus boundaries, each annotated manually to ensure precision and accuracy in the learning process. Post-processing further refines the segmentation results and provides morphological quantification for each segmented nucleus. In a comprehensive evaluation, the model attains an F1-score of 87% and an Intersection over Union (IoU) of 80%. Its robustness across datasets from various sources indicates broad applicability for automated nucleus extraction and quantification from fluorescent images. This methodology holds significant promise for advancing research across multiple domains by enabling automated analysis of fluorescent imagery and, with it, a deeper understanding of the underlying biological processes.
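The abstract does not include code, but the U-Net/EfficientNet fusion it describes can be approximated with the open-source segmentation_models_pytorch library. The sketch below is a minimal illustration under assumed settings (EfficientNet-B0 encoder, Dice loss, Adam at a 1e-4 learning rate), not the authors' actual pipeline or hyperparameters.

```python
# Minimal sketch (not the authors' exact pipeline): a U-Net with an
# EfficientNet encoder for binary nucleus segmentation, built with the
# segmentation_models_pytorch library. The EfficientNet variant, loss,
# and optimizer settings are illustrative assumptions.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="efficientnet-b0",   # EfficientNet backbone as the U-Net encoder
    encoder_weights="imagenet",       # transfer learning from ImageNet
    in_channels=1,                    # single-channel fluorescent image
    classes=1,                        # nucleus vs. background mask
)

loss_fn = smp.losses.DiceLoss(mode="binary")   # overlap-based segmentation loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, masks):
    """One optimization step on a batch of (B, 1, H, W) images and masks."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Thresholding the sigmoid of the logits then yields binary masks from which per-nucleus morphology (area, perimeter, and similar measures) can be computed in post-processing.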
-
Abstract B-mode ultrasound (US) is often used to noninvasively measure skeletal muscle architecture, which contains information about human intent. Features extracted from B-mode images can help improve closed-loop human–robot interaction control when using rehabilitation or assistive devices. The traditional manual approach to inferring muscle structural features from US images is laborious, time-consuming, and subjective across investigators. This paper proposes a clustering-based detection method that mimics a well-trained human expert in identifying the fascicles and the aponeurosis and, from them, computing the pennation angle. The clustering-based architecture assumes that muscle fibers have tubular characteristics and is robust for low-frequency image streams. We compared the proposed algorithm to two mature benchmark techniques, UltraTrack and ImageJ. On our dataset (20 Hz frame rate), the proposed approach achieved higher accuracy, closely matching the human expert. The method shows promising potential for automatic detection of muscle fascicle orientation, facilitating applications in biomechanics modeling, rehabilitation robot control design, and neuromuscular disease diagnosis with low-frequency data streams.
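As an illustration of the general idea of clustering line orientations, rather than the authors' specific algorithm, the sketch below detects line segments in a B-mode frame with OpenCV's probabilistic Hough transform and splits their orientations into two k-means clusters (roughly horizontal aponeurosis versus oblique fascicles); the pennation angle is then taken as the difference between the cluster means. All thresholds and the two-cluster assumption are illustrative.

```python
# Simplified sketch (not the paper's method): estimate a pennation angle by
# detecting line segments and clustering their orientations into two groups
# (aponeurosis vs. fascicles).
import numpy as np
import cv2
from sklearn.cluster import KMeans

def pennation_angle(frame_gray):
    """frame_gray: 2-D uint8 ultrasound image. Returns the angle in degrees, or None."""
    edges = cv2.Canny(frame_gray, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                               minLineLength=30, maxLineGap=5)
    if segments is None or len(segments) < 2:
        return None

    # Orientation of each detected segment, in degrees.
    angles = np.array([np.degrees(np.arctan2(y2 - y1, x2 - x1))
                       for x1, y1, x2, y2 in segments[:, 0]]).reshape(-1, 1)

    # Two clusters: near-horizontal aponeurosis and oblique fascicles.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(angles)
    mean0 = angles[labels == 0].mean()
    mean1 = angles[labels == 1].mean()
    return abs(mean0 - mean1)   # angle between the two orientation groups
```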
-
Abstract Robotic assistive or rehabilitative devices are promising aids for people with neurological disorders, as they help regain normative function in both the upper and lower limbs. However, it remains challenging to accurately estimate human intent or residual effort non-invasively when using these robotic devices. In this article, we propose a deep learning approach that uses brightness-mode (B-mode) ultrasound (US) imaging of skeletal muscles to predict the ankle joint net plantarflexion moment during walking. The designed structure of the customized deep convolutional neural networks (CNNs) guarantees the convergence and robustness of the deep learning approach. We investigated the influence of the US imaging region of interest (ROI) on the net plantarflexion moment prediction performance, and compared the CNN-based moment prediction using B-mode US against sEMG spectrum imaging with the same ROI size. Experimental results from eight young participants walking on a treadmill at multiple speeds verified the improved accuracy of the proposed US imaging + deep learning approach for net joint moment prediction. With the same CNN structure, and compared to prediction from sEMG spectrum imaging, US imaging significantly reduced the normalized prediction root mean square error by 37.55% (p < .001) and increased the prediction coefficient of determination by 20.13% (p < .001). The findings show that the US imaging + deep learning approach personalizes the assessment of voluntary human joint effort, which can be incorporated into assistive or rehabilitative devices to improve clinical performance under an assist-as-needed control strategy.
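The paper's customized CNN is not reproduced here; the sketch below only illustrates the overall setup of a convolutional regressor that maps a single-channel B-mode ROI to one scalar (the net plantarflexion moment) and is trained with a mean-squared-error loss. Layer sizes, the adaptive pooling, and the optimizer settings are assumptions, not the published architecture.

```python
# Minimal sketch (illustrative layer sizes, not the paper's customized CNN):
# a small convolutional regressor mapping a single-channel B-mode ROI to one
# scalar, the net plantarflexion moment.
import torch
import torch.nn as nn

class MomentCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # makes the head ROI-size agnostic
        )
        self.regressor = nn.Linear(64, 1)     # single output: joint moment

    def forward(self, x):                     # x: (B, 1, H, W) B-mode ROI
        z = self.features(x).flatten(1)
        return self.regressor(z)

model = MomentCNN()
criterion = nn.MSELoss()                      # regression loss on the net moment
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```

With such a regressor, normalized RMSE and the coefficient of determination can be computed on held-out gait cycles to compare US-based and sEMG-based inputs, as the study does.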
-
A hybrid exoskeleton comprising a powered exoskeleton and functional electrical stimulation (FES) is a promising technology for restoring standing and walking functions after a neurological injury. Its shared control remains challenging because joint torques must be distributed optimally between FES and the powered exoskeleton while compensating for FES-induced muscle fatigue and ensuring performance despite highly nonlinear and uncertain skeletal muscle behavior. This study develops a bi-level hierarchical control design for shared control of a powered exoskeleton and FES to overcome these challenges. A higher-level neural network-based iterative learning controller (NNILC) is derived to generate the torques needed to drive the hybrid system. A low-level model predictive control (MPC)-based allocation strategy then optimally distributes the torque contributions between FES and the exoskeleton's knee motors based on the muscle fatigue and recovery characteristics of a participant's quadriceps muscles. A Lyapunov-like stability analysis proves global asymptotic tracking of state-dependent desired joint trajectories. Experimental results on four non-disabled participants validate the effectiveness of the proposed NNILC-MPC framework. The root mean square error (RMSE) of the knee joint and the hip joint trajectories was reduced by 71.96% and 74.57%, respectively, in the fourth sit-to-stand iteration compared with the first.
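As a heavily simplified illustration of the low-level allocation step, and not the authors' full NNILC-MPC formulation (which optimizes over a prediction horizon with fatigue and recovery dynamics), the sketch below splits a desired knee torque between FES and the exoskeleton motor with a single-step quadratic program in cvxpy, penalizing FES more as an assumed fatigue state grows. The weights, torque limits, and fatigue scaling are illustrative.

```python
# Highly simplified sketch (single-step allocation, not the authors' NNILC-MPC):
# split a desired knee torque between FES and the exoskeleton motor with a
# small quadratic program, penalizing FES use more as modeled fatigue grows.
# Weights and torque limits below are assumptions.
import cvxpy as cp

def allocate_torque(tau_desired, fatigue, tau_fes_max=15.0, tau_motor_max=40.0):
    """fatigue in [0, 1]: 0 = fresh muscle, 1 = fully fatigued."""
    tau_fes = cp.Variable()
    tau_motor = cp.Variable()

    # Penalize FES effort more heavily as fatigue increases, so the motor
    # picks up the remaining torque demand.
    w_fes = 1.0 + 10.0 * fatigue
    cost = w_fes * cp.square(tau_fes) + cp.square(tau_motor)

    constraints = [
        tau_fes + tau_motor == tau_desired,              # track the higher-level command
        cp.abs(tau_fes) <= tau_fes_max * (1.0 - fatigue),  # fatigued muscle produces less
        cp.abs(tau_motor) <= tau_motor_max,
    ]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return tau_fes.value, tau_motor.value

# Example: 20 N·m requested at moderate fatigue.
print(allocate_torque(20.0, fatigue=0.4))
```

In the study itself, the higher-level NNILC supplies the desired torque and the MPC allocation accounts for the identified fatigue and recovery dynamics of each participant's quadriceps; this sketch only conveys the shared-control trade-off in one time step.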