Title: Brain–machine interface based on deep learning to control asynchronously a lower-limb robotic exoskeleton: a case-of-study
Abstract

Background: This research focused on the development of a motor imagery (MI) based brain–machine interface (BMI) using deep learning algorithms to control a lower-limb robotic exoskeleton. The study aimed to overcome the limitations of traditional BMI approaches by leveraging the advantages of deep learning, such as automated feature extraction and transfer learning. The experimental protocol to evaluate the BMI was designed as asynchronous, allowing subjects to perform mental tasks at their own will.

Methods: A total of five healthy able-bodied subjects were enrolled in this study to participate in a series of experimental sessions. The brain signals from two of these sessions were used to develop a generic deep learning model through transfer learning. Subsequently, this model was fine-tuned during the remaining sessions and subjected to evaluation. Three distinct deep learning approaches were compared: one that did not undergo fine-tuning, another that fine-tuned all layers of the model, and a third that fine-tuned only the last three layers. In the evaluation phase, participants controlled the exoskeleton in closed loop exclusively through their neural activity, using the second deep learning approach for the decoding.

Results: The three deep learning approaches were compared against an approach based on spatial features that was trained for each subject and experimental session, and demonstrated superior performance. Interestingly, the deep learning approach without fine-tuning achieved performance comparable to the features-based approach, indicating that a generic model trained on data from different individuals and previous sessions can yield similar efficacy. Among the three deep learning approaches, fine-tuning all layer weights achieved the highest performance.

Conclusion: This research represents an initial stride toward future calibration-free methods. Despite efforts to reduce calibration time by leveraging data from other subjects, complete elimination proved unattainable. The study's findings are significant for advancing calibration-free approaches, offering the promise of minimizing the need for training trials. Furthermore, the experimental evaluation protocol aimed to replicate real-life scenarios, granting participants a higher degree of autonomy in decisions such as walking or stopping gait.
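The three calibration strategies compared in the abstract (no fine-tuning, fine-tuning all layers, fine-tuning only the last three layers) can be sketched in PyTorch by toggling which parameters receive gradients. This is a minimal illustration only: the small model below is a hypothetical stand-in, not the decoder architecture used in the paper.

```python
import torch.nn as nn

# Hypothetical stand-in for the generic MI decoder (the paper's actual
# architecture is not specified here): 8 EEG channels, 125 samples per window.
model = nn.Sequential(
    nn.Conv1d(8, 16, kernel_size=5),  # temporal convolution over EEG
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 121, 32),          # 125 - 5 + 1 = 121 time steps remain
    nn.ReLU(),
    nn.Linear(32, 2),                 # two mental states, e.g. MI vs. rest
)

def set_finetuning(model, mode):
    """Configure which layers are updated during subject-specific calibration.

    mode = "none"  -> keep the generic model frozen (no fine-tuning)
    mode = "all"   -> fine-tune every layer
    mode = "last3" -> fine-tune only the last three layers
    """
    layers = list(model.children())
    for i, layer in enumerate(layers):
        trainable = (mode == "all") or (mode == "last3" and i >= len(layers) - 3)
        for p in layer.parameters():
            p.requires_grad = trainable

# Example: the partial fine-tuning strategy (third approach in the abstract).
set_finetuning(model, "last3")
```

An optimizer built afterwards would then be given only the parameters with `requires_grad=True`, so the frozen layers retain the generic, cross-subject weights.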
Award ID(s): 2137255
PAR ID: 10509604
Publisher / Repository: Springer Nature
Journal Name: Journal of NeuroEngineering and Rehabilitation
Volume: 21
Issue: 1
ISSN: 1743-0003
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Background: Democratized access to safe and effective robotic neurorehabilitation for stroke survivors requires innovative, affordable solutions that can be used not only in clinics but also at home. This requires the high usability of the devices involved to minimize costs associated with support from physical therapists or technicians. Methods: This paper describes the early findings of the NeuroExo brain–machine interface (BMI) with an upper-limb robotic exoskeleton for stroke neurorehabilitation. This early feasibility study consisted of a six-week protocol, with an initial training and BMI calibration phase at the clinic followed by 60 sessions of neuromotor therapy at the homes of the participants. Pre- and post-assessments were used to assess users' compliance and system performance. Results: Participants achieved a compliance rate between 21% and 100%, with an average of 69%, while maintaining adequate signal quality and a positive perceived BMI performance during home usage, with an average Likert scale score of four out of five. Moreover, adequate signal quality was maintained for four out of five participants throughout the protocol. These findings provide valuable insights into essential components for comprehensive rehabilitation therapy for stroke survivors. Furthermore, linear mixed-effects statistical models showed a significant reduction in trial duration (p-value < 0.02) and concomitant changes in brain patterns (p-value < 0.02). Conclusions: The analysis of these findings suggests that a low-cost, safe, simple-to-use BMI system for at-home stroke rehabilitation is feasible.
  2. Abstract Aims: Neural network classifiers can detect aortic stenosis (AS) using limited cardiac ultrasound images. While networks perform very well using cart-based imaging, they have never been tested or fine-tuned for use with focused cardiac ultrasound (FoCUS) acquisitions obtained on handheld ultrasound devices. Methods and results: Prospective study performed at Tufts Medical Center. All patients ≥65 years of age referred for clinically indicated transthoracic echocardiography (TTE) were eligible for inclusion. Parasternal long axis and parasternal short axis imaging was acquired using a commercially available handheld ultrasound device. Our cart-based AS classifier (trained on ∼10 000 images) was tested on FoCUS imaging from 160 patients. The median age was 74 (inter-quartile range 69–80) years, and 50% of patients were women. Thirty patients (18.8%) had some degree of AS. The area under the receiver operating characteristic curve (AUROC) of the cart-based model for detecting AS was 0.87 (95% CI 0.75–0.99) on the FoCUS test set. Last-layer fine-tuning on handheld data established a classifier with an AUROC of 0.94 (0.91–0.97). AUROC during temporal external validation was 0.97 (95% CI 0.89–1.0). When performance of the fine-tuned AS classifier was modelled on potential screening environments (2 and 10% AS prevalence), the positive predictive value ranged from 0.72 (0.69–0.76) to 0.88 (0.81–0.97) and the negative predictive value ranged from 0.94 (0.94–0.94) to 0.99 (0.99–0.99), respectively. Conclusion: Our cart-based machine-learning model for AS showed a drop in performance when tested on handheld ultrasound imaging collected by sonographers. Fine-tuning the AS classifier improved performance and demonstrates potential as a novel approach to detecting AS through automated interpretation of handheld imaging.
  3. Abstract Background: Esophageal motility disorders can be diagnosed by either high-resolution manometry (HRM) or the functional lumen imaging probe (FLIP), but there is no systematic approach to synergize the measurements of these modalities or to improve the diagnostic metrics that have been developed to analyze them. This work aimed to devise a formal approach to bridge the gap between diagnoses inferred from HRM and FLIP measurements using deep learning and mechanics. Methods: The "mechanical health" of the esophagus was analyzed in 740 subjects including a spectrum of motility disorder patients and normal subjects. The mechanical health was quantified through a set of parameters including wall stiffness, active relaxation, and contraction pattern. These parameters were used by a variational autoencoder to generate a parameter space called the virtual disease landscape (VDL). Finally, probabilities were assigned to each point (subject) on the VDL through linear discriminant analysis (LDA), which in turn was used to compare with FLIP and HRM diagnoses. Results: Subjects clustered into different regions of the VDL with their location relative to each other (and normal) defined by the type and severity of dysfunction. The two major categories that separated best on the VDL were subjects with normal esophagogastric junction (EGJ) opening and those with EGJ obstruction. Both HRM and FLIP diagnoses correlated well within these two groups. Conclusion: Mechanics-based parameters effectively estimated esophageal health using FLIP measurements to position subjects in a 3-D VDL that segregated subjects in good alignment with motility diagnoses gleaned from HRM and FLIP studies.
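The pipeline described in the abstract above, a learned low-dimensional "virtual disease landscape" followed by linear discriminant analysis to assign class probabilities, can be sketched on synthetic data. This is a hedged illustration only: PCA stands in for the trained variational-autoencoder encoder, and the two groups and parameter values are invented for demonstration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic stand-ins for the mechanical-health parameters
# (wall stiffness, active relaxation, contraction pattern, ...).
normal = rng.normal(loc=0.0, scale=1.0, size=(60, 5))       # normal EGJ opening
obstructed = rng.normal(loc=2.5, scale=1.0, size=(60, 5))   # EGJ obstruction
X = np.vstack([normal, obstructed])
y = np.array([0] * 60 + [1] * 60)

# Stand-in for the VAE encoder: project the parameters into a 3-D "VDL".
# (The paper uses a variational autoencoder; PCA is used here only to keep
# the sketch short and dependency-free.)
vdl = PCA(n_components=3).fit_transform(X)

# LDA then assigns each point on the VDL a probability of belonging
# to each diagnostic group.
lda = LinearDiscriminantAnalysis().fit(vdl, y)
probs = lda.predict_proba(vdl)
```

With well-separated groups, as the abstract reports for normal EGJ opening versus EGJ obstruction, the LDA probabilities cluster near 0 or 1 for most subjects.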
  4. The advances in deep reinforcement learning recently revived interest in data-driven, learning-based approaches to navigation. In this paper we propose to learn viewpoint-invariant and target-invariant visual servoing for local mobile robot navigation; given an initial view and the goal view or an image of a target, we train a deep convolutional network controller to reach the desired goal. We present a new architecture for this task which rests on the ability to establish correspondences between the initial and goal views, and a novel reward structure motivated by the traditional feedback control error. The advantage of the proposed model is that it requires neither calibration nor depth information and achieves robust visual servoing in a variety of environments and targets without any parameter fine-tuning. We present a comprehensive evaluation of the approach and a comparison with other deep learning architectures as well as classical visual servoing methods in a visually realistic simulation environment [1]. The presented model overcomes the brittleness of classical visual servoing methods and achieves significantly higher generalization capability compared to previous learning approaches.
  5. ABSTRACT Astronomers have typically set out to solve supervised machine learning problems by creating their own representations from scratch. We show that deep learning models trained to answer every Galaxy Zoo DECaLS question learn meaningful semantic representations of galaxies that are useful for new tasks on which the models were never trained. We exploit these representations to outperform several recent approaches at practical tasks crucial for investigating large galaxy samples. The first task is identifying galaxies of similar morphology to a query galaxy. Given a single galaxy assigned a free text tag by humans (e.g. ‘#diffuse’), we can find galaxies matching that tag for most tags. The second task is identifying the most interesting anomalies to a particular researcher. Our approach is 100 per cent accurate at identifying the most interesting 100 anomalies (as judged by Galaxy Zoo 2 volunteers). The third task is adapting a model to solve a new task using only a small number of newly labelled galaxies. Models fine-tuned from our representation are better able to identify ring galaxies than models fine-tuned from terrestrial images (ImageNet) or trained from scratch. We solve each task with very few new labels; either one (for the similarity search) or several hundred (for anomaly detection or fine-tuning). This challenges the longstanding view that deep supervised methods require new large labelled data sets for practical use in astronomy. To help the community benefit from our pretrained models, we release our fine-tuning code zoobot. Zoobot is accessible to researchers with no prior experience in deep learning. 
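The similarity-search task described in the abstract above reduces to nearest-neighbour lookup in the learned representation space. Below is a minimal sketch with random vectors standing in for the pretrained galaxy embeddings; the real Zoobot representations are learned from Galaxy Zoo labels, not random, and their dimensionality is assumed here for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pretrained representations: one row per galaxy, one column per
# learned feature. Unit-normalize so the dot product is cosine similarity.
embeddings = rng.normal(size=(1000, 16))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def most_similar(query_idx, embeddings, k=5):
    """Return the indices of the k galaxies closest to the query galaxy in
    representation space (by cosine similarity), excluding the query itself."""
    sims = embeddings @ embeddings[query_idx]
    order = np.argsort(-sims)                  # descending similarity
    return [int(i) for i in order if i != query_idx][:k]

# Example: find the five galaxies most similar to galaxy 0.
neighbours = most_similar(0, embeddings, k=5)
```

The same representation supports the other two tasks in the abstract: anomaly scoring (e.g. distance to the nearest neighbours) and fine-tuning a classifier head on a few hundred newly labelled galaxies.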