Title: Data-Driven Classification of Human Movements in Virtual Reality–Based Serious Games: Preclinical Rehabilitation Study in Citizen Science
Background: Sustained engagement is essential for the success of telerehabilitation programs. However, patients' lack of motivation and adherence can undermine these goals. To overcome this challenge, physical exercises are often gamified. Building on the advantages of serious games, we propose a citizen science–based approach in which patients perform scientific tasks by using interactive interfaces and help advance scientific causes of their choice. This approach capitalizes on human intellect and benevolence while promoting learning. To further enhance engagement, we propose performing citizen science activities in immersive media, such as virtual reality (VR).

Objective: This study aims to present a novel methodology to facilitate the remote identification and classification of human movements for the automatic assessment of motor performance in telerehabilitation. The data-driven approach is presented in the context of citizen science software dedicated to bimanual training in VR. Specifically, users interact with the interface and contribute to an environmental citizen science project while moving both arms in concert.

Methods: In all, 9 healthy individuals interacted with the citizen science software by using a commercial VR gaming device. The software included a calibration phase to evaluate the users' range of motion along the 3 anatomical planes of motion and to adapt the sensitivity of the software's response to their movements. During calibration, the time series of the users' movements were recorded by the sensors embedded in the device. We performed principal component analysis to identify salient features of the movements and then applied a bagged trees ensemble classifier to classify the movements.

Results: The classification achieved high performance, reaching 99.9% accuracy. Elbow flexion was the most accurately classified movement (99.2%), and horizontal shoulder abduction to the right side of the body was the most misclassified movement (98.8%).

Conclusions: Coordinated bimanual movements in VR can be classified with high accuracy. Our findings lay the foundation for the development of motion analysis algorithms in VR-mediated telerehabilitation.
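The pipeline described in the Methods lends itself to a compact sketch. The following is a minimal illustration, not the authors' code: it assumes flattened windows of sensor time series as input and uses scikit-learn's PCA followed by a bagging ensemble of decision trees; the window length, component count, and class count are invented for the example.

```python
# Minimal sketch: PCA feature extraction + bagged-trees classification of
# VR movement windows. All sizes and the synthetic data are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Placeholder data: each row is a flattened window of controller/headset
# time series; each label is one of the calibrated movements
# (e.g., elbow flexion, horizontal shoulder abduction).
X = rng.normal(size=(900, 120))   # 900 windows x 120 samples per window
y = rng.integers(0, 6, size=900)  # 6 hypothetical movement classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = make_pipeline(
    PCA(n_components=10),                 # salient movement features
    BaggingClassifier(                    # "bagged trees" ensemble
        DecisionTreeClassifier(),
        n_estimators=30,
        random_state=0,
    ),
)
model.fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```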
Award ID(s): 1928614
NSF-PAR ID: 10332605
Journal Name: JMIR Serious Games
Volume: 10
Issue: 1
ISSN: 2291-9279
Page Range / eLocation ID: e27597
Sponsoring Org: National Science Foundation
More Like this
  1. Advancements in computer-mediated exercise have made telerehabilitation feasible, but retaining patients' engagement in exercise remains a challenge. Building on our previous study demonstrating enhanced engagement in citizen science through social information about others' contributions, we propose a novel framework for effective telerehabilitation that integrates citizen science and social information into physical exercise. We hypothesized that social information about others' contributions would augment engagement in physical activity by encouraging people to invest more effort toward the discovery of novel information in a citizen science context. We recruited healthy participants to monitor the environment of a polluted canal by tagging images using a haptic device, toward gathering environmental information. Along with the images, we displayed the locations of the tags created by previous participants. We found that participants increased both the amount and duration of physical activity when presented with a larger number of previous tags. Further, they increased the diversity of tagged objects by avoiding the locations tagged by previous participants, thereby generating richer information about the environment. Our results suggest that social information is a viable means to augment engagement in rehabilitation exercise by incentivizing contributions to scientific activities.
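One way to make the "diversity of tagged objects" concrete is an entropy measure over tag categories. The sketch below is an illustration under that assumption, not the metric reported in the study:

```python
# Minimal sketch (not from the paper): quantify tag diversity as the
# Shannon entropy of a participant's tag categories.
from collections import Counter
from math import log2

def tag_diversity(tags):
    """Shannon entropy (bits) of a participant's tag categories."""
    counts = Counter(tags)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A participant who tags varied objects scores higher than one who
# repeatedly tags the same object class.
print(tag_diversity(["algae", "debris", "fish", "foam"]))    # 2.0 bits
print(tag_diversity(["algae", "algae", "algae", "debris"]))  # ~0.81 bits
```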

     
  2.
    An important problem in designing human-robot systems is the integration of human intent and performance into the robotic control loop, especially in complex tasks. Bimanual coordination is a complex human behavior that is critical in many fine motor tasks, including robot-assisted surgery. To fully leverage the capabilities of the robot as an intelligent and assistive agent, online recognition of bimanual coordination could be important. Robotic assistance for a suturing task, for example, will be fundamentally different during phases when the suture is wrapped around the instrument (i.e., making a C-loop) than when the ends of the suture are pulled apart. In this study, we develop an online recognition method for bimanual coordination modes (i.e., the directions and symmetries of right- and left-hand movements) using geometric descriptors of hand motion. We (1) develop this framework based on ideal trajectories obtained during virtual 2D bimanual path-following tasks performed by human subjects operating Geomagic Touch haptic devices, (2) test the offline recognition accuracy of bimanual direction and symmetry from human subject movement trials, and (3) evaluate how the framework can be used to characterize 3D trajectories of the da Vinci Surgical System's surgeon-side manipulators during bimanual surgical training tasks. In the human subject trials, our geometric bimanual movement classification accuracy was 92.3% for movement direction (i.e., hands moving together, in parallel, or apart) and 86.0% for symmetry (e.g., mirror or point symmetry). We also show that this approach can be used for online classification of different bimanual coordination modes during needle transfer, C-loop making, and suture-pulling gestures on the da Vinci system, with results matching the expected modes. Finally, we discuss how these online estimates are sensitive to task environment factors and surgeon expertise, thus inspiring future work that could leverage adaptive control strategies to enhance user skill during robot-assisted surgery.
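To illustrate the kind of geometric descriptor the abstract refers to, the sketch below classifies the direction mode of a single time step (hands moving together, apart, or in parallel) from the rate of change of the inter-hand distance. This is a hedged reconstruction, not the authors' implementation; the threshold eps and the fallback to "parallel" are assumptions:

```python
# Minimal sketch: classify bimanual direction mode from instantaneous
# hand positions and velocities. Threshold and fallback are assumptions.
import numpy as np

def direction_mode(p_left, v_left, p_right, v_right, eps=1e-3):
    """Classify one time step of a bimanual movement.

    p_*: 3D hand positions; v_*: 3D hand velocities.
    """
    r = p_right - p_left
    r_hat = r / np.linalg.norm(r)
    # Rate of change of the inter-hand distance.
    closing_speed = np.dot(v_right - v_left, r_hat)
    if closing_speed < -eps:
        return "together"   # hands approaching each other
    if closing_speed > eps:
        return "away"       # hands separating
    return "parallel"       # distance roughly constant, e.g., same direction

# Example: both hands translating along +x keep their separation -> parallel.
p_l, p_r = np.array([0.0, 0.0, 0.0]), np.array([0.3, 0.0, 0.0])
v = np.array([0.1, 0.0, 0.0])
print(direction_mode(p_l, v, p_r, v))  # parallel
```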
  3.

    Camera trap studies have become a popular medium to assess many ecological phenomena, including population dynamics, patterns of biodiversity, and monitoring of endangered species. In conjunction with the benefit to scientists, camera traps present an unprecedented opportunity to involve the public in scientific research via image classification. However, this engagement strategy comes with a myriad of complications. Volunteers vary in their familiarity with wildlife; thus, the accuracy of user-derived classifications may be biased by the commonness or popularity of species and by user experience. From an extensive multi-site camera trap study across Michigan, USA, we compiled and classified images through a public science platform called Michigan ZoomIN. We aggregated responses from 15 independent users per image using multiple consensus methods and assessed accuracy by comparing with species identifications completed by wildlife experts. We also evaluated how different factors, including consensus algorithms, study area, wildlife species, user support, and camera type, influenced the accuracy of user-derived classifications. Overall accuracy of user-derived classification was 97%, although several canid (e.g., Canis lupus, Vulpes vulpes) and mustelid (e.g., Neovison vison) species were repeatedly difficult for users to identify and had lower accuracy. When validating user-derived classification, we found that study area, consensus method, and user support best explained accuracy. To overcome hesitancy associated with data collected by untrained participants, we demonstrated their value by showing that the accuracy of volunteers was comparable to that of experts when classifying North American mammals. Our hierarchical workflow, which integrated multiple consensus methods, led to more image classifications without extensive training and even when the expertise of the volunteer was unknown. Ultimately, adopting such an approach can harness broader participation, expedite future camera trap data synthesis, and improve allocation of resources by scholars to enhance the performance of public participants and increase the accuracy of user-derived data. © 2021 The Wildlife Society.
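As an illustration of aggregating redundant volunteer labels, the sketch below applies a simple plurality vote over 15 user classifications per image and scores the consensus against expert labels. It is a minimal stand-in, not the Michigan ZoomIN workflow, which combined multiple consensus algorithms:

```python
# Minimal sketch (illustrative data): plurality-vote consensus over 15
# volunteer labels per image, scored against expert identifications.
from collections import Counter

def consensus(labels):
    """Return the plurality label and its agreement fraction."""
    (label, votes), = Counter(labels).most_common(1)
    return label, votes / len(labels)

user_labels = {
    "img_001": ["coyote"] * 12 + ["gray fox"] * 3,
    "img_002": ["white-tailed deer"] * 15,
}
expert = {"img_001": "coyote", "img_002": "white-tailed deer"}

correct = sum(consensus(v)[0] == expert[k] for k, v in user_labels.items())
print(f"accuracy vs. experts: {correct / len(expert):.0%}")  # 100%
```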

     
  4.
    Purpose: Social media users share their ideas, thoughts, and emotions with other users. However, it is not clear how online users respond to new research outcomes. This study aims to predict the nature of the emotions expressed by Twitter users toward scientific publications. Additionally, we investigate which features of the research articles help in such prediction. Identifying the sentiments expressed toward research articles on social media will help scientists gauge the societal impact of their work.

    Design/methodology/approach: We applied five sentiment analysis tools to check which were suitable for capturing a tweet's sentiment value and selected NLTK VADER and TextBlob. We segregated sentiment values into negative, positive, and neutral, and measured the mean and median tweet sentiment for research articles with more than one tweet. We then built machine learning models to predict the sentiments of tweets related to scientific publications and investigated the essential features that controlled the prediction models.

    Findings: The most important feature in all the models was the sentiment of the research article title, followed by the author count. We observed that tree-based models performed better than other classification models, with Random Forest achieving 89% accuracy for binary classification and 73% accuracy for three-label classification.

    Research limitations: We used state-of-the-art sentiment analysis libraries, but these libraries may vary in their sentiment prediction behavior. Tweet sentiment may be influenced by a multitude of circumstances and is not always immediately tied to the paper's details. In the future, we intend to broaden the scope of our research by employing word2vec models.

    Practical implications: Many studies have focused on understanding the impact of science on scientists or on how science communicators can improve their outcomes. Research in this area has relied on fewer and more limited measures, such as citations and user studies with small datasets. There is a critical need for novel methods to quantify and evaluate the broader impact of research. This study will help scientists better comprehend the emotional impact of their work. Additionally, understanding the public's interest and reactions helps science communicators identify effective ways to engage with the public and build positive connections between scientific communities and the public.

    Originality/value: This study extends work on public engagement with science, sociology of science, and computational social science. It enables researchers to identify areas in which there is a gap between public and expert understanding and provides strategies by which this gap can be bridged.
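A minimal sketch of this kind of pipeline appears below: score a tweet with NLTK VADER and TextBlob, bucket the score into negative/neutral/positive, and fit a random forest on article-level features. The thresholds, feature set, and toy data are assumptions, not the study's configuration:

```python
# Minimal sketch: tweet sentiment via NLTK VADER and TextBlob, plus a
# random forest over hypothetical article features.
# Requires: nltk.download('vader_lexicon')
from nltk.sentiment import SentimentIntensityAnalyzer
from textblob import TextBlob
from sklearn.ensemble import RandomForestClassifier
import numpy as np

def label(score, pos=0.05, neg=-0.05):
    # Threshold choices are illustrative assumptions.
    return "positive" if score >= pos else "negative" if score <= neg else "neutral"

tweet = "Fascinating new results on VR rehabilitation!"
vader = SentimentIntensityAnalyzer().polarity_scores(tweet)["compound"]
blob = TextBlob(tweet).sentiment.polarity
print(label(vader), label(blob))

# Hypothetical article features: [title sentiment, author count].
X = np.array([[0.4, 3], [-0.2, 1], [0.0, 7], [0.6, 2]])
y = np.array(["positive", "negative", "neutral", "positive"])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.feature_importances_)  # which feature dominates the prediction
```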
  5.

Introduction

    Utilization of telemedicine for health care delivery increased rapidly during the coronavirus disease 2019 (COVID-19) pandemic. However, physical examination during telehealth visits remains limited. A novel telerehabilitation system, the Augmented Reality-based Telerehabilitation System with Haptics (ARTESH), shows promise for performing synchronous, remote musculoskeletal examinations.

    Objective

    To assess the potential of ARTESH in remotely examining upper extremity passive range of motion (PROM) and maximum isometric strength (MIS).

    Design

    In this cross-sectional pilot study, we compared the in-person (reference standard) and remote (ARTESH) evaluations of participants' upper extremity PROM and MIS in 10 shoulder and arm movements. The evaluators were blinded to each other's results.

    Setting

    Participants underwent in-person evaluations at a Veterans Affairs hospital's outpatient Physical Medicine and Rehabilitation (PM&R) clinic and remote examinations using ARTESH, with the evaluator located at a research laboratory 30 miles away and connected via a high-speed network.

    Patients

    Fifteen participants with upper extremity pain and/or weakness.

    Interventions

    Not applicable.

    Main Outcome Measures

    Inter-rater agreement between in-person and remote evaluations on the 10 PROM and MIS movements, and on the presence or absence of pain with movement, was calculated.

    Results

    The highest inter‐rater agreements were noted in shoulder abduction and protraction PROM (kappa (κ) = 0.44, confidence interval (CI): −0.1 to 1.0), and in elbow flexion, shoulder abduction, and shoulder protraction MIS (κ = 0.63, CI: 0 to 1.0).
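For readers unfamiliar with the statistic, Cohen's kappa can be computed directly from paired ratings. The sketch below uses invented grades for illustration, not the study's data:

```python
# Minimal sketch (illustrative data): Cohen's kappa between in-person and
# remote (ARTESH) ratings of the same movement across participants.
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal strength grades for 15 participants, one movement.
in_person = [5, 4, 5, 3, 4, 5, 5, 4, 3, 5, 4, 4, 5, 3, 5]
remote    = [5, 4, 4, 3, 4, 5, 5, 4, 4, 5, 4, 3, 5, 3, 5]

print(f"kappa = {cohen_kappa_score(in_person, remote):.2f}")
```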

    Conclusions

    This pilot study suggests that synchronous tele-physical examination using the ARTESH system, with augmented reality and haptics, has the potential to add value to existing telemedicine platforms. With additional technological and procedural improvements and an adequately powered study, the accuracy of ARTESH-enabled remote tele-physical examinations can be better evaluated.

     