Title: ERUDITE: Human-in-the-Loop IoT for an Adaptive Personalized Learning System
Thanks to the rapid growth of wearable technologies and advances in machine learning, monitoring complex human contexts has become feasible, paving the way for human-in-the-loop IoT systems that autonomously adapt to the state of the human and the environment. Nevertheless, a central challenge in designing many of these IoT systems is the need to infer the human mental state, such as intention, stress, cognitive load, or learning ability. While different human contexts can be inferred by fusing sensor modalities that correlate with a particular mental state, the human brain provides a richer sensor modality that yields deeper insight into the required human context. This article proposes ERUDITE, a human-in-the-loop IoT system for learning environments that exploits recent wearable neurotechnology to decode brain signals. Using insights from concept learning theory, ERUDITE can infer the human learning state and detect when learning increases or declines. By quantifying human learning as an input sensory signal, ERUDITE can provide adequate personalized feedback to learners to enhance their learning experience. ERUDITE was evaluated across 15 participants: by using brain signals as a sensor modality to infer the learning state and personalizing the learning environment accordingly, participants' learning performance increased on average by 26%. Furthermore, to evaluate ERUDITE's practicality and scalability, we showed that it can be deployed on an edge-based prototype consuming 75 mW of power on average with a 100-MB memory footprint.
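The closed loop the abstract describes — infer a discretized learning state, then choose a personalized adaptation of the learning environment — can be sketched with tabular Q-learning, one of the article's own keywords. Everything below (the state and action names, the toy reward table, the stand-in state "sensor") is an illustrative assumption for the sketch, not ERUDITE's actual design:

```python
import random

# Discretized learning state (in ERUDITE this would be inferred from
# EEG; here a toy stand-in) and candidate environment adaptations.
# Both vocabularies are hypothetical, chosen only for illustration.
STATES = ["declining", "steady", "improving"]
ACTIONS = ["ease", "keep", "challenge"]

def q_learning_loop(infer_state, reward_fn, episodes=500,
                    alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy action selection."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = infer_state()
    for _ in range(episodes):
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)          # explore
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit
        reward = reward_fn(state, action)
        next_state = infer_state()
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        state = next_state
    return Q

# Toy stand-ins: a random learner-state sequence and a reward that
# favors challenging an improving learner and easing off a declining one.
_rng = random.Random(42)

def fake_infer_state():
    return _rng.choice(STATES)

def fake_reward(state, action):
    good = {"declining": "ease", "steady": "keep", "improving": "challenge"}
    return 1.0 if good[state] == action else -0.5

Q = q_learning_loop(fake_infer_state, fake_reward)
```

After enough episodes, the greedy policy learns to ease off when the (toy) learning state declines and raise the challenge when it improves — the same shape of adaptation the paper evaluates with real EEG-derived states.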
Award ID(s):
2105084
PAR ID:
10524199
Publisher / Repository:
IEEE
Journal Name:
IEEE Internet of Things Journal
Volume:
11
Issue:
8
ISSN:
2372-2541
Page Range / eLocation ID:
14532 to 14550
Subject(s) / Keyword(s):
Augmented reality (AR); concept learning; electroencephalography (EEG); Internet of Things; Q-learning; reinforcement learning; rule-based learning; virtual reality (VR); Wisconsin card sorting
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Reinforcement learning (RL) offers numerous benefits over rule-based approaches in various applications. Privacy concerns have grown with the widespread use of RL trained on privacy-sensitive data in IoT devices, especially in human-in-the-loop systems. On the one hand, RL methods enhance the user experience by adapting to the highly dynamic nature of humans. On the other hand, trained policies can leak the user's private information. Recent attention has been drawn to designing privacy-aware RL algorithms that maintain an acceptable system utility. A central challenge in designing privacy-aware RL, especially for human-in-the-loop systems, is that humans have intrinsic variability, and their preferences and behavior evolve. The effect of a privacy-leak mitigation can differ for the same human over time or across different humans, so no single fixed model for privacy-aware RL fits all. To that end, we propose adaPARL, an adaptive approach to privacy-aware RL for human-in-the-loop IoT systems. adaPARL provides a personalized privacy-utility trade-off depending on human behavior and preference. We validate adaPARL on two IoT applications: (i) a human-in-the-loop smart home and (ii) a human-in-the-loop virtual reality (VR) smart classroom. Results on these two applications confirm the generality of adaPARL and its ability to provide a personalized privacy-utility trade-off, improving utility by 57% on average while reducing the privacy leak by 23%.
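One common way to realize the kind of personalized privacy-utility trade-off adaPARL describes is to shape the RL reward with a per-user privacy weight and adapt that weight as the observed leakage drifts. The function names, the target-leakage parameter, and the proportional update rule below are assumptions for illustration, not adaPARL's actual algorithm:

```python
def shaped_reward(utility, privacy_leak, lam):
    """Penalize actions whose outcomes leak private information:
    r' = utility - lam * leak, where lam is a per-user privacy weight."""
    return utility - lam * privacy_leak

def adapt_lambda(lam, observed_leak, target_leak=0.2, step=0.5):
    """Raise the privacy weight when observed leakage exceeds the user's
    target; lower it (never below zero) when leakage is under target."""
    lam += step * (observed_leak - target_leak)
    return max(0.0, lam)

# A user whose recent interactions leak more than their target:
lam = 1.0
for leak in [0.4, 0.5, 0.3]:
    lam = adapt_lambda(lam, leak)
# lam has grown, so subsequent rewards penalize leaky actions harder.
```

Because the weight is updated per user and over time, the same mitigation strength is not imposed on everyone, matching the paper's observation that one fixed model cannot fit all humans.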
  2. While one can characterize mental health using questionnaires, such tools do not provide direct insight into the underlying biology. By linking approaches that visualize brain activity to questionnaires in the context of individualized prediction, we can gain new insights into the biological and behavioral aspects of brain health. Resting-state fMRI (rs-fMRI) can be used to identify biomarkers of these conditions and to study patterns of abnormal connectivity. In this work, we estimate mental health quality for individual participants using static functional network connectivity (sFNC) data from rs-fMRI. A deep learning model takes the sFNC data as input to predict four categories of mental health quality and to visualize the neural patterns indicative of each group. We used guided gradient-weighted class activation maps (guided Grad-CAM) to identify the most discriminative sFNC patterns. The model was validated on the UK Biobank dataset, where it outperformed four alternative models by 4-18% in accuracy, achieving classification accuracies of 76%, 78%, 88%, and 98% for the excellent, good, fair, and poor mental health categories, respectively, with poor mental health classified most accurately. The findings show distinct sFNC patterns across the groups: the patterns associated with excellent mental health involve the cerebellar-subcortical regions, whereas the most prominent areas in the poor mental health category lie in the sensorimotor and visual domains. The combination of rs-fMRI and deep learning thus opens a promising path toward a comprehensive framework for evaluating and measuring mental health, and this approach has the potential to guide the development of personalized interventions and enable the monitoring of treatment response. Overall, this highlights the crucial role of advanced imaging modalities and deep learning algorithms in advancing our understanding and management of mental health.
  3. Introduction: Wearable sensing technologies incorporated into virtual reality (VR) headsets can provide a number of unique, previously hidden insights into individual and team learning and performance in complex environments. However, the most impactful and efficient mechanisms for conveying the multimodal data (e.g., gaze, mental workload) that allow such insights are not well understood. This study used a human-centered design process to develop and evaluate a multimodal debriefing dashboard. Methods: Twelve Advanced Cardiac Life Support (ACLS) instructors completed VR cardiac arrest simulations and participated in interviews and design-thinking workshops to identify dashboard features, leading to high-fidelity mock-ups and an interactive prototype. A second group of six ACLS instructors then evaluated the prototype by providing mock feedback on simulation performance with and without the dashboard in a pre-post format, followed by surveys assessing usability, acceptability, appropriateness, feasibility, trust, and perceived accuracy. Results: The final prototype dashboard features a video, a timeline of clinical actions, automatic error identification, mental workload, and visual attention data; it also generates closed-loop communication reports on a separate webpage. The evaluation study showed that participants using the dashboard provided performance analysis and feedback targeting specific technical and nontechnical skills aligned with ACLS learning objectives. Participants rated the dashboard highly, with a System Usability Scale score of 88.9%, reflecting above-average usability. Conclusions: An interactive multimodal debriefing dashboard prototype was developed using a user-design framework with prototyping and iterative feedback. Use of the dashboard improved evaluation and debriefing practices targeting learning outcomes, and participants rated the dashboard favorably. This dashboard has the potential to enhance simulation-based learning by offering near-real-time analytics that promote a deeper understanding of individual and team technical and nontechnical skill acquisition and mastery.
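The 88.9% usability figure above comes from the System Usability Scale, which has a fixed, well-known scoring rule (Brooke's SUS: odd-numbered items contribute rating − 1, even-numbered items contribute 5 − rating, and the sum is scaled by 2.5). As a reference, a minimal implementation:

```python
def sus_score(responses):
    """Score a standard 10-item System Usability Scale questionnaire.

    `responses` is a list of ten 1-5 Likert ratings in questionnaire
    order. Odd-numbered items (1st, 3rd, ...) contribute (rating - 1),
    even-numbered items contribute (5 - rating); the total is scaled by
    2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses):
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# Maximally positive responses (5 on odd items, 1 on even) score 100;
# all-neutral responses (3 everywhere) score 50.
```

Scores around 68 are commonly treated as the average benchmark, which is why 88.9 is described as above-average usability.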
  4. Human activity recognition (HAR) using wearable sensors is an increasingly relevant area for applications in healthcare, rehabilitation, and human-computer interaction. However, publicly available datasets that provide multi-sensor, synchronized data combining inertial and orientation measurements are still limited. This work introduces a publicly available dataset for HAR, captured using wearable sensors placed on the chest, hands, and knees. Each device recorded inertial and orientation data during controlled activity sessions involving participants aged 20 to 70. A standardized acquisition protocol ensured consistent temporal alignment across all signals. The dataset was preprocessed and segmented using a sliding-window approach. An initial baseline classification experiment, employing a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) model, demonstrated an average accuracy of 93.5% in classifying activities. The dataset is publicly available in CSV format and includes raw sensor signals, activity labels, and metadata. It offers a valuable resource for evaluating machine learning models, studying distributed HAR approaches, and developing robust activity recognition pipelines using wearable technologies.
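The sliding-window segmentation step mentioned in the abstract is a standard HAR preprocessing pattern: cut the synchronized sensor stream into fixed-length, possibly overlapping windows that become model inputs. The window length and overlap below are made up for illustration (the abstract does not state them):

```python
def sliding_windows(signal, window, step):
    """Segment a time series into fixed-length, possibly overlapping
    windows. `signal` is a list of samples (each sample a tuple of
    channel values, e.g. inertial + orientation readings); windows that
    would run past the end of the signal are dropped."""
    return [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, step)]

# e.g. 10 samples of a toy 2-channel signal, window of 4, 50% overlap
samples = [(t, t * 2) for t in range(10)]
windows = sliding_windows(samples, window=4, step=2)
```

Each resulting window (here 4 samples long, starting every 2 samples) would be paired with an activity label and fed to a classifier such as the CNN-LSTM baseline described above.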
  5. This work introduces Wearable Deep Learning (WearableDL), a unifying conceptual architecture inspired by the human nervous system that frames the convergence of deep learning (DL), the Internet of Things (IoT), and wearable technologies (WT) as follows: (1) the brain, the core of the central nervous system (CNS), represents deep learning for cloud computing and big-data processing; (2) the spinal cord (the part of the CNS connected to the brain) represents the IoT for fog computing and big-data flow and transfer; and (3) peripheral sensory and motor nerves (components of the peripheral nervous system (PNS)) represent wearable technologies as edge devices for big-data collection. Wearable IoT devices have recently enabled the streaming of big data from smart wearables (e.g., smartphones, smartwatches, smart clothing, and personalized gadgets) to cloud servers. The ultimate challenges are now (1) how to analyze the collected wearable big data without any background information or labels representing the underlying activity, and (2) how to recognize the spatial/temporal patterns in this unstructured big data to help end users in decision-making processes such as medical diagnosis, rehabilitation efficiency, and sports performance. DL has recently gained popularity owing to its ability to (1) scale to big-data sizes (scalability); (2) learn feature engineering by itself, end to end, without manual feature extraction or hand-crafted features; and (3) achieve accuracy and precision in learning from raw labeled or unlabeled (supervised or unsupervised) data. To understand the current state of the art, we systematically reviewed over 100 recently published scientific works on DL approaches for wearable and person-centered technologies; the review supports and strengthens the proposed bioinspired architecture of WearableDL. The article concludes with an outlook and insightful suggestions for WearableDL and its application in the field of big-data analytics.