

Search for: All records

Creators/Authors contains: "Ghasemzadeh, Hassan"


  1. Machine learning algorithms are increasingly used for inference and decision-making in embedded systems. Data from sensors are used to train machine learning models for various smart functions of embedded and cyber-physical systems, with applications ranging from healthcare to autonomous vehicles and national security. However, recent studies have shown that machine learning models can be fooled by adding adversarial noise to their inputs. The perturbed inputs are called adversarial examples. Furthermore, adversarial examples designed to fool one machine learning system are often effective against another system as well. This property of adversarial examples is called adversarial transferability and has not been explored in wearable systems to date. In this work, we take a first step toward studying adversarial transferability in wearable sensor systems from four viewpoints: (1) transferability between machine learning models; (2) transferability across users/subjects of the embedded system; (3) transferability across sensor body locations; and (4) transferability across datasets used for model training. We present a set of carefully designed experiments to investigate these transferability scenarios. We also propose a threat model describing the interactions of an adversary with the source and target sensor systems in different transferability settings. In most cases, we found high untargeted transferability, whereas targeted transferability success scores varied from 0% to 80%. The transferability of adversarial examples depends on many factors, such as the inclusion of data from all subjects, sensor body position, number of samples in the dataset, type of learning algorithm, and the distribution of the source and target system datasets. Transferability decreased sharply as the data distributions of the source and target systems became more distinct. We also provide guidelines and suggestions to the community for designing robust sensor systems.
    Free, publicly-accessible full text available March 31, 2025
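The fooling step described in this abstract can be illustrated with a minimal FGSM-style sketch. This is not the paper's attack pipeline: the linear scorer, logistic loss, and the names `w`, `b`, and `epsilon` are illustrative assumptions.

```python
import math

# Hypothetical untargeted FGSM-style perturbation of a flattened sensor
# window, assuming a simple logistic-regression scorer. Not the paper's
# actual attack; all names and values here are illustrative.

def score(w, b, x):
    # linear decision score: positive -> class 1, negative -> class 0
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_untargeted(w, b, x, y, epsilon):
    # For logistic loss, d(loss)/d(score) = sigmoid(score) - y, so the
    # input gradient is (sigmoid(score) - y) * w; FGSM steps along its sign.
    p = 1.0 / (1.0 + math.exp(-score(w, b, x)))
    return [xi + epsilon * sign((p - y) * wi) for xi, wi in zip(x, w)]
```

The perturbation pushes the score away from the true label, which is the untargeted setting whose transferability the abstract measures.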
  2. Free, publicly-accessible full text available December 1, 2024
  3. Missing data is a very common challenge in health monitoring systems, in part because such systems depend heavily on different types of sensors. A critical characteristic of sensor-based prediction systems is their dependency on hardware, which is prone to physical limitations that add another layer of complexity to the algorithmic component of the system. For instance, it might not be realistic to assume that the prediction model has access to all sensors at all times. This can happen in a real-world setup if one or more sensors on a device malfunction or temporarily have to be disabled due to power limitations. The consequence of such a scenario is that the model faces "missing input data" from those unavailable sensors at deployment time, and as a result, the quality of prediction can degrade significantly. While missing input data is a well-known problem, to the best of our knowledge, no study has been done to efficiently minimize the performance drop when one or more sensors may be unavailable for a significant amount of time. The sensor failure problem investigated in this paper can be viewed as a spatial missing data problem, which has not been explored to date. In this work, we show that naive methods of dealing with missing input data, such as zero-filling or mean-filling, are not suitable for sensor-based prediction, and we propose an algorithm that can reconstruct the missing input data for unavailable sensors. Moreover, we show that on the MobiAct, MotionSense, and MHEALTH activity classification benchmarks, our proposed method outperforms the baselines by large accuracy margins of 8.2%, 15.1%, and 11.6%, respectively.
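The contrast between naive filling and learned reconstruction can be sketched as below. The least-squares map from one available channel to a missing one is an illustrative stand-in, not the paper's reconstruction algorithm, and all data here is synthetic.

```python
# Sketch: reconstruct a missing sensor channel from a correlated,
# still-available channel via ordinary least squares, versus naive
# filling. Illustrative stand-in for the paper's method; synthetic data.

def mean_fill(history):
    # naive baseline: replace the missing value with the channel's mean
    return sum(history) / len(history)

def fit_linear(xs, ys):
    # ordinary least squares for y ~ a * x + b on paired training data
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def reconstruct(a, b, x_available):
    # estimate the missing channel from the channel that still works
    return a * x_available + b
```

On correlated channels, the learned map tracks the missing signal far better than zero- or mean-filling, which is the gap the abstract's accuracy margins reflect.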
  4. Postprandial hyperglycemia (PPHG) is detrimental to health and increases the risk of cardiovascular diseases, reduced eyesight, and life-threatening conditions such as cancer. Detecting PPHG events before they occur can potentially help with providing early interventions. Prior research suggests that PPHG events can be predicted based on information about diet. However, such computational approaches (1) are data-hungry, requiring significant amounts of data for algorithm training; and (2) work as a black box and lack interpretability, thus limiting the adoption of these technologies in clinical interventions. Motivated by these shortcomings, we propose DietNudge, a machine-learning-based framework that integrates multi-modal data about diet, insulin, and blood glucose to predict PPHG events before they occur. Using data from patients with diabetes, we demonstrate that our model can predict PPHG events with up to 90% classification accuracy and an average F1 score of 0.93. The proposed decision-tree-based approach also identifies modifiable factors that contribute to an impending PPHG event while providing personalized thresholds to prevent such events. Our results suggest that simple, yet effective, computational algorithms can be developed and used as preventative mechanisms for diabetes and obesity management.
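Because the abstract emphasizes interpretability, a depth-2 decision rule of the kind a tree learner might produce is sketched below. The features and thresholds (`carbs_g > 60`, etc.) are hypothetical, not DietNudge's learned values.

```python
# Hypothetical depth-2 decision tree for flagging an impending PPHG
# event; every root-to-leaf path reads as a clinical rule. Feature names
# and thresholds are invented for illustration, not learned from data.

def predict_pphg(carbs_g, insulin_units, pre_meal_glucose_mgdl):
    if carbs_g > 60:
        # high-carb meal: risk depends on whether enough insulin was taken
        return insulin_units < 4
    # lighter meal: risk driven mainly by already-elevated glucose
    return pre_meal_glucose_mgdl > 180
```

Rules of this shape are what make a tree's "modifiable factors" directly readable, unlike a black-box model.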
  5. The recent success of deep neural networks in prediction tasks on wearable sensor data is evident. However, in more practical online learning scenarios, where new data arrive sequentially, neural networks suffer severely from the "catastrophic forgetting" problem. In real-world settings, given a model pre-trained on old data, it is practically infeasible to re-train the model on both old and new data each time new data are collected, because the computational costs increase dramatically as more and more data arrive over time. However, if we fine-tune the model only on the new data, the network parameters shift to fit the new data, which may differ from the old data. As a result, the new parameters are no longer suitable for the old data. This phenomenon is known as catastrophic forgetting, and continual learning research aims to overcome it with minimal computational costs. While most continual learning research focuses on computer vision tasks, the implications of catastrophic forgetting in wearable computing research, and potential avenues to address the problem, have remained unexplored. To address this knowledge gap, we study continual learning for activity recognition using wearable sensor data. We show that catastrophic forgetting is a critical challenge for real-world deployment of machine learning models for wearables. Moreover, we show that the problem can be alleviated by employing various training techniques.
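One family of alleviating techniques is rehearsal: mixing a small replay buffer of old samples into each update. A toy sketch with a 1-D class prototype (a running mean) stands in for a neural network here; the function and parameter names are assumptions, not the paper's method.

```python
import random

# Toy rehearsal sketch: update a 1-D class prototype (a mean) with new
# data plus a replay buffer of old samples, so the prototype is not
# dragged entirely toward the new distribution. Stand-in for a network.

def update_prototype(old_samples, new_samples, replay_size):
    # sample up to replay_size old points to rehearse alongside new data
    replay = random.sample(old_samples, min(replay_size, len(old_samples)))
    mixed = replay + list(new_samples)
    return sum(mixed) / len(mixed)
```

With `replay_size=0` this degenerates to naive fine-tuning, which forgets the old distribution completely; a nonzero buffer keeps the estimate anchored between old and new data.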
  6. With rapid growth in unhealthy diet behaviors, implementing strategies that improve healthy eating is becoming increasingly important. One approach to improving diet behavior is to continuously monitor dietary intake (e.g., calorie intake) and provide educational, motivational, and dietary recommendation feedback. Although technologies based on wearable sensors, mobile applications, and lightweight cameras exist to gather diet-related information such as food type and eating time, there remains a gap in research on how to use such information to close the loop and provide feedback to the user to improve diet. We address this knowledge gap by introducing a diet behavior change framework that generates real-time diet recommendations based on a user's food intake and the user's deviation from the suggested diet routine. We formulate the problem of optimal diet recommendation as a sequential decision-making problem and design a greedy algorithm that provides diet recommendations such that the amount of change in the user's dietary habits is minimized while ensuring that the user's diet goal is achieved within a given time frame. This approach is inspired by Social Cognitive Theory, which emphasizes behavioral monitoring and small incremental goals as being important to behavior change. Our optimization algorithm integrates data from a user's past dietary intake as well as the USDA nutrition dataset to identify optimal diet changes. We demonstrate the feasibility of our optimization algorithms for diet behavior change using real data collected in two study cohorts with a combined N=10 healthy participants who recorded their diet for up to 21 days.
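The greedy idea of taking the largest allowed daily change toward the goal can be sketched as follows. The `max_step` limit and the calorie numbers are illustrative assumptions, not the paper's actual optimization.

```python
# Sketch of a greedy diet scheduler: each day, take the largest change
# toward the calorie goal that stays within a per-day comfort limit
# (max_step), so habits change in small increments. Illustrative only.

def greedy_plan(current, goal, max_step):
    plan, value = [], current
    while abs(goal - value) > 1e-9:
        # greedy step: move as far toward the goal as the limit allows
        delta = max(-max_step, min(max_step, goal - value))
        value += delta
        plan.append(value)
    return plan
```

Capping each day's change is one way to encode the "small incremental goals" idea from Social Cognitive Theory that the abstract cites.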
  7. Physical activity is a cornerstone of chronic disease management and one of the most critical factors in reducing the risk of cardiovascular diseases, the leading cause of death in the United States. App-based lifestyle interventions have been utilized to promote physical activity in people with or at risk for chronic conditions. However, these mHealth tools have remained largely static and do not adapt to the changing behavior of the user. In a step toward designing adaptive interventions, we propose BeWell24Plus, a framework for monitoring activity and user engagement and for developing computational models for outcome prediction and intervention design. In particular, we focus on devising algorithms that combine data about physical activity and engagement with the app to predict future physical activity performance. Knowing in advance how active a person is going to be the next day can help with designing adaptive interventions that help individuals achieve their physical activity goals. Our technique combines the recent history of a person's physical activity with app engagement metrics such as when, how often, and for how long the app was used to forecast activity in the near future. We formulate the problem of multimodal activity forecasting and propose an LSTM-based realization of our proposed model architecture, which estimates physical activity outcomes in advance by examining the history of app usage and physical activity of the user. We demonstrate the effectiveness of our forecasting approach using data collected from 58 prediabetic people in a 9-month user study. We show that our multimodal forecasting approach outperforms single-modality forecasting by 2.2% to 11.1% in mean absolute error.
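The reported metric, mean absolute error, and a naive persistence baseline against which any forecaster can be compared, are sketched below with synthetic numbers; neither is the paper's LSTM.

```python
# Mean absolute error (the metric reported above) and a persistence
# baseline that predicts tomorrow's activity as today's value. Numbers
# are synthetic; the paper's actual model is an LSTM, not this baseline.

def mae(predicted, actual):
    # average magnitude of the forecast error, in the activity's own units
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def persistence_forecast(history):
    # common naive baseline for time-series forecasting
    return history[-1]
```

A learned multimodal forecaster earns its keep by beating baselines like this one on MAE, which is how the 2.2% to 11.1% improvement above is framed.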
  8. Stress detection and monitoring is an active area of research with important implications for an individual's personal, professional, and social health. Current approaches for stress classification use traditional machine learning algorithms trained on features computed from multiple sensor modalities. These methods are data- and computation-intensive, rely on hand-crafted features, and lack reproducibility. These limitations impede the practical use of stress detection and classification systems in the real world. To overcome these shortcomings, we propose Stressalyzer, a novel stress classification and personalization framework that uses single-modality sensor data without feature computation and selection. Stressalyzer uses only electrodermal activity (EDA) sensor data while providing results competitive with state-of-the-art techniques that use multiple sensor modalities and are computationally expensive due to the calculation of a large number of features. Using a dataset collected in a laboratory setting from 15 subjects, our single-channel neural-network-based model achieves a classification accuracy of 92.9% and an F1 score of 0.89 for binary stress classification. Our leave-one-subject-out analysis establishes the subjective nature of stress and shows that personalizing stress models using Stressalyzer significantly improves model performance. Without model personalization, we found a performance decline in 40% of the subjects, suggesting the need for personalization.
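The leave-one-subject-out protocol mentioned in the abstract is easy to make concrete: each subject in turn is held out for testing while the remaining subjects train the model. A minimal sketch, with placeholder subject IDs:

```python
# Minimal leave-one-subject-out (LOSO) split generator: every subject is
# held out exactly once; the remaining subjects form the training set.

def loso_splits(subject_ids):
    unique = sorted(set(subject_ids))
    for held_out in unique:
        train = [s for s in unique if s != held_out]
        yield train, held_out
```

Splitting by subject rather than by sample is what exposes the subjective nature of stress: a model that generalizes across random samples can still fail on an unseen person.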
  9. Automatic lying posture tracking is an important factor in human health monitoring. The increasing popularity of wrist-based trackers provides the means for unobtrusive, affordable, and long-term monitoring with minimized privacy concerns for end-users, with promising results in detecting the type of physical activity, counting steps, and assessing sleep quality. However, there is limited research on developing accurate and efficient lying posture tracking models using wrist-based sensors. Our experiments demonstrate a major drop in the accuracy of lying posture tracking with a wrist-based accelerometer due to unpredictable noise from arbitrary wrist movements and rotations during sleep. In this paper, we develop a deep transfer learning method that improves the performance of lying posture tracking on noisy wrist-sensor data by transferring knowledge from an initial setting that contains both clean and noisy data. The proposed solution learns an optimal mapping from the noisy data to the clean data in the initial setting using LSTM sequence regression and reconstructs clean synthesized data in another setting where no noisy sensor data are available. This increases the lying posture tracking F1 score by 24.9% for left-wrist and 18.1% for right-wrist sensors compared to the case without mapping.
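The noisy-to-clean mapping idea can be illustrated far more simply than with an LSTM: on paired data from the initial setting, estimate a correction and apply it where only noisy data exists. The constant-offset model below is a deliberately crude stand-in for the paper's sequence regression.

```python
# Crude stand-in for the paper's LSTM noisy-to-clean mapping: learn a
# constant offset from paired noisy/clean samples in the initial setting,
# then apply it to noisy-only data. Illustrative, not the actual model.

def fit_offset(noisy, clean):
    # mean discrepancy between the paired noisy and clean sequences
    return sum(c - n for n, c in zip(noisy, clean)) / len(noisy)

def denoise(noisy, offset):
    # apply the learned correction where no clean reference exists
    return [n + offset for n in noisy]
```

The paper's LSTM plays the same role as `fit_offset` here, but can capture time-varying, nonlinear wrist-motion noise that a constant correction cannot.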