- Award ID(s):
- 1650474
- NSF-PAR ID:
- 10091248
- Date Published:
- Journal Name:
- IEEE International Conference on Big Data (Big Data)
- Page Range / eLocation ID:
- 4984 to 4993
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Arrhythmia is an abnormal heart rhythm that occurs due to the improper operation of the electrical impulses that coordinate the heartbeats. It is one of the most well-known heart conditions (alongside coronary artery disease, heart failure, etc.) and is experienced by millions of people around the world. While there are several types of arrhythmias, not all of them are dangerous or harmful. However, some arrhythmias can lead to death within minutes (e.g., ventricular fibrillation and ventricular tachycardia), even in young people. Thus, the detection of arrhythmia is critical for stopping and reversing its progression and for increasing longevity and life quality. While a doctor can perform different heart-monitoring tests specific to arrhythmias, the electrocardiogram (ECG) is one of the most common, used either independently or in combination with other tests (some only detect arrhythmia, e.g., echocardiogram; others trigger and then detect it, e.g., stress test). We propose a machine learning approach that augments traditional arrhythmia detection via our automatic arrhythmia classification system. It utilizes the texture of the ECG signal in both the temporal and spectro-temporal domains to detect and classify four types of heartbeats. The original ECG signal is first preprocessed, and then the R-peaks associated with heartbeat estimation are identified. Next, 1D local binary patterns (LBP) in the temporal domain are utilized, while 2D LBPs and texture-based features extracted by a gray-level co-occurrence matrix (GLCM) are utilized in the spectro-temporal domain using the short-time Fourier transform (STFT) and Morse wavelets. Finally, different classifiers, as well as different ECG lead configurations, are examined before we determine our proposed time-frequency SVM model, which obtains a maximum accuracy of 99.81%, sensitivity of 98.17%, and specificity of 99.98% under 10-fold cross-validation on the MIT-BIH database.
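The 1D-LBP step mentioned above can be illustrated with a minimal NumPy sketch: each sample is compared against its neighbors on both sides, the comparisons form a binary code, and the histogram of codes becomes a texture feature vector. This is a simplified illustration of the general technique, not the authors' implementation; the function name and radius are hypothetical.

```python
import numpy as np

def lbp_1d(signal, radius=4):
    """1D local binary patterns: encode each sample by comparing
    `radius` neighbors on each side against the center value."""
    n = len(signal)
    codes = []
    for i in range(radius, n - radius):
        center = signal[i]
        neighbors = np.concatenate([signal[i - radius:i],
                                    signal[i + 1:i + 1 + radius]])
        bits = (neighbors >= center).astype(int)   # 2*radius binary digits
        codes.append(int("".join(map(str, bits)), 2))
    # The normalized histogram of pattern codes is the texture feature vector
    n_bins = 2 ** (2 * radius)
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Toy heartbeat-like segment
beat = np.sin(np.linspace(0, 2 * np.pi, 64))
features = lbp_1d(beat)
```

In the paper's pipeline, histograms like this (per heartbeat segment) would then be fed to a classifier such as the SVM.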
-
Vital signs (e.g., heart and respiratory rate) are indicative of health status. Efforts have been made to extract vital signs using radio frequency (RF) techniques (e.g., Wi-Fi, FMCW, UWB), which offer a non-touch solution for continuous and ubiquitous monitoring without users' cooperative efforts. While RF-based vital signs monitoring is user-friendly, its robustness faces two challenges. On the one hand, the RF signal is modulated by the periodic chest wall displacement due to heartbeat and breathing in a nonlinear manner. It is inherently hard to identify the fundamental heart and respiratory rates (HR and RR) in the presence of their higher-order harmonics and the intermodulation between HR and RR, especially when they have overlapping frequency bands. On the other hand, inadvertent body movements may disturb and distort the RF signal, overwhelming the vital signals and thus inhibiting the parameter estimation of the physiological movement (i.e., heartbeat and breathing). In this paper, we propose DeepVS, a deep learning approach that addresses the aforementioned challenges of non-linearity and inadvertent movements for robust RF-based vital signs sensing in a unified manner. DeepVS combines 1D CNN and attention models to exploit local features and temporal correlations. Moreover, it leverages a two-stream scheme to integrate features from both time and frequency domains. Additionally, DeepVS unifies the estimation of HR and RR with a multi-head structure, which adds only limited extra overhead (<1%) to the existing model, compared to doubling the overhead using two separate models for HR and RR, respectively. Our experiments demonstrate that DeepVS achieves 80-percentile HR/RR errors of 7.4/4.9 beats/breaths per minute (bpm) on a challenging dataset, as compared to 11.8/7.3 bpm for a non-learning solution. In addition, an ablation study has been conducted to quantify the effectiveness of DeepVS.
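The two-stream and multi-head ideas described above can be sketched in a few lines of NumPy. This is a deliberately simplified stand-in for DeepVS (which uses a 1D CNN with attention): the trunk here is a single dense layer, and all function and weight names are hypothetical. The point it illustrates is structural — both heads share one trunk, so the second head adds only a small weight matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_stream_features(rf_signal):
    """Build the two input streams: normalized time-domain window and
    its frequency-domain magnitude spectrum (simplified two-stream front end)."""
    time_stream = (rf_signal - rf_signal.mean()) / (rf_signal.std() + 1e-8)
    freq_stream = np.abs(np.fft.rfft(time_stream))
    return time_stream, freq_stream

def multi_head(features, w_trunk, w_hr, w_rr):
    """Shared trunk + two lightweight heads: the RR head reuses the same
    trunk representation as the HR head, adding only its own weights."""
    hidden = np.tanh(features @ w_trunk)   # shared representation
    hr = hidden @ w_hr                     # heart-rate head
    rr = hidden @ w_rr                     # respiratory-rate head
    return hr, rr

# Synthetic chest-displacement signal: 0.25 Hz breathing + 1.2 Hz heartbeat
t = np.arange(0, 30, 0.05)
sig = np.sin(2 * np.pi * 0.25 * t) + 0.2 * np.sin(2 * np.pi * 1.2 * t)

t_s, f_s = two_stream_features(sig)
feats = np.concatenate([t_s[:64], f_s[:64]])   # truncated streams, concatenated
w_trunk = rng.normal(size=(128, 16))
w_hr = rng.normal(size=(16, 1))
w_rr = rng.normal(size=(16, 1))
hr, rr = multi_head(feats, w_trunk, w_hr, w_rr)
```

The overhead argument follows directly: `w_rr` is the only parameter block unique to the RR head, which is small relative to the shared trunk.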
-
Background: Monitoring glucose excursions is important in diabetes management. This can be achieved using continuous glucose monitors (CGMs). However, CGMs are expensive and invasive. Thus, alternative low-cost, noninvasive wearable sensors capable of predicting glycemic excursions could be a game changer in diabetes management. Methods: In this article, we explore two noninvasive sensor modalities, electrocardiograms (ECGs) and accelerometers, collected from five healthy participants over two weeks, to predict both hypoglycemic and hyperglycemic excursions. We extract 29 features, encompassing heart rate variability features from the ECG and time- and frequency-domain features from the accelerometer. We evaluated two machine-learning approaches to predict glycemic excursions: a classification model and a regression model. Results: The best model for both hypoglycemia and hyperglycemia detection was the regression model based on ECG and accelerometer data, yielding 76% sensitivity and specificity for hypoglycemia and 79% sensitivity and specificity for hyperglycemia. This represents a 5% improvement in sensitivity and specificity for both hypoglycemia and hyperglycemia compared with using ECG data alone. Conclusions: The electrocardiogram is a promising alternative not only to detect hypoglycemia but also to predict hyperglycemia. Supplementing ECG data with contextual information from accelerometer data can improve glucose prediction.
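The heart rate variability features mentioned above are standard quantities computed from the R-R intervals of the ECG. The sketch below computes a small, hypothetical subset of such features (the article extracts 29 in total; this is illustrative, not the authors' feature set).

```python
import numpy as np

def hrv_features(rr_intervals_ms):
    """A few standard heart-rate-variability features from R-R intervals (ms)."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),        # average heart rate
        "sdnn": rr.std(ddof=1),                    # overall variability
        "rmssd": np.sqrt(np.mean(diff ** 2)),      # short-term variability
        "pnn50": np.mean(np.abs(diff) > 50) * 100, # % successive diffs > 50 ms
    }

# Toy R-R interval series (ms)
feats = hrv_features([800, 810, 790, 805, 900, 795])
```

Feature dictionaries like this, concatenated with accelerometer features, would form the input to the classification or regression model.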
-
To obtain more consistent measurements through the course of a wheat growing season, we conceived and designed an autonomous robotic platform that performs collision avoidance while navigating in crop rows using spatial artificial intelligence (AI). The main constraint the agronomists have is to not run over the wheat while driving. Accordingly, we have trained a spatial deep learning model that helps the robot navigate autonomously in the field while avoiding collisions with the wheat. To train this model, we used publicly available databases of prelabeled images of wheat, along with images of wheat that we collected in the field. We used the MobileNet single shot detector (SSD) as our deep learning model to detect wheat in the field. To increase the frame rate for real-time robot response to field environments, we trained MobileNet SSD on the wheat images and used a new stereo camera, the Luxonis Depth AI Camera. Together, the newly trained model and camera could achieve a frame rate of 18–23 frames per second (fps)—fast enough for the robot to process its surroundings once every 2–3 inches of driving. Once we had confirmed that the robot accurately detects its surroundings, we addressed its autonomous navigation. The new stereo camera allows the robot to determine its distance from the trained objects. In this work, we also developed a navigation and collision avoidance algorithm that utilizes this distance information to help the robot see its surroundings and maneuver in the field, thereby precisely avoiding collisions with the wheat crop. Extensive experiments were conducted to evaluate the performance of our proposed method. We also compared the quantitative results obtained by our proposed MobileNet SSD model with those of other state-of-the-art object detection models, such as the YOLOv5 and Faster region-based convolutional neural network (R-CNN) models.
The detailed comparative analysis reveals the effectiveness of our method in terms of both model precision and inference speed.
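A distance-based collision-avoidance rule of the kind described above can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' algorithm): detections are assumed to arrive as (label, distance, bearing) tuples from the stereo-camera pipeline, and the thresholds and steering angles are invented for the example.

```python
def avoidance_command(detections, stop_m=0.3, slow_m=0.8):
    """Return (speed_fraction, steer_deg) given stereo-camera detections.

    detections: list of (label, distance_m, bearing_deg) tuples; positive
    bearing means the object is to the robot's right.
    """
    wheat = [d for d in detections if d[0] == "wheat"]
    if not wheat:
        return 1.0, 0.0                       # nothing detected: full speed ahead
    _, dist, bearing = min(wheat, key=lambda d: d[1])  # nearest wheat plant
    if dist < stop_m:
        return 0.0, 0.0                       # too close: stop
    if dist < slow_m:
        # slow down and steer away from the side the nearest plant is on
        return 0.4, (-15.0 if bearing > 0 else 15.0)
    return 1.0, 0.0                           # wheat is far: keep driving

# Nearest wheat at 0.6 m on the right: slow down, steer left
cmd = avoidance_command([("wheat", 0.6, 10.0), ("wheat", 1.5, -20.0)])
```

At 18–23 fps the rule would be re-evaluated every 2–3 inches of travel, which is what makes a simple reactive policy like this viable in a crop row.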
-
Recent developments in micro-sensors and flexible electronics allow for the manufacturing of health monitoring devices, including electrocardiogram (ECG) detection systems for inpatient monitoring and ambulatory health diagnosis, by mounting the device on the chest. Although some commercial devices reported in the literature demonstrate portable ECG recording, they lose valuable data due to significant motion artifacts. Here, a new class of strain-isolating materials, hybrid interfacial physics, and soft material packaging for a strain-isolated, wearable soft bioelectronic system (SIS) is reported. The fundamental mechanism of sensor-embedded strain isolation is defined through a combination of analytical and computational studies and validated by dynamic experiments. Comprehensive study of hard-soft material integration and isolation mechanics provides critical design features to minimize motion artifacts that can occur during both mild and excessive daily activities. A wireless, fully integrated SIS that incorporates a breathable, perforated membrane can measure real-time, continuous physiological data, including high-quality ECG, heart rate, respiratory rate, and activity. In vivo demonstration with multiple subjects and simultaneous comparison with commercial devices captures the SIS's outstanding performance, offering real-world, continuous monitoring of critical physiological signals with no data loss over eight consecutive hours of daily life, even with exaggerated body movements.