Title: Investigation of Methods to Extract Fetal Electrocardiogram from the Mother’s Abdominal Signal in Practical Scenarios
Monitoring of the fetal electrocardiogram (fECG) would provide useful information about fetal wellbeing as well as any abnormal development during pregnancy. Recent advances in flexible electronics and wearable technologies have enabled compact devices to acquire personal physiological signals in the home setting, including those of expectant mothers. However, the high noise levels encountered in daily life pose long-standing challenges for extracting the fECG from the combined fetal/maternal ECG signal recorded on the mother's abdomen, so an efficient fECG extraction scheme is urgently needed. In this work, we systematically explored various extraction algorithms, including template subtraction (TS), independent component analysis (ICA), and the extended Kalman filter (EKF), using data from the PhysioNet 2013 Challenge. Furthermore, data modified by adding Gaussian and motion noise, mimicking practical scenarios, were used to examine the performance of the algorithms. Finally, we combined different algorithms, which yielded promising results: the best F1 score of 92.61% was achieved by an algorithm combining ICA and TS. On the data modified with different types of noise, the ICA–TS–ICA combination showed the highest F1 score of 85.4%. It should be noted that these combined approaches required higher computational cost, in both execution time and allocated memory, than the other methods. Through a comprehensive examination of different extraction algorithms across various evaluation metrics, this study provides insights into the implementation and operation of state-of-the-art fetal and maternal monitoring systems in the era of mobile health.
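To make the best-performing pipeline concrete, the following sketch illustrates an ICA-then-TS cascade on a multichannel abdominal recording: the channels are unmixed with FastICA, maternal R-peaks are detected in a dominant source, and an averaged maternal QRS template is subtracted from each source before fetal QRS detection. This is a minimal illustration under stated assumptions (sampling rate, source-selection rule, and window lengths), not the authors' implementation.

```python
# Minimal ICA -> template subtraction (TS) sketch for fECG extraction.
# Sampling rate, maternal-source selection rule, and window lengths are
# illustrative assumptions, not the paper's exact settings.
import numpy as np
from scipy.signal import find_peaks
from sklearn.decomposition import FastICA

def ica_ts_extract(abdominal, fs=1000):
    """abdominal: (n_samples, n_channels) array of abdominal ECG."""
    # 1) Unmix the abdominal channels into independent sources.
    ica = FastICA(n_components=abdominal.shape[1], random_state=0)
    sources = ica.fit_transform(abdominal)

    # 2) Pick a maternal-dominated source (largest kurtosis, used here purely
    #    as a stand-in for a proper maternal-channel selection rule).
    kurt = ((sources - sources.mean(0)) ** 4).mean(0) / sources.var(0) ** 2
    maternal = sources[:, np.argmax(kurt)]

    # 3) Template subtraction: average the maternal QRS beats and subtract the
    #    per-channel template at every detected maternal R-peak.
    peaks, _ = find_peaks(np.abs(maternal), distance=int(0.4 * fs))
    half = int(0.06 * fs)  # 60 ms on either side of the R-peak
    valid = [p for p in peaks if half <= p < len(maternal) - half]
    residual = sources.copy()
    for ch in range(sources.shape[1]):
        template = np.mean([sources[p - half:p + half, ch] for p in valid], axis=0)
        for p in valid:
            residual[p - half:p + half, ch] -= template

    # Fetal QRS complexes are then detected in the residual sources.
    return residual
```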
Award ID(s):
1917105
NSF-PAR ID:
10215665
Author(s) / Creator(s):
Date Published:
Journal Name:
Technologies
Volume:
8
Issue:
2
ISSN:
2227-7080
Page Range / eLocation ID:
33
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    The invasive method of fetal electrocardiogram (fECG) monitoring, with electrodes attached directly to the fetal scalp, is widely used. Because it carries potential risks such as infection, it is usually performed only during labor and only in rare cases. Recent advances in electronics and technologies have enabled fECG monitoring from the early stages of pregnancy through fECG extraction from the combined fetal/maternal ECG (f/mECG) signal recorded non-invasively in the abdominal area of the mother. However, cumbersome algorithms that require a reference maternal ECG, as well as heavy feature crafting, make out-of-clinic fECG monitoring in daily life not yet feasible. To address these challenges, we proposed a purely end-to-end deep learning model to detect fetal QRS complexes (i.e., the main spikes observed on a fetal ECG waveform). The model has a residual network (ResNet) architecture that adopts the novel 1-D octave convolution (OctConv) for learning multiple temporal frequency features, which in turn reduces memory and computational cost. Importantly, the model is capable of highlighting the contribution of the regions that are most prominent for the detection. To evaluate our approach, data from the PhysioNet 2013 Challenge with labeled QRS complex annotations were used in their original form, and the data were then modified with Gaussian and motion noise, mimicking real-world scenarios. The model achieves an F1 score of 91.1% while saving more than 50% of the computing cost with less than 2% performance degradation, demonstrating the effectiveness of our method. A sketch of the 1-D octave convolution idea appears below.
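    To make the OctConv idea concrete, the sketch below shows a minimal 1-D octave convolution block that keeps a high-frequency path at full temporal resolution and a low-frequency path at half resolution; the alpha ratio, kernel size, and channel counts are illustrative assumptions rather than the model's actual configuration.

```python
# Minimal 1-D octave convolution block (a sketch of the idea, not the
# authors' exact layer). Alpha, kernel size, and channel split are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=7, alpha=0.5):
        super().__init__()
        self.in_lo, self.out_lo = int(alpha * in_ch), int(alpha * out_ch)
        self.in_hi, self.out_hi = in_ch - self.in_lo, out_ch - self.out_lo
        pad = kernel_size // 2
        # Four paths: high->high, high->low, low->high, low->low.
        self.hh = nn.Conv1d(self.in_hi, self.out_hi, kernel_size, padding=pad)
        self.hl = nn.Conv1d(self.in_hi, self.out_lo, kernel_size, padding=pad)
        self.lh = nn.Conv1d(self.in_lo, self.out_hi, kernel_size, padding=pad)
        self.ll = nn.Conv1d(self.in_lo, self.out_lo, kernel_size, padding=pad)

    def forward(self, x_hi, x_lo):
        # x_hi: (B, C_hi, T) full resolution; x_lo: (B, C_lo, T // 2), T even.
        hi = self.hh(x_hi) + F.interpolate(self.lh(x_lo), size=x_hi.shape[-1])
        lo = self.ll(x_lo) + self.hl(F.avg_pool1d(x_hi, kernel_size=2))
        return hi, lo
```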
  2. Fetal electrocardiogram (fECG) assessment is essential throughout pregnancy to monitor the wellbeing and development of the fetus, and possibly to diagnose congenital heart defects. Due to the high noise incorporated in the abdominal ECG (aECG) signals, the extraction of the fECG has been challenging, and it is even more difficult when only one channel of aECG is available, e.g., in a compact patch device. In this paper, we propose a novel algorithm based on the ensemble Kalman filter (EnKF) for non-invasive fECG extraction from a single-channel aECG signal. To assess the performance of the proposed algorithm, we used our own clinical data, obtained from a pilot study with 10 subjects, each with a 20 min recording, and data from the PhysioNet 2013 Challenge bank with labeled QRS complex annotations. The proposed methodology achieves an average positive predictive value (PPV) of 97.59%, sensitivity (SE) of 96.91%, and F1-score of 97.25% on the PhysioNet 2013 Challenge bank. Our results also indicate that the proposed algorithm is reliable and effective, and that it outperforms the recently proposed extended Kalman filter (EKF) based algorithm.
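    The core of an EnKF is the ensemble analysis (update) step; a minimal stochastic version is sketched below in NumPy. The ensemble size, observation operator, and noise level are illustrative assumptions, and the paper's ECG-specific dynamic model is not reproduced here.

```python
# Minimal stochastic ensemble Kalman filter (EnKF) analysis step in NumPy.
# The observation operator H and noise variance are illustrative assumptions;
# the ECG-specific state model from the paper is not reproduced.
import numpy as np

def enkf_update(ensemble, y_obs, H, obs_var, rng=np.random.default_rng(0)):
    """ensemble: (N, d) forecast ensemble; y_obs: (m,) observation vector."""
    N = ensemble.shape[0]
    # Map each member to observation space and perturb the observation.
    Hx = ensemble @ H.T                              # (N, m)
    y_pert = y_obs + rng.normal(0.0, np.sqrt(obs_var), size=Hx.shape)

    # Sample covariances estimated from the ensemble spread.
    Xa = ensemble - ensemble.mean(axis=0)
    Ya = Hx - Hx.mean(axis=0)
    P_xy = Xa.T @ Ya / (N - 1)                       # cross-covariance (d, m)
    P_yy = Ya.T @ Ya / (N - 1) + obs_var * np.eye(Hx.shape[1])

    # Kalman gain and ensemble update.
    K = P_xy @ np.linalg.inv(P_yy)
    return ensemble + (y_pert - Hx) @ K.T
```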
  3. Identity authentication based on Doppler radar respiration sensing is gaining attention, as it requires neither contact nor line of sight and does not raise the privacy concerns associated with video imaging. Prior research demonstrating the recognition of individuals has been limited to isolated single-subject scenarios. When two equidistant subjects are present, identification is more challenging due to the interference of their respiration motion patterns in the reflected radar signal. In this research, respiratory-signature separation techniques are combined with machine learning (ML) classifiers for reliable subject identity authentication. An improved version of the dynamic segmentation algorithm (peak search and triangulation) was proposed, which can extract distinguishable airflow-profile-related features (exhale area, inhale area, inhale/exhale speed, and breathing depth) in medium-scale experiments with 20 participants, examining the feasibility of extracting an individual's respiratory features from a combined mixture of subject motions. Independent component analysis with the joint approximate diagonalization of eigenmatrices (ICA-JADE) algorithm was employed to isolate individual respiratory signatures from combined mixtures of breathing patterns. The extracted hyperfeature sets were then evaluated with two popular ML classifiers, k-nearest neighbors (KNN) and support vector machine (SVM), for subject authentication. Accuracies of 97.5% for two-subject experiments and 98.33% for single-subject experiments were achieved, which surpasses the performance of previously reported methods. The proposed identity authentication approach has several potential applications, including security/surveillance, Internet-of-Things (IoT) applications, virtual reality, and health monitoring.
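    The separation-then-classification idea can be sketched as follows, with FastICA standing in for ICA-JADE (JADE itself is not part of common Python libraries) and a deliberately simplified feature set; the feature definitions and classifier settings are illustrative assumptions.

```python
# Sketch: blind source separation of a two-subject radar mixture followed by
# KNN/SVM identity classification. FastICA is a stand-in for ICA-JADE, and
# the per-breath features are a simplified, illustrative version.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def separate_respiration(mixture):
    """mixture: (n_samples, n_channels) radar baseband channels."""
    ica = FastICA(n_components=2, random_state=0)
    return ica.fit_transform(mixture)          # (n_samples, 2) estimated sources

def breath_features(source, fs=20):
    """Crude per-source features: breathing depth and inhale/exhale areas."""
    pos, neg = source[source > 0], source[source < 0]
    return np.array([source.max() - source.min(),  # breathing depth
                     pos.sum() / fs,               # rough "inhale" area
                     -neg.sum() / fs])             # rough "exhale" area

def train_classifiers(X, y):
    """X: feature vectors from labeled training breaths; y: subject identities."""
    knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
    svm = SVC(kernel="rbf").fit(X, y)
    return knn, svm
```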
  4. Abstract

    Many studies of Earth surface processes and landscape evolution rely on having accurate and extensive data sets of surficial geologic units and landforms. Automated extraction of geomorphic features using deep learning provides an objective way to consistently map landforms over large spatial extents. However, there is no consensus on the optimal input feature space for such analyses. We explore the impact of input feature space for extracting geomorphic features from land surface parameters (LSPs) derived from digital terrain models (DTMs) using convolutional neural network (CNN)-based semantic segmentation deep learning. We compare four input feature space configurations: (a) a three-layer composite consisting of a topographic position index (TPI) calculated using a 50 m radius circular window, square root of topographic slope, and TPI calculated using an annulus with a 2 m inner radius and 10 m outer radius, (b) a single illuminating position hillshade, (c) a multidirectional hillshade, and (d) a slopeshade. We test each feature space input using three deep learning algorithms and four use cases: two with natural features and two with anthropogenic features. The three-layer composite generally provided lower overall losses for the training samples, a higher F1-score for the withheld validation data, and better performance for generalizing to withheld testing data from a new geographic extent. Results suggest that CNN-based deep learning for mapping geomorphic features or landforms from LSPs is sensitive to input feature space. Given the large number of LSPs that can be derived from DTM data and the variety of geomorphic mapping tasks that can be undertaken using CNN-based methods, we argue that additional research focused on feature space considerations is needed and suggest future research directions. We also suggest that the three-layer composite implemented here can offer better performance in comparison to using hillshades or other common terrain visualization surfaces and is, thus, worth considering for different mapping and feature extraction tasks.
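    As a concrete illustration, the sketch below assembles the three-layer composite (50 m circular-window TPI, square root of slope, and 2-10 m annulus TPI) from a DTM grid; the kernel construction, 1 m cell size, and slope units are simplified, illustrative assumptions rather than the exact processing used in the study.

```python
# Sketch of building the three-layer composite input from a DTM grid.
# Radii are converted to pixels assuming a 1 m cell size; the smoothing
# kernels are simplified, illustrative choices.
import numpy as np
from scipy import ndimage

def circular_mean(dtm, radius_px):
    yy, xx = np.mgrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    kernel = ((xx**2 + yy**2) <= radius_px**2).astype(float)
    return ndimage.convolve(dtm, kernel / kernel.sum(), mode="nearest")

def annulus_mean(dtm, r_in_px, r_out_px):
    yy, xx = np.mgrid[-r_out_px:r_out_px + 1, -r_out_px:r_out_px + 1]
    d2 = xx**2 + yy**2
    kernel = ((d2 >= r_in_px**2) & (d2 <= r_out_px**2)).astype(float)
    return ndimage.convolve(dtm, kernel / kernel.sum(), mode="nearest")

def composite_feature_space(dtm, cell_size=1.0):
    # Topographic position index: elevation minus mean elevation in a window.
    tpi_50 = dtm - circular_mean(dtm, int(50 / cell_size))
    tpi_ann = dtm - annulus_mean(dtm, int(2 / cell_size), int(10 / cell_size))
    # Slope from finite differences (degrees), then square-rooted.
    dy, dx = np.gradient(dtm, cell_size)
    sqrt_slope = np.sqrt(np.degrees(np.arctan(np.hypot(dx, dy))))
    return np.stack([tpi_50, sqrt_slope, tpi_ann], axis=-1)  # (rows, cols, 3)
```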

     
  5. Abstract
    Background: Diabetic retinopathy (DR) is a leading cause of blindness in American adults. If detected, DR can be treated to prevent further damage that causes blindness. There is increasing interest in developing artificial intelligence (AI) technologies to help detect DR using electronic health records. The lesion-related information documented in fundus image reports is a valuable resource that could help the diagnosis of DR in clinical decision support systems. However, most studies of AI-based DR diagnosis are based mainly on medical images; there are limited studies exploring the lesion-related information captured in free-text image reports.
    Methods: In this study, we examined two state-of-the-art transformer-based natural language processing (NLP) models, BERT and RoBERTa, and compared them with a recurrent neural network implemented using long short-term memory (LSTM), to extract DR-related concepts from clinical narratives. We identified four categories of DR-related clinical concepts (lesions, eye parts, laterality, and severity), developed annotation guidelines, annotated a DR corpus of 536 image reports, and developed transformer-based NLP models for clinical concept extraction and relation extraction. We also examined relation extraction under two settings: a ‘gold-standard’ setting, in which gold-standard concepts were used, and an end-to-end setting.
    Results: For concept extraction, the BERT model pretrained on the MIMIC III dataset achieved the best performance (0.9503 and 0.9645 for strict/lenient evaluation). For relation extraction, the BERT model pretrained on general English text achieved the best strict/lenient F1-score of 0.9316. The end-to-end system, BERT_general_e2e, achieved the best strict/lenient F1-scores of 0.8578 and 0.8881, respectively. Another end-to-end system based on the RoBERTa architecture, RoBERTa_general_e2e, achieved the same strict score as BERT_general_e2e.
    Conclusions: This study demonstrated the efficiency of transformer-based NLP models for clinical concept extraction and relation extraction. Our results show that it is necessary to pretrain transformer models on clinical text to optimize performance for clinical concept extraction, whereas for relation extraction, transformers pretrained on general English text perform better.
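    As an illustration of how concept extraction can be framed as transformer-based token classification, the sketch below tags each wordpiece with a BIO label over the four concept categories; the model checkpoint and label set are placeholders, not the fine-tuned models reported in the study.

```python
# Sketch of transformer-based clinical concept extraction as BIO token
# classification. Model name and label set are illustrative placeholders;
# the head is untrained here and would be fine-tuned on the annotated corpus.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

LABELS = ["O", "B-LESION", "I-LESION", "B-EYEPART", "I-EYEPART",
          "B-LATERALITY", "I-LATERALITY", "B-SEVERITY", "I-SEVERITY"]

model_name = "bert-base-uncased"  # placeholder; a clinical checkpoint in practice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=len(LABELS))

def extract_concepts(report_text):
    enc = tokenizer(report_text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits                  # (1, seq_len, n_labels)
    pred_ids = logits.argmax(dim=-1)[0].tolist()
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    # Pair each wordpiece with its predicted BIO tag.
    return list(zip(tokens, [LABELS[i] for i in pred_ids]))
```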