
Title: Investigation of Methods to Extract Fetal Electrocardiogram from the Mother’s Abdominal Signal in Practical Scenarios
Monitoring of the fetal electrocardiogram (fECG) provides useful information about fetal wellbeing and can reveal abnormal development during pregnancy. Recent advances in flexible electronics and wearable technologies have enabled compact devices to acquire personal physiological signals in the home setting, including those of expectant mothers. However, the high noise levels of daily life pose long-standing challenges in extracting the fECG from the combined fetal/maternal ECG signal recorded on the mother's abdomen, so an efficient fECG extraction scheme is urgently needed. In this work, we intensively explored various extraction algorithms, including template subtraction (TS), independent component analysis (ICA), and the extended Kalman filter (EKF), using data from the PhysioNet 2013 Challenge. Furthermore, data modified with added Gaussian and motion noise, mimicking a practical scenario, were used to examine the performance of the algorithms. Finally, we combined different algorithms, yielding promising results: the best F1 score of 92.61% was achieved by an algorithm combining ICA and TS. On the data modified by adding different types of noise, the ICA–TS–ICA combination showed the highest F1 score of 85.4%. It should be noted that these combined approaches required higher computational complexity, in both execution time and allocated memory, than the other methods. Owing to its comprehensive examination of various extraction algorithms across multiple evaluation metrics, this study provides insights into the implementation and operation of state-of-the-art fetal and maternal monitoring systems in the era of mobile health.
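To make the TS idea above concrete, here is a minimal sketch of template subtraction on synthetic signals: detect the (dominant) maternal R-peaks, average the beats into a template, and subtract that template at each peak so the smaller fetal component survives in the residual. The peak detector, window lengths, and signal model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def detect_r_peaks(sig, fs, min_dist_s=0.4):
    """Naive R-peak detector (illustrative): local maxima above an
    amplitude threshold, with a refractory distance between peaks."""
    thresh = 0.5 * np.max(sig)
    min_dist = int(min_dist_s * fs)
    peaks, last = [], -min_dist
    for i in range(1, len(sig) - 1):
        if (sig[i] > thresh and sig[i] >= sig[i - 1]
                and sig[i] > sig[i + 1] and i - last >= min_dist):
            peaks.append(i)
            last = i
    return np.array(peaks)

def template_subtract(abdominal, fs, half_win_s=0.25):
    """Average the maternal beats into a template and subtract it
    around each detected maternal R-peak; the residual approximates
    the fetal contribution (assumed window of +/-0.25 s per beat)."""
    w = int(half_win_s * fs)
    peaks = detect_r_peaks(abdominal, fs)
    peaks = peaks[(peaks >= w) & (peaks < len(abdominal) - w)]
    beats = np.stack([abdominal[p - w:p + w] for p in peaks])
    template = beats.mean(axis=0)  # fetal beats average out across windows
    residual = abdominal.copy()
    for p in peaks:
        residual[p - w:p + w] -= template
    return residual
```

Because the fetal beats fall at varying phases relative to the maternal ones, they largely cancel in the averaged template, so subtraction removes mostly maternal energy.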
Award ID(s):
1917105
Publication Date:
NSF-PAR ID:
10215665
Journal Name:
Technologies
Volume:
8
Issue:
2
Page Range or eLocation-ID:
33
ISSN:
2227-7080
Sponsoring Org:
National Science Foundation
More Like this
  1. The invasive method of fetal electrocardiogram (fECG) monitoring, with electrodes attached directly to the fetal scalp, is widely used. Because of potential risks such as infection, it is usually carried out only during labor and in rare cases. Recent advances in electronics and technologies have enabled fECG monitoring from the early stages of pregnancy through fECG extraction from the combined fetal/maternal ECG (f/mECG) signal recorded non-invasively in the abdominal area of the mother. However, cumbersome algorithms that require the reference maternal ECG, as well as heavy feature crafting, make out-of-clinic fECG monitoring in daily life not yet feasible. To address these challenges, we proposed a purely end-to-end deep learning model to detect fetal QRS complexes (i.e., the main spikes observed on a fetal ECG waveform). The model has a residual network (ResNet) architecture that adopts the novel 1-D octave convolution (OctConv) for learning multiple temporal-frequency features, which in turn reduces memory and computational cost. Importantly, the model is capable of highlighting the contribution of the regions that are most prominent for the detection. To evaluate our approach, data from the PhysioNet 2013 Challenge with labeled QRS complex annotations were used in their original form and then modified with Gaussian and motion noise, mimicking real-world scenarios. The model achieves an F1 score of 91.1% while saving more than 50% of the computing cost with less than 2% performance degradation, demonstrating the effectiveness of our method.
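The octave-convolution idea referenced above can be sketched in a toy single-channel, 1-D form: the signal is split into a high-frequency path at full temporal resolution and a low-frequency path at half resolution, and four convolutions exchange information between the paths (with pooling/upsampling where resolutions differ). This is a minimal illustration of the mechanism, not the authors' multi-channel ResNet model; all kernel and length choices are assumptions.

```python
import numpy as np

def conv1d_same(x, w):
    """'Same'-length 1-D convolution for a single channel."""
    pad = len(w) // 2
    return np.convolve(np.pad(x, pad), w, mode="valid")[:len(x)]

def avg_pool2(x):
    """Halve temporal resolution by averaging adjacent pairs (even length)."""
    return x.reshape(-1, 2).mean(axis=1)

def upsample2(x):
    """Double temporal resolution by nearest-neighbor repetition."""
    return np.repeat(x, 2)

def oct_conv1d(x_h, x_l, w_hh, w_hl, w_lh, w_ll):
    """Toy 1-D octave convolution: x_h is the full-rate path, x_l the
    half-rate path; the four kernels map H->H, H->L, L->H, L->L."""
    y_h = conv1d_same(x_h, w_hh) + upsample2(conv1d_same(x_l, w_lh))
    y_l = conv1d_same(avg_pool2(x_h), w_hl) + conv1d_same(x_l, w_ll)
    return y_h, y_l
```

Because the low-frequency path runs at half the sampling rate, its convolutions touch half as many samples, which is where the computational saving claimed above comes from.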
  2. Fetal electrocardiogram (fECG) assessment is essential throughout pregnancy to monitor the wellbeing and development of the fetus and to possibly diagnose potential congenital heart defects. Due to the high noise incorporated in abdominal ECG (aECG) signals, the extraction of the fECG has been challenging, and it is even more difficult when only one channel of aECG is available, e.g., in a compact patch device. In this paper, we propose a novel algorithm based on the ensemble Kalman filter (EnKF) for non-invasive fECG extraction from a single-channel aECG signal. To assess the performance of the proposed algorithm, we used our own clinical data, obtained from a pilot study with 10 subjects with 20 min of recording each, and data from the PhysioNet 2013 Challenge bank with labeled QRS complex annotations. On the PhysioNet 2013 Challenge bank, the proposed methodology shows an average positive predictive value (PPV) of 97.59%, sensitivity (SE) of 96.91%, and F1-score of 97.25%. Our results also indicate that the proposed algorithm is reliable and effective, and that it outperforms the recently proposed extended Kalman filter (EKF) based algorithm.
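For readers unfamiliar with the EnKF mentioned above, here is a minimal sketch of its mechanics on a toy scalar model (a random-walk state observed directly in noise), not the paper's ECG dynamical model: each ensemble member is propagated with process noise, the Kalman gain is computed from the ensemble variance, and each member is updated against a perturbed observation. All model parameters are illustrative assumptions.

```python
import numpy as np

def enkf_step(ensemble, y, obs_std, proc_std, rng):
    """One EnKF cycle for a scalar random-walk state x_k = x_{k-1} + w,
    observed as y_k = x_k + v (toy model, for illustration only)."""
    # Forecast: propagate each member with process noise
    ens = ensemble + rng.normal(0.0, proc_std, size=ensemble.shape)
    # Kalman gain from the ensemble's sample variance
    P = np.var(ens, ddof=1)
    K = P / (P + obs_std ** 2)
    # Analysis: update each member against a perturbed observation
    perturbed = y + rng.normal(0.0, obs_std, size=ens.shape)
    return ens + K * (perturbed - ens)
```

The appeal over the EKF is that no Jacobians are needed: the error covariance is estimated empirically from the ensemble, which extends naturally to nonlinear beat models.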
  3. Identity authentication based on Doppler radar respiration sensing is gaining attention as it requires neither contact nor line of sight and does not raise the privacy concerns associated with video imaging. Prior research demonstrating the recognition of individuals has been limited to isolated single-subject scenarios. When two equidistant subjects are present, identification is more challenging due to the interference of respiration motion patterns in the reflected radar signal. In this research, respiratory-signature separation techniques are combined with machine learning (ML) classifiers for reliable subject identity authentication. An improved version of the dynamic segmentation algorithm (peak search and triangulation) was proposed, which can extract distinguishable airflow-profile-related features (exhale area, inhale area, inhale/exhale speed, and breathing depth); medium-scale experiments with 20 different participants examined the feasibility of extracting an individual's respiratory features from a combined mixture of subjects' motions. Independent component analysis with the joint approximate diagonalization of eigenmatrices (ICA-JADE) algorithm was employed to isolate individual respiratory signatures from combined mixtures of breathing patterns. The extracted hyper-feature sets were then evaluated with two popular ML classifiers, k-nearest neighbors (KNN) and support vector machine (SVM), for subject authentication. Accuracies of 97.5% for two-subject experiments and 98.33% for single-subject experiments were achieved, surpassing the performance of previously reported methods. The proposed identity authentication approach has several potential applications, including security/surveillance, Internet-of-Things (IoT) applications, virtual reality, and health monitoring.
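The ICA separation step above can be illustrated with a compact numpy implementation. Note this sketch uses the FastICA fixed-point algorithm (with whitening, a tanh nonlinearity, and symmetric decorrelation) as a stand-in for JADE, which is more involved; the two breathing-like source waveforms and the mixing matrix are assumptions for the demo.

```python
import numpy as np

def fast_ica(X, n_iter=200, seed=0):
    """Symmetric FastICA (tanh nonlinearity) on centered, whitened data.
    X: (n_signals, n_samples) mixed observations.
    Returns estimated sources, up to permutation and sign."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten via eigendecomposition of the sample covariance
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
    n = Z.shape[0]
    W = rng.normal(size=(n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        g_prime = 1.0 - G ** 2
        # Fixed-point update: E[z g(w^T z)] - E[g'(w^T z)] w, per row
        W_new = G @ Z.T / Z.shape[1] - np.diag(g_prime.mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W^T)^(-1/2) W
        u, _, vt = np.linalg.svd(W_new)
        W = u @ vt
    return W @ Z
```

As in the radar setting above, the separated components can only be recovered up to ordering and sign, so downstream classifiers must match components to subjects, e.g., by feature similarity.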
  4. Abstract Background Diabetic retinopathy (DR) is a leading cause of blindness in American adults. If detected, DR can be treated to prevent further damage and blindness. There is increasing interest in developing artificial intelligence (AI) technologies to help detect DR using electronic health records. The lesion-related information documented in fundus image reports is a valuable resource that could help the diagnosis of DR in clinical decision support systems. However, most studies of AI-based DR diagnosis are based mainly on medical images; few studies have explored the lesion-related information captured in free-text image reports. Methods In this study, we examined two state-of-the-art transformer-based natural language processing (NLP) models, BERT and RoBERTa, and compared them with a recurrent neural network implemented using long short-term memory (LSTM) for extracting DR-related concepts from clinical narratives. We identified four categories of DR-related clinical concepts (lesions, eye parts, laterality, and severity), developed annotation guidelines, annotated a DR corpus of 536 image reports, and developed transformer-based NLP models for clinical concept extraction and relation extraction. We also examined relation extraction under two settings: a 'gold-standard' setting, where gold-standard concepts were used, and an end-to-end setting. Results For concept extraction, the BERT model pretrained with the MIMIC III dataset achieved the best performance (0.9503 and 0.9645 for strict/lenient evaluation). For relation extraction, the BERT model pretrained on general English text achieved the best strict/lenient F1-score of 0.9316. The end-to-end system, BERT_general_e2e, achieved the best strict/lenient F1-scores of 0.8578 and 0.8881, respectively. Another end-to-end system based on the RoBERTa architecture, RoBERTa_general_e2e, achieved the same strict-score performance as BERT_general_e2e.
Conclusions This study demonstrated the efficiency of transformer-based NLP models for clinical concept extraction and relation extraction. Our results show that it is necessary to pretrain transformer models on clinical text to optimize performance for clinical concept extraction, whereas for relation extraction, transformers pretrained on general English text perform better.
  5. Abstract Objective We develop natural language processing (NLP) methods capable of accurately classifying tumor attributes from pathology reports given minimal labeled examples. Our hierarchical cancer-to-cancer transfer (HCTC) and zero-shot string similarity (ZSS) methods are designed to exploit shared information between cancers and auxiliary class features, respectively, to boost performance using enriched annotations that give both location-based information and document-level labels for each pathology report. Materials and Methods Our data consist of 250 pathology reports each for kidney, colon, and lung cancer from 2002 to 2019 from a single institution (UCSF). For each report, we classified 5 attributes: procedure, tumor location, histology, grade, and presence of lymphovascular invasion. We developed novel NLP techniques involving transfer learning and string similarity trained on enriched annotations, and compared HCTC and ZSS to the state of the art, including conventional machine learning methods as well as deep learning methods. Results For our HCTC method, we see an improvement of up to 0.1 micro-F1 and 0.04 macro-F1 averaged across cancers and applicable attributes. For our ZSS method, we see an improvement of up to 0.26 micro-F1 and 0.23 macro-F1 averaged across cancers and applicable attributes. These comparisons are made after adjusting training data sizes to correct for the 20% increase in annotation time for enriched annotations compared with ordinary annotations. Conclusions Transfer learning across cancers and augmenting extraction methods with string-similarity priors can significantly reduce the amount of labeled data needed for accurate information extraction from pathology reports.
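The intuition behind string-similarity-based zero-shot classification above can be sketched with the standard library: pick the attribute class whose name best string-matches some span of the report, requiring no labeled examples for that class. This is only an illustration of the idea using `difflib`; the paper's ZSS method and its class vocabularies are more sophisticated, and the example report and class names below are invented for the demo.

```python
from difflib import SequenceMatcher

def zero_shot_classify(report_text, class_names):
    """Return the class whose name best matches a local span of the
    report, scored by difflib's similarity ratio (toy ZSS sketch)."""
    def best_local_ratio(name, text):
        name, text = name.lower(), text.lower()
        n = len(name)
        if n >= len(text):
            return SequenceMatcher(None, name, text).ratio()
        # Slide a name-sized window over the report, keep the best match
        return max(SequenceMatcher(None, name, text[i:i + n]).ratio()
                   for i in range(len(text) - n + 1))
    return max(class_names, key=lambda c: best_local_ratio(c, report_text))
```

Because the class name itself acts as the auxiliary feature, a new attribute value can be recognized the moment it is added to the class list, which is what makes the approach "zero-shot".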