Title: A Graph-Constrained Changepoint Learning Approach for Automatic QRS-Complex Detection
This study presents a new viewpoint on ECG signal analysis by applying a graph-constrained changepoint detection model to locate R-peak positions. The model is built on a new graph learning algorithm that learns the constraint graph from labeled ECG data: starting from a simple initial graph, it iteratively edits the graph so that the final graph yields the highest R-peak detection accuracy. We evaluate the algorithm on the MIT-BIH Arrhythmia Database, and the results show that the proposed method obtains results comparable to other state-of-the-art approaches, achieving an overall sensitivity of Sen = 99.64%, positive predictivity of PPR = 99.71%, and detection error rate of DER = 0.19.
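To make the learning procedure concrete, below is a minimal sketch of the greedy graph-editing loop the abstract describes. The helper callables propose_edits, detect_rpeaks, and score are hypothetical stand-ins supplied by the caller, not the paper's API; the sketch only illustrates the idea of keeping edits that improve R-peak detection accuracy.

    # Hypothetical sketch only: propose_edits, detect_rpeaks, and score are
    # stand-in callables supplied by the caller, not the paper's API.
    def learn_constraint_graph(initial_graph, ecg_signal, true_rpeaks,
                               propose_edits, detect_rpeaks, score, max_iters=50):
        """Greedily edit the constraint graph while R-peak detection accuracy improves."""
        best_graph = initial_graph
        best_score = score(detect_rpeaks(ecg_signal, best_graph), true_rpeaks)
        for _ in range(max_iters):
            improved = False
            for candidate in propose_edits(best_graph):   # e.g. add/remove a graph edge
                s = score(detect_rpeaks(ecg_signal, candidate), true_rpeaks)
                if s > best_score:                        # keep only accuracy-improving edits
                    best_graph, best_score, improved = candidate, s, True
            if not improved:                              # stop when no edit helps
                break
        return best_graph, best_score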
Award ID(s):
1657260
PAR ID:
10233269
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Asilomar Conference on Signals, Systems, and Computers ASILOMAR
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    The electrocardiogram (ECG) is the most commonly used non-invasive tool in the assessment of cardiovascular diseases. Segmentation of the ECG signal to locate its constituent waves, in particular the R-peaks, is a key step in ECG processing and analysis. Over the years, several segmentation and QRS-complex detection algorithms with different features have been proposed; however, their performance depends heavily on preprocessing steps, which makes them unreliable for real-time data analysis in ambulatory care settings and remote monitoring systems, where the collected data is highly noisy. Moreover, current algorithms still struggle with the diverse morphological categories of the ECG signal and carry a high computational cost. In this paper, we introduce a novel graph-constrained optimal changepoint detection (GCCD) method for reliable detection of R-peak positions without any preprocessing step. The proposed model is guaranteed to compute the globally optimal changepoint solution. It is also generic in nature and can be applied to other time-series biomedical signals. On the MIT-BIH arrhythmia (MIT-BIH-AR) database, the proposed method achieves an overall sensitivity of Sen = 99.76%, positive predictivity of PPR = 99.68%, and detection error rate of DER = 0.55, which are comparable to other state-of-the-art approaches.
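    For intuition, the following is a minimal sketch of globally optimal changepoint detection via the classic optimal-partitioning dynamic program with a squared-error segment cost and a fixed per-changepoint penalty. It deliberately omits the graph constraint that distinguishes GCCD, and the cost model and penalty are illustrative assumptions rather than the paper's formulation.

        import numpy as np

        def optimal_partitioning(y, penalty):
            """Globally optimal segmentation of 1-D signal y under a squared-error
            cost plus a fixed penalty per changepoint (O(n^2) dynamic program)."""
            y = np.asarray(y, dtype=float)
            n = len(y)
            cs = np.concatenate(([0.0], np.cumsum(y)))
            cs2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

            def seg_cost(i, j):                 # SSE of fitting one mean to y[i:j]
                s, s2, m = cs[j] - cs[i], cs2[j] - cs2[i], j - i
                return s2 - s * s / m

            F = np.full(n + 1, np.inf)
            F[0] = -penalty
            prev = np.zeros(n + 1, dtype=int)
            for t in range(1, n + 1):
                costs = [F[s] + seg_cost(s, t) + penalty for s in range(t)]
                prev[t] = int(np.argmin(costs))
                F[t] = costs[prev[t]]
            cps, t = [], n                      # backtrack segment boundaries
            while t > 0:
                cps.append(t)
                t = prev[t]
            return sorted(cps)                  # boundaries, including the series end

        # e.g. optimal_partitioning([0, 0, 0, 5, 5, 5, 0, 0], penalty=1.0) -> [3, 6, 8]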
  2. The COVID-19 pandemic has intensified the need for home-based cardiac health monitoring systems. Despite advancements in electrocardiogram (ECG) and phonocardiogram (PCG) wearable sensors, accurate heart sound segmentation algorithms remain understudied. Existing deep learning models, such as convolutional neural networks (CNN) and recurrent neural networks (RNN), struggle to segment noisy signals using only PCG data. We propose a two-step heart sound segmentation algorithm that analyzes synchronized ECG and PCG signals. The first step performs heartbeat detection with a CNN-LSTM model on ECG data, and the second step performs beat-wise heart sound segmentation with a 1D U-Net that takes multi-modal inputs. Our method leverages the temporal correlation between ECG and PCG signals to enhance segmentation performance. To address the scarcity of labeled data in AI-supported biomedical studies, we introduce a segment-wise contrastive learning technique for signal segmentation, overcoming the limitations of traditional contrastive learning methods designed for classification tasks. We evaluated the two-step algorithm on the PhysioNet 2016 dataset and a private dataset from Bayland Scientific, obtaining a 96.43 F1 score on the former. Notably, the segment-wise contrastive learning technique remained effective with limited labeled data: when trained on just 1% of the labeled PhysioNet data, the model pre-trained on the full unlabeled dataset dropped only 2.88 F1 points, outperforming the SimCLR method. Overall, the proposed algorithm and learning technique show promise for improving heart sound segmentation and reducing the need for labeled data.
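    A minimal PyTorch sketch of the two-step architecture described above follows. The layer sizes, the bidirectional LSTM, the four-class output, and the single skip connection are illustrative assumptions rather than the paper's configuration, and the contrastive pre-training step is omitted.

        import torch
        import torch.nn as nn

        class BeatDetector(nn.Module):
            """Step 1: CNN-LSTM that scores each ECG sample for heartbeat presence."""
            def __init__(self, hidden=64):
                super().__init__()
                self.cnn = nn.Sequential(
                    nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
                    nn.Conv1d(32, 32, kernel_size=7, padding=3), nn.ReLU())
                self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
                self.head = nn.Linear(2 * hidden, 1)

            def forward(self, ecg):                    # ecg: (batch, 1, time)
                feats = self.cnn(ecg).transpose(1, 2)  # -> (batch, time, 32)
                out, _ = self.lstm(feats)
                return self.head(out).squeeze(-1)      # per-sample heartbeat logits

        class BeatwiseSegmenter(nn.Module):
            """Step 2: small 1D U-Net-style encoder-decoder over a synchronized
            ECG+PCG beat window, predicting a heart-sound class per sample."""
            def __init__(self, n_classes=4):
                super().__init__()
                self.enc = nn.Sequential(nn.Conv1d(2, 32, 9, padding=4), nn.ReLU(),
                                         nn.MaxPool1d(2))
                self.mid = nn.Sequential(nn.Conv1d(32, 64, 9, padding=4), nn.ReLU())
                self.up = nn.Upsample(scale_factor=2, mode="nearest")
                self.dec = nn.Conv1d(64 + 2, n_classes, 9, padding=4)

            def forward(self, ecg_pcg):                # ecg_pcg: (batch, 2, time), even time
                x = self.up(self.mid(self.enc(ecg_pcg)))
                x = torch.cat([x, ecg_pcg], dim=1)     # skip connection with raw inputs
                return self.dec(x)                     # per-sample class logits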
  3. This paper first proposes a method of formulating model interpretability in visual understanding tasks based on the idea of unfolding latent structures. It then presents a case study in object detection using popular two-stage region-based convolutional neural network (R-CNN) detection systems. The proposed method focuses on weakly-supervised extractive rationale generation, that is, learning to unfold latent discriminative part configurations of object instances automatically and simultaneously during detection, without using any supervision for part configurations. It utilizes a top-down hierarchical and compositional grammar model embedded in a directed acyclic AND-OR Graph (AOG) to explore and unfold the space of latent part configurations of regions of interest (RoIs). It presents an AOGParsing operator that integrates seamlessly with the RoIPooling/RoIAlign operators widely used in R-CNN and is trained end-to-end. In object detection, a bounding box is interpreted by the best parse tree derived on-the-fly from the AOG, which is treated as the extractive rationale generated for interpreting the detection. In experiments, Faster R-CNN is used to test the proposed method on the PASCAL VOC 2007 and COCO 2017 object detection datasets. The experimental results show that the proposed method can compute promising latent structures without hurting detection performance.
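    As a toy illustration of deriving a best parse tree from an AND-OR graph, the sketch below scores OR nodes by their best child and AND nodes by composing all children. The dictionary-based node representation and the terminal scores are hypothetical and are not the paper's AOGParsing operator.

        # OR nodes pick their highest-scoring child; AND nodes sum all children.
        def best_parse(node, scores):
            """Return (score, parse_tree) for `node` given per-terminal `scores`."""
            kind, name = node["type"], node["name"]
            children = node.get("children", [])
            if kind == "terminal":              # e.g. a RoI-pooled part configuration
                return scores[name], name
            results = [best_parse(c, scores) for c in children]
            if kind == "or":                    # OR: select the single best alternative
                s, tree = max(results, key=lambda r: r[0])
                return s, {name: tree}
            # AND: compose all children into one configuration
            return sum(r[0] for r in results), {name: [r[1] for r in results]}

        # Tiny hand-made AOG: an object is either a whole template or two parts.
        aog = {"type": "or", "name": "object", "children": [
            {"type": "and", "name": "two_parts", "children": [
                {"type": "terminal", "name": "left_part"},
                {"type": "terminal", "name": "right_part"}]},
            {"type": "terminal", "name": "whole"}]}
        print(best_parse(aog, {"left_part": 0.4, "right_part": 0.5, "whole": 0.7}))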
  4.
    The growing demand for recording longer ECG signals to improve the effectiveness of IoT-enabled remote clinical healthcare is generating large amounts of ECG data. While lossy compression techniques have shown potential to significantly lower data volume, how to trade off data reduction against data fidelity for ECG data has received relatively little attention. This paper gives insight into the power of lossy compression for ECG signals by balancing data quality against compression ratio. We evaluate the performance of transform-based lossy compression on ECG datasets collected from Biosemi ActiveTwo devices. Our experimental results indicate that ECG data exhibit high energy compaction under transforms such as the DCT and DWT, so compression ratios can be improved significantly without much loss of data fidelity. More importantly, we evaluate the effect of lossy compression on ECG signals by validating the R-peaks in the QRS complex. Our method obtains low error rates measured in PRD (as low as 0.3) and PSNR (up to 67) using only 5% of the transform coefficients. R-peaks in the reconstructed ECG signals are therefore almost identical to those in the original signals, facilitating extended ECG monitoring.
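    The sketch below illustrates the kind of transform-based compression and fidelity check described above: keep only the largest 5% of DCT coefficients, reconstruct, and measure PRD and PSNR. It uses SciPy's DCT rather than the paper's exact pipeline, and the synthetic test signal and thresholding rule are assumptions for illustration.

        import numpy as np
        from scipy.fft import dct, idct

        def dct_compress(signal, keep_ratio=0.05):
            """Keep only the largest-magnitude `keep_ratio` of DCT coefficients."""
            coeffs = dct(signal, norm="ortho")
            k = max(1, int(keep_ratio * len(coeffs)))
            thresh = np.sort(np.abs(coeffs))[-k]          # k-th largest magnitude
            sparse = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
            return idct(sparse, norm="ortho")

        def prd(original, reconstructed):
            """Percentage root-mean-square difference (lower is better)."""
            return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                                   / np.sum(original ** 2))

        def psnr(original, reconstructed):
            """Peak signal-to-noise ratio in dB (higher is better)."""
            mse = np.mean((original - reconstructed) ** 2)
            return 10.0 * np.log10(np.max(np.abs(original)) ** 2 / mse)

        # Usage on a noisy synthetic signal (a real evaluation would use ECG records)
        x = np.sin(2 * np.pi * 1.2 * np.arange(0, 10, 0.002))
        x += 0.01 * np.random.randn(len(x))
        rec = dct_compress(x, keep_ratio=0.05)
        print(f"PRD = {prd(x, rec):.3f}%, PSNR = {psnr(x, rec):.1f} dB")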
  5. In graph analytics, a truss is a cohesive subgraph defined by the number of triangles supporting each edge. It is widely used in community detection applications such as social networks and security analysis, and the performance of truss analytics depends heavily on its triangle counting method. This paper proposes a novel triangle counting kernel named Minimum Search (MS). Minimum Search selects the two smaller of three adjacency lists and uses fine-grained parallelism to improve the performance of triangle counting. Two basic algorithms, MS-based triangle counting and MS-based support updating, are then developed. Based on the novel triangle counting kernel and these two basic algorithms, three fundamental parallel truss analytics algorithms are designed and implemented to enable different kinds of graph truss analysis: an optimized K-Truss algorithm, a Max-Truss algorithm, and a Truss Decomposition algorithm. All proposed algorithms are implemented in the parallel language Chapel and integrated into the open-source framework Arkouda. Through Arkouda, data scientists can efficiently conduct graph analysis through an easy-to-use Python interface and handle large-scale graph data on powerful back-end computing resources. Experimental results show that the proposed methods can significantly improve the performance of truss analysis on real-world graphs compared with the existing and widely adopted list-intersection-based method. The implemented code is publicly available from GitHub (https://github.com/Bears-R-Us/arkoudanjit).
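    A serial Python illustration of the per-edge triangle support computation that truss analytics builds on is sketched below. In the spirit of minimum search, it probes elements of the smaller adjacency list against the larger sorted list via binary search; it is only a toy stand-in for the parallel Chapel kernel in Arkouda.

        import bisect
        from collections import defaultdict

        def edge_support(edges):
            """Per-edge triangle support: for each edge, members of the smaller
            adjacency list are binary-searched in the larger sorted list."""
            adj = defaultdict(list)
            for u, v in edges:
                adj[u].append(v)
                adj[v].append(u)
            for nbrs in adj.values():
                nbrs.sort()

            def contains(sorted_list, x):
                i = bisect.bisect_left(sorted_list, x)
                return i < len(sorted_list) and sorted_list[i] == x

            support = {}
            for u, v in edges:
                small, large = sorted((adj[u], adj[v]), key=len)
                support[(u, v)] = sum(1 for w in small
                                      if w not in (u, v) and contains(large, w))
            return support

        # Usage: a 4-cycle with chord (1, 3) has two triangles sharing that chord
        print(edge_support([(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]))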