Title: Inter-Beat Interval Estimation with Tiramisu Model: A Novel Approach with Reduced Error
Inter-beat interval (IBI) measurement enables estimation of heart-rate variability (HRV), which, in turn, can provide early indication of potential cardiovascular diseases (CVDs). However, extracting IBIs from noisy signals is challenging because noise distorts the signal's morphology. The electrocardiogram (ECG) of a person in heavy motion is highly corrupted by noise, known as motion artifact, and IBIs extracted from it are inaccurate. With the growth of remote health monitoring and wearable systems, denoising ECG signals and estimating IBIs correctly from them have become an emerging topic among signal-processing researchers. Beyond conventional methods, deep-learning techniques have recently been applied successfully to signal denoising, simplifying diagnosis and reaching accuracy levels that were previously unachievable. We propose a deep-learning approach that leverages a tiramisu autoencoder model to suppress motion-artifact noise and make the R-peaks of the ECG signal prominent even under high-intensity motion. After denoising, IBIs are estimated more accurately, expediting diagnosis tasks. Results illustrate that our method enables IBI estimation from noisy ECG signals with SNR as low as -30 dB, with an average root mean square error (RMSE) of 13 milliseconds for the estimated IBIs. At this noise level, our error percentage remains below 8% and outperforms other state-of-the-art techniques.
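The final stage of such a pipeline, turning a denoised trace into IBIs and scoring them with RMSE, can be sketched generically. The snippet below is a minimal illustration, not the paper's implementation: it assumes the tiramisu autoencoder has already produced ecg_denoised, and the peak-detector thresholds (height, distance) are illustrative placeholders.

```python
import numpy as np
from scipy.signal import find_peaks

def ibi_from_ecg(ecg_denoised, fs):
    """Estimate inter-beat intervals (IBIs, in ms) from a denoised ECG trace.
    Generic post-processing sketch: a simple peak detector stands in for
    the paper's R-peak stage; the threshold and refractory values below
    are illustrative, not taken from the paper."""
    peaks, _ = find_peaks(
        ecg_denoised,
        height=0.5 * np.max(ecg_denoised),  # illustrative amplitude gate
        distance=int(0.25 * fs),            # refractory period (~240 bpm cap)
    )
    return np.diff(peaks) / fs * 1000.0     # gaps in samples -> milliseconds

def ibi_rmse(ibi_est, ibi_ref):
    """RMSE (ms) between estimated and reference IBI sequences."""
    n = min(len(ibi_est), len(ibi_ref))
    return float(np.sqrt(np.mean((ibi_est[:n] - ibi_ref[:n]) ** 2)))
```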
Award ID(s): 2210133
PAR ID: 10527685
Author(s) / Creator(s): ; ; ; ; ;
Publisher / Repository: ACM
Date Published:
Journal Name: ACM Transactions on Computing for Healthcare
Volume: 5
Issue: 1
ISSN: 2691-1957
Page Range / eLocation ID: 1 to 19
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Electrocardiogram (ECG) sensing is an important application for the diagnosis of cardiovascular diseases. Recently, driven by the emerging technology of wearable electronics, a large number of wearable ECG sensors have been developed; these, however, introduce additional sources of noise contamination into the ECG signals they record. In this paper, we propose a new low-distortion adaptive Savitzky-Golay (LDASG) filtering method for ECG denoising based on discrete curvature estimation, which outperforms the state of the art in ECG denoising. The standard Savitzky-Golay (SG) filter has remarkable data-smoothing performance. However, it lacks adaptability to signal variations and thus often distorts high-variation signals such as ECG. In our method, discrete curvature estimation is used to represent signal variation for the purpose of mitigating signal distortion. By adaptively designing a proper SG filter according to the discrete curvature at each data sample, the proposed method retains the intrinsic advantage of SG filters, excellent data smoothing, while tackling the challenge of denoising highly varying signals with low distortion. In our experiments, we compared our method with an EMD-wavelet based method and the non-local means (NLM) denoising method in terms of both noise elimination and signal-distortion reduction. In particular, for signal-distortion reduction, our method decreases MSE by 33.33% compared to EMD-wavelet and by 50% compared to NLM, and decreases PRD by 18.25% compared to EMD-wavelet and by 25.24% compared to NLM. Our method shows high potential and feasibility for wide application in ECG denoising for both clinical use and consumer electronics. (A minimal sketch of the curvature-adaptive idea follows.)
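To make the curvature-adaptive idea concrete, here is a minimal Python sketch. It is not the LDASG algorithm itself: a second-difference magnitude serves as a stand-in for the paper's discrete curvature estimate, and a two-level window switch replaces its per-sample filter design.

```python
import numpy as np
from scipy.signal import savgol_filter

def curvature_adaptive_sg(x, win_small=9, win_large=31, order=3):
    """Curvature-adaptive Savitzky-Golay smoothing (illustrative sketch).
    High-curvature samples (e.g., QRS complexes) get a short window to
    limit distortion; flat segments get a long window for stronger
    smoothing. Window sizes and the 90th-percentile cutoff are
    illustrative choices, not values from the paper."""
    curv = np.abs(np.gradient(np.gradient(x)))   # second-difference proxy for curvature
    thresh = np.percentile(curv, 90)             # split high- vs low-variation samples
    smooth_small = savgol_filter(x, win_small, order)
    smooth_large = savgol_filter(x, win_large, order)
    return np.where(curv > thresh, smooth_small, smooth_large)
```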
  2. Abstract. Purpose: To examine the effect of incorporating self-supervised denoising as a pre-processing step for training deep learning (DL) based reconstruction methods on data corrupted by Gaussian noise. K-space data employed for training are typically multi-coil and inherently noisy. Although DL-based reconstruction methods trained on fully sampled data can enable high reconstruction quality, obtaining large, noise-free datasets is impractical. Methods: We leverage Generalized Stein's Unbiased Risk Estimate (GSURE) for denoising. We evaluate two DL-based reconstruction methods: Diffusion Probabilistic Models (DPMs) and Model-Based Deep Learning (MoDL). We evaluate the impact of denoising on the performance of these DL-based methods in solving accelerated multi-coil magnetic resonance imaging (MRI) reconstruction. The experiments were carried out on T2-weighted brain and fat-suppressed proton-density knee scans. Results: We observed that self-supervised denoising enhances the quality and efficiency of MRI reconstructions across various scenarios. Specifically, employing denoised images rather than noisy counterparts when training DL networks results in lower normalized root mean squared error (NRMSE), higher structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) across different SNR levels, including 32, 22, and 12 dB for T2-weighted brain data, and 24, 14, and 4 dB for fat-suppressed knee data. Conclusion: We showed that denoising is an essential pre-processing technique capable of improving the efficacy of DL-based MRI reconstruction methods under diverse conditions. By refining the quality of input data, denoising enables training more effective DL networks, potentially bypassing the need for noise-free reference MRI scans.
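The GSURE idea can be illustrated through its simplest special case: plain SURE for Gaussian denoising, which estimates a denoiser's MSE from noisy data alone. The Monte-Carlo divergence trick below is a standard construction; the GSURE used in the abstract generalizes it to undersampled multi-coil forward operators, which this sketch does not cover.

```python
import numpy as np

def mc_sure(denoiser, y, sigma, eps=1e-3, rng=None):
    """Monte-Carlo SURE: an unbiased estimate of E||f(y) - x||^2 for
    y = x + n with n ~ N(0, sigma^2 I), computed without the clean x.
    SURE = ||f(y) - y||^2 - N*sigma^2 + 2*sigma^2 * div_y f(y),
    with the divergence estimated via a random probe. Illustrative
    sketch of the plain-SURE special case, not the paper's GSURE."""
    rng = rng if rng is not None else np.random.default_rng(0)
    f_y = denoiser(y)
    b = rng.standard_normal(y.shape)            # random probe for the divergence
    div = (b * (denoiser(y + eps * b) - f_y)).sum() / eps
    return ((f_y - y) ** 2).sum() - y.size * sigma**2 + 2 * sigma**2 * div
```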
  3. Here we consider the problem of denoising features associated with complex data, modeled as signals on a graph, via a smoothness prior. This is motivated in part by settings such as single-cell RNA sequencing, where the data are very high-dimensional but their structure can be captured via an affinity graph, allowing us to use ideas from graph signal processing. In particular, we present algorithms for the cases where the signal is perturbed by Gaussian noise, dropout, and uniformly distributed noise. The signals are assumed to follow a prior distribution defined in the frequency domain which favors signals that are smooth across the edges of the graph. By pairing this prior distribution with our three models of noise generation, we propose maximum a posteriori (MAP) estimates of the true signal in the presence of noisy data and provide algorithms for computing them. Finally, we demonstrate the algorithms' ability to effectively restore signals from white noise on image data and from severe dropout in single-cell RNA sequencing data.
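For the Gaussian-noise case, a MAP estimate under a Laplacian smoothness prior has a closed form, sketched below. The quadratic prior x^T L x and the resulting linear solve are a standard construction assumed here for illustration; the paper's frequency-domain prior and its dropout and uniform-noise variants are not reproduced.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def map_denoise_gaussian(y, W, lam=1.0, sigma=1.0):
    """MAP denoising of a graph signal y observed in Gaussian noise,
    under the smoothness prior p(x) ~ exp(-lam * x^T L x), where L is
    the combinatorial Laplacian of the sparse affinity matrix W.
    Minimizing ||y - x||^2 / (2 sigma^2) + lam * x^T L x yields the
    closed form x_hat = (I + 2 lam sigma^2 L)^{-1} y."""
    degrees = np.asarray(W.sum(axis=1)).ravel()
    L = sp.diags(degrees) - W                        # graph Laplacian
    A = sp.eye(W.shape[0]) + 2 * lam * sigma**2 * L  # system matrix
    return spla.spsolve(A.tocsc(), y)
```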
  4. Unsupervised denoising is a crucial challenge in real-world imaging applications. Unsupervised deep-learning methods have demonstrated impressive performance on benchmarks based on synthetic noise. However, no metrics are available to evaluate these methods in an unsupervised fashion. This is highly problematic for the many practical applications where ground-truth clean images are not available. In this work, we propose two novel metrics: the unsupervised mean squared error (MSE) and the unsupervised peak signal-to-noise ratio (PSNR), which are computed using only noisy data. We provide a theoretical analysis of these metrics, showing that they are asymptotically consistent estimators of the supervised MSE and PSNR. Controlled numerical experiments with synthetic noise confirm that they provide accurate approximations in practice. We validate our approach on real-world data from two imaging modalities: videos in raw format and transmission electron microscopy. Our results demonstrate that the proposed metrics enable unsupervised evaluation of denoising methods based exclusively on noisy data. 
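One way such noisy-data-only metrics can be built is from independent noisy realizations of the same clean image: the cross term E[(f(y1) - y2)^2] equals the true MSE plus the noise variance, which can itself be estimated from a second pair of realizations. The sketch below shows that generic construction under zero-mean i.i.d. noise; it illustrates the idea, not necessarily the paper's exact estimator.

```python
import numpy as np

def unsupervised_mse(denoised, y2, y3):
    """Unsupervised MSE sketch. `denoised` = f(y1); y2 and y3 are independent
    noisy realizations of the same clean image (zero-mean noise assumed).
    E[(f(y1) - y2)^2] = MSE + sigma^2, and E[(y2 - y3)^2] = 2 sigma^2, so
    subtracting the variance estimate from the cross term recovers the MSE."""
    cross = np.mean((denoised - y2) ** 2)
    noise_var = 0.5 * np.mean((y2 - y3) ** 2)   # estimate of sigma^2
    return cross - noise_var                     # may dip negative in finite samples

def unsupervised_psnr(denoised, y2, y3, peak=255.0):
    """Unsupervised PSNR derived from the unsupervised MSE estimate."""
    return 10.0 * np.log10(peak**2 / unsupervised_mse(denoised, y2, y3))
```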