This content will become publicly available on December 21, 2025

Title: WTDUN: Wavelet Tree-Structured Sampling and Deep Unfolding Network for Image Compressed Sensing
Abstract: Deep unfolding networks have gained increasing attention in the field of compressed sensing (CS) owing to their theoretical interpretability and superior reconstruction performance. However, most existing deep unfolding methods face two issues: (1) they learn directly from single-channel images, yielding a simple feature representation that does not fully capture complex image content; and (2) they treat all image components uniformly, ignoring the distinct characteristics of different components. To address these issues, we propose a novel wavelet-domain deep unfolding framework named WTDUN, which operates directly on multi-scale wavelet sub-bands. Our method exploits the intrinsic sparsity and multi-scale structure of wavelet coefficients to achieve tree-structured sampling and reconstruction, effectively capturing and highlighting the most important features within images. Specifically, the tree-structured reconstruction is designed to capture the inter-dependencies among the multi-scale sub-bands, enabling the identification of both fine and coarse features and leading to a marked improvement in reconstruction quality. Furthermore, a wavelet-domain adaptive sampling method is proposed to greatly improve sampling capability; it assigns measurements to each wavelet sub-band according to its importance. Unlike pure deep learning methods that treat all components uniformly, our method places a targeted focus on important sub-bands, considering their energy and sparsity. This targeted strategy captures key information more efficiently while discarding less important information, resulting in a more effective and detailed reconstruction. Extensive experimental results on various datasets validate the superior performance of the proposed method.
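The importance-based measurement allocation can be pictured with a short sketch. The snippet below, written in Python with NumPy and PyWavelets, splits a measurement budget across wavelet sub-bands in proportion to their energy; it is a minimal illustration under assumed names and a simplified allocation rule, not the WTDUN sampling network itself.

    import numpy as np
    import pywt

    def allocate_measurements(image, total_measurements, wavelet="haar", levels=2):
        # Decompose the image into an approximation band plus detail bands per level.
        coeffs = pywt.wavedec2(image, wavelet, level=levels)
        subbands = {"approx": coeffs[0]}
        for lvl, (ch, cv, cd) in enumerate(coeffs[1:], start=1):
            subbands[f"H{lvl}"], subbands[f"V{lvl}"], subbands[f"D{lvl}"] = ch, cv, cd

        # Illustrative rule: give each sub-band a share of the budget proportional
        # to its energy (WTDUN also weighs sparsity; this sketch does not).
        energies = {name: float(np.sum(band ** 2)) for name, band in subbands.items()}
        total_energy = sum(energies.values())
        return {name: max(1, round(total_measurements * e / total_energy))
                for name, e in energies.items()}

    # Hypothetical usage on a random stand-in for a natural image.
    print(allocate_measurements(np.random.rand(64, 64), total_measurements=1024))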
Award ID(s): 2304489
PAR ID: 10632625
Author(s) / Creator(s): ; ; ; ; ;
Publisher / Repository: ACM
Date Published:
Journal Name: ACM Transactions on Multimedia Computing, Communications, and Applications
Volume: 21
Issue: 1
ISSN: 1551-6857
Page Range / eLocation ID: 1 to 22
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
1. The number of noisy images required for molecular reconstruction in single-particle cryo-electron microscopy (cryo-EM) is governed by the autocorrelations of the observed, randomly oriented, noisy projection images. In this work, we consider the effect of imposing sparsity priors on the molecule. We use techniques from signal processing, optimization, and applied algebraic geometry to obtain theoretical and computational contributions for this challenging nonlinear inverse problem with sparsity constraints. We prove that molecular structures modeled as sums of Gaussians are uniquely determined by the second-order autocorrelation of their projection images, implying that the sample complexity is proportional to the square of the noise variance. This improves upon the nonsparse case, where the third-order autocorrelation is required for uniformly oriented particle images and the sample complexity scales with the cube of the noise variance. Furthermore, we build a computational framework to reconstruct molecular structures that are sparse in the wavelet basis. This method combines the sparse representation of the molecule with projection-based techniques used for phase retrieval in X-ray crystallography.
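As a rough illustration of the quantity at the heart of this theory, the sketch below (Python/NumPy; all names are assumptions) averages the second-order autocorrelation of a stack of noisy images via their power spectra. It is a toy example only: it ignores the random orientations and the Gaussian-mixture molecular model that make the actual problem hard.

    import numpy as np

    def second_order_autocorrelation(images):
        # Wiener-Khinchin: the autocorrelation is the inverse FFT of the power spectrum.
        power = np.zeros(images.shape[1:], dtype=np.float64)
        for img in images:
            power += np.abs(np.fft.fft2(img)) ** 2
        return np.real(np.fft.ifft2(power / len(images)))

    # Averaging over many noisy copies suppresses the noise in the estimate.
    rng = np.random.default_rng(0)
    clean = rng.random((32, 32))
    stack = clean[None, :, :] + 0.5 * rng.standard_normal((1000, 32, 32))
    acf = second_order_autocorrelation(stack)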
2. Feature extraction, such as spectral occupancy, interferer energy and type, or direction-of-arrival, from wideband radio-frequency (RF) signals finds use in a growing number of applications, as it enhances RF transceivers with cognitive abilities and enables parameter tuning of traditional RF chains. In power- and cost-limited applications, e.g., sensor nodes in the Internet of Things, wideband RF feature extraction with conventional, Nyquist-rate analog-to-digital converters is infeasible. However, the structure of many RF features (such as signal sparsity) enables the use of compressive sensing (CS) techniques that acquire such signals at sub-Nyquist rates. While such CS-based analog-to-information (A2I) converters have the potential to enable low-cost and energy-efficient wideband RF sensing, they suffer from a variety of real-world limitations, such as noise folding, low sensitivity, aliasing, and limited flexibility. This paper proposes a novel CS-based A2I architecture called non-uniform wavelet sampling. Our solution extracts a carefully selected subset of wavelet coefficients directly in the RF domain, which mitigates the main issues of existing A2I converter architectures. For multi-band RF signals, we propose a specialized variant called non-uniform wavelet bandpass sampling (NUWBS), which further improves sensitivity and reduces hardware complexity by leveraging the multi-band signal structure. We use simulations to demonstrate that NUWBS approaches the theoretical performance limits of ℓ₁-norm-based sparse signal recovery. We investigate hardware-design aspects and show ASIC measurement results for the wavelet generation stage, which highlight the efficacy of NUWBS for a broad range of RF feature extraction tasks in cost- and power-limited applications.
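A software caricature of the acquisition idea (keep only a small, well-chosen subset of wavelet coefficients) is sketched below in Python with NumPy and PyWavelets. It is heavily hedged: the real NUWBS hardware selects coefficients by design, matched to the bands of interest and implemented in the analog domain, whereas this toy picks the largest-magnitude coefficients after the fact; every name here is an assumption.

    import numpy as np
    import pywt

    def nonuniform_wavelet_measurements(rf_snapshot, wavelet="db4", level=6, keep_fraction=0.1):
        # Transform the snapshot and flatten the coefficient list into one array.
        coeffs = pywt.wavedec(rf_snapshot, wavelet, level=level)
        flat, slices = pywt.coeffs_to_array(coeffs)
        # Toy selection rule: retain the largest-magnitude coefficients.
        k = max(1, int(keep_fraction * flat.size))
        idx = np.argsort(np.abs(flat))[-k:]
        return flat[idx], idx, slices

    measurements, idx, slices = nonuniform_wavelet_measurements(np.random.randn(4096))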
3. In recent years, advances in image processing and machine learning have fueled a paradigm shift in detecting genomic regions under natural selection. Early machine learning techniques employed population-genetic summary statistics as features, which focus on specific genomic patterns expected under adaptive and neutral processes. Though such engineered features are important when training data are limited, the ease with which simulated data can now be generated has led to the recent development of approaches that take in image representations of haplotype alignments and automatically extract important features using convolutional neural networks. Digital image processing methods termed α-molecules are a class of techniques for multiscale representation of objects that can extract a diverse set of features from images. One such α-molecule method, termed wavelet decomposition, lends greater control over the high-frequency components of images. Another α-molecule method, termed curvelet decomposition, extends the wavelet concept to events occurring along curves within images. We show that applying these α-molecule techniques to extract features from image representations of haplotype alignments yields high true-positive rates and accuracy in detecting hard and soft selective sweep signatures from genomic data with both linear and nonlinear machine learning classifiers. Moreover, we find that such models are easy to visualize and interpret, with performance rivaling that of contemporary deep learning approaches for detecting sweeps.
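To make the feature-extraction idea concrete, the sketch below (Python with NumPy, PyWavelets, and scikit-learn; all names and the feature set are assumptions) summarizes an alignment image by per-sub-band wavelet energies and feeds them to a linear classifier. The paper's α-molecule pipeline extracts a much richer wavelet and curvelet representation; this is only a minimal stand-in.

    import numpy as np
    import pywt
    from sklearn.linear_model import LogisticRegression

    def wavelet_energy_features(alignment_image, wavelet="haar", level=3):
        # One log-energy feature per wavelet sub-band of the alignment image.
        coeffs = pywt.wavedec2(alignment_image, wavelet, level=level)
        feats = [np.log1p(np.sum(coeffs[0] ** 2))]
        for detail_bands in coeffs[1:]:
            feats.extend(np.log1p(np.sum(band ** 2)) for band in detail_bands)
        return np.array(feats)

    # Placeholder data standing in for simulated sweep vs. neutral alignments.
    rng = np.random.default_rng(1)
    X = np.array([wavelet_energy_features(rng.random((64, 64))) for _ in range(200)])
    y = rng.integers(0, 2, size=200)
    clf = LogisticRegression(max_iter=1000).fit(X, y)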
4. The proliferation of large seismic arrays has opened many new avenues of geophysical research; however, most techniques still fundamentally treat regional- and global-scale seismic networks as collections of individual time series rather than as a single unified data product. Wavefield reconstruction allows us to turn a collection of individual records into a single structured form that treats the seismic wavefield as a coherent 3-D or 4-D entity. We propose a split processing scheme based on a wavelet transform in time and pre-conditioned curvelet-based compressive sensing in space to create a sparse representation of the continuous seismic wavefield with smooth second-order derivatives. Using this representation, we illustrate several applications, including surface-wave gradiometry, Helmholtz–Hodge decomposition of the wavefield into irrotational and solenoidal components, and compression and denoising of seismic records.
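Only the time-domain half of the proposed scheme is easy to sketch compactly. The snippet below (Python with NumPy and PyWavelets; names and the thresholding rule are assumptions) wavelet-transforms each station's trace in time and soft-thresholds small coefficients, giving a sparse, denoised representation; the spatial, pre-conditioned curvelet compressive-sensing step is not shown.

    import numpy as np
    import pywt

    def sparse_time_representation(records, wavelet="db6", level=5, rel_threshold=0.05):
        # records: array of shape (n_stations, n_samples).
        out = np.empty_like(records)
        for i, trace in enumerate(records):
            coeffs = pywt.wavedec(trace, wavelet, level=level)
            thresh = rel_threshold * max(np.max(np.abs(c)) for c in coeffs)
            coeffs = [pywt.threshold(c, thresh, mode="soft") for c in coeffs]
            out[i] = pywt.waverec(coeffs, wavelet)[: trace.size]
        return out

    denoised = sparse_time_representation(np.random.randn(10, 2048))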
5. Quantitative susceptibility mapping (QSM) involves acquiring and reconstructing a series of images at multiple echo times to estimate the tissue field, which prolongs scan time and requires a specific reconstruction technique. In this paper, we present a new framework, called Learned Acquisition and Reconstruction Optimization (LARO), which aims to accelerate the multi-echo gradient echo (mGRE) pulse sequence for QSM. Our approach optimizes a Cartesian multi-echo k-space sampling pattern jointly with a deep reconstruction network. This optimized sampling pattern is then implemented in an mGRE sequence using Cartesian fan-beam k-space segmenting and ordering for prospective scans. Furthermore, we insert a recurrent temporal feature fusion module into the reconstruction network to capture signal redundancies along echo time. Our ablation studies show that both the optimized sampling pattern and the proposed reconstruction strategy help improve the quality of the multi-echo image reconstructions. Generalization experiments show that LARO is robust on test data with new pathologies and different sequence parameters. Our code is available at https://github.com/Jinwei1209/LARO-QSM.git.
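For intuition about what a Cartesian multi-echo sampling pattern looks like, the sketch below (Python/NumPy; all names are assumptions) draws a per-echo variable-density mask over phase-encode lines. LARO learns its pattern jointly with the reconstruction network; this random, hand-tuned pattern is only a generic stand-in.

    import numpy as np

    def variable_density_masks(n_lines=256, n_echoes=5, accel=4, center_fraction=0.08, seed=0):
        # Boolean masks of shape (n_echoes, n_lines); True marks an acquired phase-encode line.
        rng = np.random.default_rng(seed)
        masks = np.zeros((n_echoes, n_lines), dtype=bool)
        n_center = int(center_fraction * n_lines)
        center = slice(n_lines // 2 - n_center // 2, n_lines // 2 + n_center // 2)
        dist = np.abs(np.arange(n_lines) - n_lines / 2) / (n_lines / 2)
        for e in range(n_echoes):
            masks[e, center] = True                     # always keep the low-frequency center
            prob = (1.0 - dist) ** 2                    # density decays away from the center
            prob[masks[e]] = 0.0
            prob /= prob.sum()
            n_extra = max(0, n_lines // accel - int(masks[e].sum()))
            extra = rng.choice(n_lines, size=n_extra, replace=False, p=prob)
            masks[e, extra] = True
        return masks

    masks = variable_density_masks()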