Blind Primed Supervised (BLIPS) Learning for MR Image Reconstruction
                        
                    
    
The work examines a combined supervised-unsupervised framework involving dictionary-based blind learning and deep supervised learning for MR image reconstruction from under-sampled k-space data. A major focus of the work is to investigate the possible synergy between features learned by traditional shallow reconstruction with sparsity-based priors and deep prior-based reconstruction. Specifically, we propose a framework that uses an unrolled network to refine a blind dictionary learning-based reconstruction. We compare the proposed method with strictly supervised deep learning-based reconstruction approaches on several datasets of varying sizes and anatomies.
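To make the two-stage structure concrete, the following is a minimal sketch of the pipeline described above: a shallow, sparsity-prior reconstruction with k-space data consistency, followed by refinement with a trained network. It is an illustration under my own assumptions, not the authors' implementation; the dictionary-learning prior is replaced by a trivial soft-thresholding stand-in, and all function and variable names are hypothetical.

```python
# Illustrative sketch only: the sparsity step below is a stand-in for the blind
# dictionary-learning prior, and `refine_with_network` stands in for the
# trained unrolled network. Names are hypothetical, not the authors' code.
import numpy as np

def blind_prior_reconstruction(kspace, mask, n_iters=10, lam=0.05):
    """Stage 1: alternate an image-domain sparsity step (placeholder for sparse
    coding over a learned dictionary) with a k-space data-consistency step."""
    image = np.fft.ifft2(kspace)
    for _ in range(n_iters):
        mag = np.abs(image)
        image = image * np.maximum(1.0 - lam / (mag + 1e-12), 0.0)  # soft-threshold
        k = np.fft.fft2(image)
        k[mask > 0] = kspace[mask > 0]          # re-impose measured samples
        image = np.fft.ifft2(k)
    return image

def refine_with_network(image, network):
    """Stage 2: a supervised (e.g., unrolled) network refines the shallow result."""
    return network(image)

# Toy usage with an identity "network" standing in for the trained refiner.
rng = np.random.default_rng(0)
mask = (rng.random((64, 64)) < 0.3).astype(float)
kspace = np.fft.fft2(rng.standard_normal((64, 64))) * mask
initial = blind_prior_reconstruction(kspace, mask)
refined = refine_with_network(initial, network=lambda x: x)
```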
        
    
- Award ID(s):
- 1838179
- PAR ID:
- 10309648
- Date Published:
- Journal Name:
- International Society for Magnetic Resonance in Medicine
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- Sparse coding refers to modeling a signal as sparse linear combinations of the elements of a learned dictionary. Sparse coding has proven to be a successful and interpretable approach in many applications, such as signal processing, computer vision, and medical imaging. While this success has spurred much work on sparse coding with provable guarantees, work on the setting where the learned dictionary is larger (or over-realized) with respect to the ground truth is comparatively nascent. Existing theoretical results in the over-realized regime are limited to the case of noiseless data. In this paper, we show that for over-realized sparse coding in the presence of noise, minimizing the standard dictionary learning objective can fail to recover the ground-truth dictionary, regardless of the magnitude of the signal in the data-generating process. Furthermore, drawing from the growing body of work on self-supervised learning, we propose a novel masking objective and we prove that minimizing this new objective can recover the ground-truth dictionary. We corroborate our theoretical results with experiments across several parameter regimes, showing that our proposed objective enjoys better empirical performance than the standard reconstruction objective.
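For concreteness, here is a small illustration of the contrast between the standard reconstruction objective for dictionary learning and a masking-style objective, where codes are fit on one subset of coordinates and the loss is scored on the held-out rest. This is a generic formulation of my own, not necessarily the objective proposed in that paper; the ISTA solver and all names are assumptions of this sketch.

```python
# Generic illustration (not the paper's exact formulation) of a reconstruction
# objective versus a masking-style objective for dictionary learning.
import numpy as np

def sparse_codes(D, Y, lam=0.1, n_iters=100):
    """ISTA for the Lasso problem min_A 0.5*||Y - D A||_F^2 + lam*||A||_1."""
    L = np.linalg.norm(D, 2) ** 2               # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_iters):
        G = A - (D.T @ (D @ A - Y)) / L         # gradient step
        A = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0)  # soft threshold
    return A

def reconstruction_loss(D, Y, lam=0.1):
    """Standard objective: reconstruct Y from its own sparse codes."""
    A = sparse_codes(D, Y, lam)
    return 0.5 * np.sum((Y - D @ A) ** 2)

def masking_loss(D, Y, mask, lam=0.1):
    """Masking-style objective: fit codes on the unmasked coordinates only,
    then score the prediction on the masked (held-out) coordinates."""
    A = sparse_codes(D[~mask], Y[~mask], lam)
    return 0.5 * np.sum((Y[mask] - D[mask] @ A) ** 2)

# Toy usage: noisy data generated from a ground-truth dictionary.
rng = np.random.default_rng(0)
d, k, n = 20, 10, 200
D_true = rng.standard_normal((d, k)) / np.sqrt(d)
A_true = rng.standard_normal((k, n)) * (rng.random((k, n)) < 0.2)
Y = D_true @ A_true + 0.05 * rng.standard_normal((d, n))
mask = rng.random(d) < 0.3                      # coordinates held out for the loss
print(reconstruction_loss(D_true, Y), masking_loss(D_true, Y, mask))
```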
- Blind inpainting algorithms based on deep learning architectures have shown remarkable performance in recent years, typically outperforming model-based methods both in terms of image quality and run time. However, neural network strategies typically lack a theoretical explanation, which contrasts with the well-understood theory underlying model-based methods. In this work, we leverage the advantages of both approaches by integrating theoretically founded concepts from transform-domain methods and sparse approximations into a CNN-based approach for blind image inpainting. To this end, we present a novel strategy to learn convolutional kernels that applies a specifically designed filter dictionary whose elements are linearly combined with trainable weights. Numerical experiments demonstrate the competitiveness of this approach. Our results show not only improved inpainting quality compared to conventional CNNs but also significantly faster network convergence within a lightweight network design. Our code is available at https://github.com/cv-stuttgart/SDPF_Blind-Inpainting.
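A minimal sketch of the central architectural idea in that entry, under my own assumptions rather than the linked repository's code: convolution kernels are not learned freely, but formed as linear combinations of a fixed filter dictionary, with only the mixing coefficients trained.

```python
# Illustrative sketch (assumed names, not the repository's actual code) of a
# convolution layer whose kernels are linear combinations of fixed dictionary
# filters with trainable mixing weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DictionaryConv2d(nn.Module):
    def __init__(self, dictionary: torch.Tensor, in_ch: int, out_ch: int):
        """dictionary: (n_atoms, k, k) fixed filters (e.g., DCT-like atoms)."""
        super().__init__()
        self.register_buffer("dictionary", dictionary)  # fixed, not trained
        n_atoms = dictionary.shape[0]
        # Trainable mixing weights: one coefficient per (out, in, atom) triple.
        self.coeff = nn.Parameter(torch.randn(out_ch, in_ch, n_atoms) * 0.1)

    def forward(self, x):
        # Build kernels as linear combinations of the dictionary atoms.
        kernels = torch.einsum("oia,akl->oikl", self.coeff, self.dictionary)
        return F.conv2d(x, kernels, padding=self.dictionary.shape[-1] // 2)

# Toy usage with a random "dictionary" of 8 atoms of size 3x3.
atoms = torch.randn(8, 3, 3)
layer = DictionaryConv2d(atoms, in_ch=1, out_ch=4)
y = layer(torch.randn(2, 1, 32, 32))            # -> (2, 4, 32, 32)
```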
- This work discusses an optimization framework that embeds the wave equation into dictionary learning as a strategy for incorporating prior scientific knowledge into a machine learning algorithm. We modify dictionary learning to study ultrasonic guided wave-based defect detection for non-destructive structural health monitoring systems. Specifically, we alter the popular K-SVD algorithm for dictionary learning by enforcing prior knowledge about the ultrasonic guided wave problem through a physics-based regularization derived from the wave equation, and we name the resulting method "wave-informed K-SVD." Training dictionaries with both K-SVD and wave-informed K-SVD on noisy data simulated from a fixed string, we show that the columns of the dictionary learned by wave-informed K-SVD are more physically consistent with the known modal behavior of one-dimensional wave simulations.
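As a rough illustration of what a wave-equation-derived regularizer on dictionary columns could look like (this is an assumption of mine, not the paper's exact wave-informed K-SVD formulation), the sketch below penalizes each atom's deviation from the 1-D Helmholtz equation d'' + k²d = 0, whose solutions are the sinusoidal modes of a fixed string.

```python
# Illustrative physics penalty on dictionary columns; not the paper's algorithm.
import numpy as np

def second_difference_matrix(n, dx=1.0):
    """Discrete second-derivative operator (interior rows only)."""
    L = np.zeros((n, n))
    for i in range(1, n - 1):
        L[i, i - 1], L[i, i], L[i, i + 1] = 1.0, -2.0, 1.0
    return L / dx**2

def wave_residual(D, wavenumbers, dx=1.0):
    """Per-atom penalty || L d_j + k_j^2 d_j ||^2 over dictionary columns."""
    L = second_difference_matrix(D.shape[0], dx)
    return np.array([
        np.sum((L @ D[:, j] + (k ** 2) * D[:, j]) ** 2)
        for j, k in enumerate(wavenumbers)
    ])

# Toy check: a sinusoidal mode has a much smaller residual than a random atom.
n = 128
x = np.arange(n)
k = 2 * np.pi * 3 / n                            # three periods over the window
D = np.stack([np.sin(k * x),
              np.random.default_rng(0).standard_normal(n)], axis=1)
print(wave_residual(D, wavenumbers=[k, k]))      # sinusoid << random atom
```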
- Purpose: To develop a strategy for training a physics-guided MRI reconstruction neural network without a database of fully sampled data sets. Methods: Self-supervised learning via data undersampling (SSDU) for physics-guided deep learning reconstruction partitions the available measurements into two disjoint sets: one is used in the data-consistency (DC) units of the unrolled network, and the other is used to define the training loss. The proposed training without fully sampled data is compared with fully supervised training on ground-truth data, as well as with conventional compressed-sensing and parallel-imaging methods, using the publicly available fastMRI knee database. The same physics-guided neural network is used for both the proposed SSDU training and supervised training. SSDU training is also applied to prospectively two-fold accelerated high-resolution brain data sets at different acceleration rates and compared with parallel imaging. Results: Results on five different knee sequences at an acceleration rate of 4 show that the proposed self-supervised approach performs comparably to supervised learning, while significantly outperforming conventional compressed sensing and parallel imaging, as characterized by quantitative metrics and a clinical reader study. Results on prospectively subsampled brain data sets, for which supervised learning cannot be used due to the lack of ground-truth references, show that the proposed self-supervised approach successfully performs reconstruction at high acceleration rates (4, 6, and 8). Image readings indicate improved visual reconstruction quality with the proposed approach compared with parallel imaging at the acquisition acceleration. Conclusion: The proposed SSDU approach allows training of physics-guided deep learning MRI reconstruction without fully sampled data, while achieving results comparable to supervised deep learning MRI reconstruction trained on fully sampled data.
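The partitioning step at the heart of that entry can be sketched as follows; the split ratio, names, and mask layout are assumptions of this illustration, not the authors' released implementation.

```python
# Illustrative SSDU-style split of acquired k-space locations into two disjoint
# sets: one for the data-consistency units, one reserved for the training loss.
import numpy as np

def ssdu_partition(sampling_mask: np.ndarray, loss_fraction: float = 0.4,
                   seed: int = 0):
    """Split acquired locations (sampling_mask == 1) into two disjoint masks:
    one for data consistency and one for the loss; they sum to the input mask."""
    rng = np.random.default_rng(seed)
    acquired = np.flatnonzero(sampling_mask)
    n_loss = int(loss_fraction * acquired.size)
    loss_idx = rng.choice(acquired, size=n_loss, replace=False)

    loss_mask = np.zeros(sampling_mask.size, dtype=sampling_mask.dtype)
    loss_mask[loss_idx] = 1
    loss_mask = loss_mask.reshape(sampling_mask.shape)
    dc_mask = sampling_mask - loss_mask         # remaining acquired samples
    return dc_mask, loss_mask

# Toy usage: a random 4x-style undersampling mask.
rng = np.random.default_rng(1)
mask = (rng.random((256, 256)) < 0.25).astype(np.int8)
dc_mask, loss_mask = ssdu_partition(mask)
assert not np.any(dc_mask * loss_mask)          # the two sets are disjoint
# Training would run the unrolled network with dc_mask in its DC units and
# compute the loss only on the k-space locations selected by loss_mask.
```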