

Title: Searching for Fast Demosaicking Algorithms
We present a method to automatically synthesize efficient, high-quality demosaicking algorithms across a range of computational budgets, given a loss function and training data. It performs a multi-objective, discrete-continuous optimization that simultaneously solves for the program structure and parameters that best trade off computational cost and image quality. We design the method to exploit domain-specific structure for search efficiency. We apply it to several tasks, including demosaicking both Bayer and Fuji X-Trans color filter patterns, as well as joint demosaicking and super-resolution. In a few days on 8 GPUs, it produces a family of algorithms that significantly improves image quality relative to the prior state of the art across a range of computational budgets, from 10s to 1000s of operations per pixel (1 dB–3 dB higher quality at the same cost, or 8.5–200× higher throughput at the same or better quality). The resulting programs combine features of both classical and deep learning-based demosaicking algorithms into more efficient hybrid combinations, which are bandwidth-efficient and vectorizable by construction. Finally, our method automatically schedules and compiles all generated programs into optimized SIMD code for modern processors.
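The search described above is multi-objective over computational cost and image quality. As an illustration only (not the authors' implementation), the sketch below extracts the Pareto frontier from a set of hypothetical candidate programs scored by operations per pixel and PSNR; the candidate names and numbers are invented for the example.

```python
# Minimal sketch of multi-objective (cost vs. quality) selection; all values are made up.
candidates = [
    {"name": "prog_a", "ops_per_pixel": 12.0, "psnr_db": 36.1},
    {"name": "prog_b", "ops_per_pixel": 85.0, "psnr_db": 38.9},
    {"name": "prog_c", "ops_per_pixel": 90.0, "psnr_db": 38.2},   # dominated by prog_b
    {"name": "prog_d", "ops_per_pixel": 950.0, "psnr_db": 41.5},
]

def pareto_front(progs):
    """Keep programs not dominated in both cost (lower is better) and PSNR (higher is better)."""
    front = []
    for p in progs:
        dominated = any(
            q["ops_per_pixel"] <= p["ops_per_pixel"] and q["psnr_db"] >= p["psnr_db"]
            and (q["ops_per_pixel"] < p["ops_per_pixel"] or q["psnr_db"] > p["psnr_db"])
            for q in progs
        )
        if not dominated:
            front.append(p)
    return sorted(front, key=lambda p: p["ops_per_pixel"])

for p in pareto_front(candidates):
    print(f'{p["name"]}: {p["ops_per_pixel"]:g} ops/px, {p["psnr_db"]} dB')
```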
Award ID(s):
1723445 2217878
NSF-PAR ID:
10342459
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM Transactions on Graphics
Volume:
41
Issue:
5
ISSN:
0730-0301
Page Range / eLocation ID:
1 to 18
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. For a given PDE problem, three main factors affect the accuracy of FEM solutions: basis order, mesh resolution, and mesh element quality. The first two factors are easy to control, while controlling element shape quality is a challenge, with fundamental limitations on what can be achieved. We propose to use p-refinement (increasing element degree) to decouple the approximation error of the finite element method from the domain mesh quality for elliptic PDEs. Our technique produces an accurate solution even on meshes with badly shaped elements, at a slightly higher running time due to the higher cost of high-order elements. We demonstrate that it is able to automatically adapt the basis to badly shaped elements, ensuring an error consistent with high-quality meshing, without any per-mesh parameter tuning. Our construction reduces to traditional fixed-degree FEM on high-quality meshes, with identical performance. It decreases the burden on meshing algorithms, reducing the need for often expensive mesh optimization, and automatically compensates for badly shaped elements, which are present due to boundary constraints or limitations of current meshing methods. By tackling mesh generation and finite element simulation jointly, we obtain a pipeline that is both more efficient and more robust than combinations of existing state-of-the-art meshing and FEM algorithms.
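As a toy illustration of the p-refinement idea (not the paper's actual criterion), the sketch below raises the polynomial degree of badly shaped triangles based on a simple shape-quality measure; the quality metric, thresholds, and degree cap are assumptions made for the example.

```python
import numpy as np

def triangle_quality(v0, v1, v2):
    """Shape quality in (0, 1]: 1 for an equilateral triangle, near 0 for slivers."""
    a2 = np.sum((v1 - v0) ** 2)
    b2 = np.sum((v2 - v1) ** 2)
    c2 = np.sum((v0 - v2) ** 2)
    e1, e2 = v1 - v0, v2 - v0
    area = 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0])
    # 4*sqrt(3)*area / (a^2 + b^2 + c^2) equals exactly 1 for equilateral triangles.
    return 4.0 * np.sqrt(3.0) * area / (a2 + b2 + c2)

def assign_degree(quality, base_degree=1, max_degree=4):
    """Toy p-refinement rule: the worse the element shape, the higher its basis degree."""
    if quality > 0.5:
        return base_degree
    if quality > 0.2:
        return base_degree + 1
    if quality > 0.05:
        return base_degree + 2
    return max_degree

# Example: a well-shaped triangle and a sliver.
good = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])]
sliver = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, 1e-3])]
for tri in (good, sliver):
    q = triangle_quality(*tri)
    print(f"quality={q:.3f} -> degree {assign_degree(q)}")
```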
  2. Abstract

    Purpose

    Parallel imaging and compressed sensing reconstructions of large MRI datasets often have a prohibitive computational cost that bottlenecks clinical deployment, especially for three‐dimensional (3D) non‐Cartesian acquisitions. One common approach is to reduce the number of coil channels actively used during reconstruction, as in coil compression. While effective for Cartesian imaging, coil compression inherently loses signal energy, producing shading artifacts that compromise image quality for 3D non‐Cartesian imaging. We propose coil sketching, a general and versatile method for computationally efficient iterative MR image reconstruction.

    Theory and Methods

    We based our method on randomized sketching algorithms, a class of large‐scale optimization algorithms well established in the fields of machine learning and big data analysis. We adapt the sketching theory to the MRI reconstruction problem via a structured sketching matrix that, similar to coil compression, considers high‐energy virtual coils obtained from principal component analysis. Unlike coil compression, however, it also considers random linear combinations of the remaining low‐energy coils, effectively leveraging information from all coils.
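As a rough sketch of the structured sketching matrix described above (an illustration, not the authors' code), the example below keeps the top virtual coils from an SVD of calibration data and mixes the remaining low-energy coils with random linear combinations; the coil counts, data sizes, and Gaussian mixing choice are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_coils, n_samples = 32, 4096          # assumed sizes for the example
calib = rng.standard_normal((n_coils, n_samples)) + 1j * rng.standard_normal((n_coils, n_samples))

# PCA-style coil compression: virtual coils ordered by energy.
u, s, _ = np.linalg.svd(calib, full_matrices=False)

n_high = 8   # high-energy virtual coils kept exactly
n_rand = 4   # random combinations of the remaining low-energy coils

high = u[:, :n_high]                   # (n_coils, n_high)
low = u[:, n_high:]                    # (n_coils, n_coils - n_high)

# Random mixing of the low-energy subspace (Gaussian sketch, scaled).
mix = rng.standard_normal((low.shape[1], n_rand)) / np.sqrt(low.shape[1])
sketch = np.concatenate([high, low @ mix], axis=1)   # (n_coils, n_high + n_rand)

# Apply to multi-coil k-space data: the coil dimension shrinks from 32 to 12.
kspace = rng.standard_normal((n_coils, 256)) + 1j * rng.standard_normal((n_coils, 256))
sketched_kspace = sketch.conj().T @ kspace
print(sketched_kspace.shape)           # (12, 256)
```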

    Results

    First, we performed ablation experiments to validate the sketching matrix design on both Cartesian and non‐Cartesian datasets. The resulting design yielded improved computational efficiency while preserving signal‐to‐noise ratio (SNR), as measured by the inverse g‐factor. Then, we verified the efficacy of our approach on high‐dimensional non‐Cartesian 3D cones datasets, where coil sketching yielded up to three‐fold faster reconstructions with equivalent image quality.

    Conclusion

    Coil sketching is a general and versatile reconstruction framework for computationally fast and memory‐efficient reconstruction.

     
  3. This paper proposes an automatic parameter selection framework for optimizing the performance of parameter-dependent regularized reconstruction algorithms. The proposed approach exploits a convolutional neural network for direct estimation of the regularization parameters from the acquired imaging data, providing reliable parameter estimates in a computationally efficient way. The effectiveness of the approach is verified on transform-learning-based magnetic resonance image reconstructions of two publicly available datasets. The experiments qualitatively and quantitatively measure the improvement in reconstruction quality from the proposed parameter selection strategy versus both existing parameter selection solutions and a fully deep-learning reconstruction with limited training data. Based on the experimental results, the proposed method improves average reconstructed-image peak signal-to-noise ratio by 1 dB or more versus all competing methods on both brain and knee datasets, over a range of subsampling factors and input noise levels.
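A minimal sketch of the general idea of regressing a regularization weight directly from imaging data with a small convolutional network (untrained, toy-sized, in plain NumPy; the architecture, sizes, and softplus output are assumptions, not the paper's network):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(x, k):
    """Naive 'valid' 2D correlation, sufficient for this toy example."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def estimate_lambda(image, kernels, w, b):
    """Toy regressor: conv -> ReLU -> global average pool -> affine -> softplus > 0."""
    feats = np.array([np.maximum(conv2d_valid(image, k), 0.0).mean() for k in kernels])
    raw = feats @ w + b
    return np.log1p(np.exp(raw))        # softplus keeps the regularization weight positive

# Untrained random weights, purely illustrative.
kernels = rng.standard_normal((4, 3, 3))
w, b = rng.standard_normal(4), 0.0

noisy_image = rng.standard_normal((64, 64))
print(f"estimated regularization weight: {estimate_lambda(noisy_image, kernels, w, b):.4f}")
```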
  4. The recent demonstration of a real-time direct imaging radio interferometry correlator represents a new capability in radio astronomy. However, wide-field imaging with this method is challenging, since wide-field effects and array non-coplanarity degrade image quality if not compensated for. Here, we present an alternative direct imaging correlation strategy using a direct Fourier transform (DFT), modelled as a linear operator facilitating a matrix multiplication between the DFT matrix and a vector of the electric fields from each antenna. This offers perfect correction for wide-field and non-coplanarity effects. When implemented with data from the Long Wavelength Array (LWA), it offers computational performance comparable to previously demonstrated direct imaging techniques, despite a theoretically higher floating-point cost. It also has additional benefits, such as imaging sparse arrays and control over which sky coordinates are imaged, allowing variable pixel placement across an image. In practice, it is a highly flexible and efficient method of direct radio imaging when implemented on suitable arrays. A functioning electric-field direct imaging architecture using the DFT is presented, alongside an exploration of techniques for wide-field imaging similar to those used in visibility-based imaging, and an explanation of why they do not map well to imaging directly with the digitized electric-field data. The DFT imaging method is demonstrated on real data from the LWA telescope, alongside a detailed performance analysis and an exploration of its applicability to other arrays.
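The core operation described above is a DFT applied to per-antenna electric-field samples. The toy sketch below forms sky-pixel values as a DFT-matrix/field-vector product and accumulates intensity over snapshots; the antenna layout, wavelength, pixel grid, and random field data are invented for the example and this is not the LWA pipeline code.

```python
import numpy as np

rng = np.random.default_rng(0)

n_ant, n_time = 64, 100
wavelength = 3.0                                     # metres, assumed
ant_xyz = rng.uniform(-50, 50, size=(n_ant, 3))      # fictitious antenna positions
ant_xyz[:, 2] *= 0.1                                 # mildly non-coplanar array

# Direction cosines (l, m) for an arbitrary grid of sky pixels, n = sqrt(1 - l^2 - m^2).
l = np.linspace(-0.3, 0.3, 32)
ll, mm = np.meshgrid(l, l)
nn = np.sqrt(1.0 - ll**2 - mm**2)
dirs = np.stack([ll, mm, nn], axis=-1).reshape(-1, 3)    # (n_pix, 3)

# DFT matrix: one phase term per (pixel, antenna); geometry handled exactly per pixel.
phase = 2j * np.pi / wavelength * (dirs @ ant_xyz.T)     # (n_pix, n_ant)
dft = np.exp(phase)

# Accumulate image intensity from per-snapshot electric-field vectors.
image = np.zeros(dirs.shape[0])
for _ in range(n_time):
    e_field = rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant)
    voltage_beam = dft @ e_field                         # matrix-vector product per snapshot
    image += np.abs(voltage_beam) ** 2
image = image.reshape(ll.shape) / n_time
print(image.shape)                                       # (32, 32)
```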
  5. Diffractive achromats (DAs) promise ultra-thin and lightweight form factors for full-color computational imaging systems. However, designing DAs with an optical transfer function (OTF) distribution suited to image reconstruction algorithms has been a difficult challenge. Emerging end-to-end optimization paradigms of diffractive optics and processing algorithms have achieved impressive results, but these approaches require immense computational resources and solve non-convex inverse problems with millions of parameters. Here, we propose a learned rotationally symmetric DA design using a concentric ring decomposition that reduces the computational complexity and memory requirements by one order of magnitude compared with conventional end-to-end optimization procedures, significantly simplifying the optimization. With this approach, we realize the joint learning of a DA with an aperture size of 8 mm and an image recovery neural network, i.e., Res-Unet, in an end-to-end manner across the full visible spectrum (429–699 nm). The peak signal-to-noise ratio of the recovered images of our learned DA is 1.3 dB higher than that of DAs designed by conventional sequential approaches, because the learned DA exhibits higher OTF amplitudes at high frequencies over the full spectrum. We fabricate the learned DA using imprinting lithography. Experiments show that it resolves both fine details and color fidelity of diverse real-world scenes under natural illumination. The proposed design paradigm paves the way for incorporating DAs into thinner, lighter, and more compact full-spectrum imaging systems.
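To illustrate the rotationally symmetric parameterization (a toy under stated assumptions, not the fabricated design), the sketch below expands a 1D vector of per-ring heights into a full 2D diffractive-element height map by radial lookup, which is why the number of free parameters drops from pixels squared to the number of rings; the pixel count, ring count, and height range are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 512           # simulated aperture sampling, assumed
n_rings = 128            # learnable parameters: one height per concentric ring
ring_heights = rng.uniform(0.0, 1.2e-6, size=n_rings)   # metres, illustrative values

# Radial coordinate of each pixel, normalized to the aperture radius.
coords = np.linspace(-1.0, 1.0, n_pixels)
xx, yy = np.meshgrid(coords, coords)
r = np.sqrt(xx**2 + yy**2)

# Map every pixel to its ring index; pixels outside the aperture get height 0.
ring_idx = np.minimum((r * n_rings).astype(int), n_rings - 1)
height_map = np.where(r <= 1.0, ring_heights[ring_idx], 0.0)

print(height_map.shape)                     # (512, 512) surface from only 128 parameters
print(f"free parameters: {n_rings} instead of {n_pixels**2}")
```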

     