Cooling Improves Cosmic Microwave Background Map-making when Low-frequency Noise is Large
Abstract: In the context of cosmic microwave background data analysis, we study the solution to the equation that transforms scanning data into a map. As originally suggested in “messenger” methods for solving linear systems, we split the noise covariance into uniform and nonuniform parts and adjust their relative weights during the iterative solution. With simulations, we study mock instrumental data with different noise properties, and find that this “cooling” or perturbative approach is particularly effective when there is significant low-frequency noise in the timestream. In such cases, a conjugate gradient algorithm applied to this modified system converges faster and to a higher fidelity solution than the standard conjugate gradient approach. We give an analytic estimate for the parameter that controls how gradually the linear system should change during the course of the solution.
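As a rough illustration of the scheme the abstract describes (splitting the noise covariance into a uniform part τI and a nonuniform remainder, then relaxing their relative weight toward the true system while iterating with conjugate gradients), here is a minimal Python sketch. The toy 1-D scan, the 1/f-plus-white noise model applied via FFT, the geometric cooling schedule, and every name in the code are assumptions made for illustration; none of it is taken from the paper.

```python
# Minimal sketch of "cooled" CG map-making: solve P^T N(lam)^-1 P m = P^T N(lam)^-1 d
# for a decreasing sequence lam -> 1, where N(lam) = lam*tau*I + (N - tau*I).
# Everything here (scan, noise model, schedule) is a toy stand-in, not the paper's setup.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n_samp, n_pix = 4096, 64

# Pointing: each time sample hits one pixel (a repeated 1-D raster scan).
pix = np.tile(np.arange(n_pix), n_samp // n_pix)
def P(m):   return m[pix]                                        # map -> timestream
def P_T(d): return np.bincount(pix, weights=d, minlength=n_pix)  # timestream -> map

# Noise power spectrum: white floor tau plus large low-frequency (1/f) noise.
freqs = np.fft.rfftfreq(n_samp, d=1.0)
tau = 1.0
noise_ps = tau + 50.0 / np.maximum(freqs, freqs[1])              # guard against f = 0

def apply_Ninv(d, lam):
    """Apply the inverse of the cooled covariance N(lam), circulant in this toy model."""
    cooled_ps = lam * tau + (noise_ps - tau)
    return np.fft.irfft(np.fft.rfft(d) / cooled_ps, n=n_samp)

# Simulated data: smooth sky plus correlated noise (normalization only approximate).
true_map = np.sin(2 * np.pi * np.arange(n_pix) / n_pix)
noise = np.fft.irfft(np.fft.rfft(rng.standard_normal(n_samp)) * np.sqrt(noise_ps), n=n_samp)
data = P(true_map) + noise

# Cooling loop: each stage warm-starts CG from the previous stage's map.
m = np.zeros(n_pix)
for lam in [100.0, 10.0, 3.0, 1.0]:
    A = LinearOperator((n_pix, n_pix), dtype=float,
                       matvec=lambda x, lam=lam: P_T(apply_Ninv(P(x), lam)))
    b = P_T(apply_Ninv(data, lam))
    m, _ = cg(A, b, x0=m, maxiter=200)

# The map's overall offset is poorly constrained by 1/f noise; compare up to a constant.
err = m - true_map
print("rms map error:", np.sqrt(np.mean((err - err.mean()) ** 2)))
```

In this sketch the early, large-lam stages make the effective covariance nearly white, so each CG stage is cheap, while the final lam = 1 stage reproduces the exact generalized least-squares system; this mirrors the behavior the abstract attributes to the cooling approach when low-frequency noise dominates.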
- Award ID(s): 1815887
- PAR ID: 10360355
- Publisher / Repository: DOI PREFIX: 10.3847
- Date Published:
- Journal Name: The Astrophysical Journal
- Volume: 922
- Issue: 2
- ISSN: 0004-637X
- Format(s): Medium: X; Size: Article No. 97
- Size(s): Article No. 97
- Sponsoring Org: National Science Foundation
More Like this
- Recent research has demonstrated that transformers, particularly linear attention models, implicitly execute gradient-descent-like algorithms on data provided in-context during their forward inference step. However, their capability in handling more complex problems remains unexplored. In this paper, we prove that each layer of a linear transformer maintains a weight vector for an implicit linear regression problem and can be interpreted as performing a variant of preconditioned gradient descent. We also investigate the use of linear transformers in a challenging scenario where the training data is corrupted with different levels of noise. Remarkably, we demonstrate that for this problem linear transformers discover an intricate and highly effective optimization algorithm, surpassing or matching in performance many reasonable baselines. We analyze this algorithm and show that it is a novel approach incorporating momentum and adaptive rescaling based on noise levels. Our findings show that even linear transformers possess the surprising ability to discover sophisticated optimization strategies.
- Abstract: The goal of this study is to develop a new computed tomography (CT) image reconstruction method, aiming at improving the quality of the reconstructed images of existing methods while reducing computational costs. Existing CT reconstruction is modeled by pixel-based piecewise constant approximations of the integral equation that describes the CT projection data acquisition process. Using these approximations imposes a bottleneck model error and results in a discrete system of a large size. We propose to develop a content-adaptive unstructured grid (CAUG) based regularized CT reconstruction method to address these issues. Specifically, we design a CAUG of the image domain to sparsely represent the underlying image, and introduce a CAUG-based piecewise linear approximation of the integral equation by employing a collocation method. We further apply a regularization defined on the CAUG for the resulting ill-posed linear system, which may lead to a sparse linear representation for the underlying solution. The regularized CT reconstruction is formulated as a convex optimization problem, whose objective function consists of a weighted least square norm based fidelity term, a regularization term and a constraint term. Here, the corresponding weighted matrix is derived from the simultaneous algebraic reconstruction technique (SART). We then develop a SART-type preconditioned fixed-point proximity algorithm to solve the optimization problem. Convergence analysis is provided for the resulting iterative algorithm. Numerical experiments demonstrate the superiority of the proposed method over several existing methods in terms of both suppressing noise and reducing computational costs. These methods include the SART without regularization and with the quadratic regularization, the traditional total variation (TV) regularized reconstruction method and the TV superiorized conjugate gradient method on the pixel grid.
- The ability to detect sparse signals from noisy, high-dimensional data is a top priority in modern science and engineering. It is well known that a sparse solution of the linear system Aρ = b₀ can be found efficiently with an ℓ₁-norm minimization approach if the data are noiseless. However, detection of the signal from data corrupted by noise is still a challenging problem, as the solution depends, in general, on a regularization parameter whose optimal value is not easy to choose. We propose an efficient approach that does not require any parameter estimation. We introduce a no-phantom weight τ and the Noise Collector matrix C and solve an augmented system Aρ + Cη = b₀ + e, where e is the noise. We show that the ℓ₁-norm minimal solution of this system has zero false discovery rate for any level of noise, with probability that tends to one as the dimension of b₀ increases to infinity. We obtain exact support recovery if the noise is not too large and develop a fast Noise Collector algorithm, which makes the computational cost of solving the augmented system comparable with that of the original one. We demonstrate the effectiveness of the method in applications to passive array imaging. (A toy linear-programming sketch of this augmented ℓ₁ system appears after this list.)
- Zhang, Qichun (Ed.) We develop a general framework for state estimation in systems modeled with noise-polluted continuous time dynamics and discrete time noisy measurements. Our approach is based on maximum likelihood estimation and employs the calculus of variations to derive optimality conditions for continuous time functions. We make no prior assumptions on the form of the mapping from measurements to state-estimate or on the distributions of the noise terms, making the framework more general than Kalman filtering/smoothing where this mapping is assumed to be linear and the noises Gaussian. The optimal solution that arises is interpreted as a continuous time spline, the structure and temporal dependency of which is determined by the system dynamics and the distributions of the process and measurement noise. Similar to Kalman smoothing, the optimal spline yields increased data accuracy at instants when measurements are taken, in addition to providing continuous time estimates outside the measurement instances. We demonstrate the utility and generality of our approach via illustrative examples that render both linear and nonlinear data filters depending on the particular system. Application of the proposed approach to a Monte Carlo simulation exhibits significant performance improvement in comparison to a common existing method.
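For the Noise Collector item above, the augmented ℓ₁ problem can be illustrated with a small linear program. This is only a sketch under assumptions: the Gaussian matrix A, the unit-norm random columns of C, the problem sizes, the weight τ = 2, and its placement on the signal term are invented for illustration and are not taken from that paper; the code simply solves a weighted basis-pursuit problem via the standard positive/negative-part reformulation with SciPy's linprog.

```python
# Toy weighted basis pursuit for the augmented system A rho + C eta = b0 + e:
# minimize tau*||rho||_1 + ||eta||_1 subject to the equality constraint.
# All sizes, the weight tau, and its placement are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_meas, n_cols, n_noise = 40, 120, 200

A = rng.standard_normal((n_meas, n_cols)) / np.sqrt(n_meas)
C = rng.standard_normal((n_meas, n_noise))
C /= np.linalg.norm(C, axis=0)                  # "noise collector" columns on the unit sphere

rho_true = np.zeros(n_cols)
rho_true[[5, 37, 81]] = [2.0, -1.5, 1.0]        # sparse signal
b = A @ rho_true + 0.05 * rng.standard_normal(n_meas)   # noisy data b0 + e

tau = 2.0                                       # hypothetical "no-phantom" weight
# Split rho and eta into nonnegative parts so the l1 objective becomes linear:
# minimize c^T z  subject to  [A, -A, C, -C] z = b,  z >= 0.
M = np.hstack([A, -A, C, -C])
c = np.concatenate([tau * np.ones(2 * n_cols), np.ones(2 * n_noise)])
res = linprog(c, A_eq=M, b_eq=b, bounds=(0, None), method="highs")

rho_hat = res.x[:n_cols] - res.x[n_cols:2 * n_cols]
print("recovered support:", np.flatnonzero(np.abs(rho_hat) > 1e-3))
```

With a suitable weight, the η part absorbs much of the noise while the recovered ρ stays supported near the true signal, which is the qualitative behavior the abstract describes; the precise guarantees depend on the construction of C and τ in the cited work.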