Estimating Noise Propagation of Neural Network Based Image Reconstruction Using Automated Differentiation
Image reconstructions involving neural networks (NNs) are generally non-iterative and computationally efficient. However, without an analytical expression describing the reconstruction process, the computation of noise propagation becomes difficult. Automated differentiation allows rapid computation of derivatives without an analytical expression. In this work, the feasibility of computing noise propagation with automated differentiation was investigated. The noise propagation of image reconstruction by an end-to-end variational network was estimated using automated differentiation and compared with Monte Carlo simulation. The root-mean-square error (RMSE) map showed excellent agreement between automated differentiation and Monte Carlo simulation over a wide range of SNRs.
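The idea in the abstract can be sketched numerically: for a differentiable reconstruction f, input noise with covariance σ²I propagates to an output covariance of approximately σ²·J·Jᵀ, where J is the Jacobian that automatic differentiation would supply for a real network. A minimal sketch, assuming a toy elementwise nonlinearity as a stand-in for the network (finite differences stand in for autodiff here so the example needs only numpy):

```python
import numpy as np

def recon(x):
    # toy nonlinear "reconstruction" standing in for a neural network
    return np.tanh(x) + 0.1 * x**2

def jacobian_fd(f, x, eps=1e-6):
    # central-difference Jacobian; autodiff (e.g. jax.jacobian) replaces this
    n = x.size
    J = np.zeros((n, n))
    for i in range(n):
        d = np.zeros(n); d[i] = eps
        J[:, i] = (f(x + d) - f(x - d)) / (2 * eps)
    return J

rng = np.random.default_rng(0)
x0 = rng.normal(size=4)
sigma = 0.01                       # input noise standard deviation
J = jacobian_fd(recon, x0)
cov_lin = sigma**2 * J @ J.T       # linearized output noise covariance

# Monte Carlo reference: reconstruct many independently perturbed copies of x0
samples = np.array([recon(x0 + sigma * rng.normal(size=4)) for _ in range(20000)])
cov_mc = np.cov(samples.T)

# compare per-pixel noise standard deviations from both estimates
err = np.abs(np.sqrt(np.diag(cov_lin)) - np.sqrt(np.diag(cov_mc))).max()
```

At small input noise the linearized (Jacobian) estimate and the Monte Carlo estimate agree closely, mirroring the agreement reported above.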
- PAR ID:
- 10392299
- Date Published:
- Journal Name:
- ISMRM
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
We examine the effects of imperfect phase estimation of a reference signal on the bit error rate and mutual information over a communication channel influenced by fading and thermal noise. The Two-Wave Diffuse-Power (TWDP) model is utilized for statistical characterization of a propagation environment where two dominant line-of-sight components exist together with diffuse ones. We derive a novel analytical expression of the Fourier series for the probability density function arising from the composite received signal phase. Further, the expression for the bit error rate is presented and numerically evaluated. We develop efficient analytical, numerical, and simulation methods for estimating the value of the error floor and identifying the range of acceptable signal-to-noise ratio (SNR) values in cases when the floor is present during the detection of multilevel phase-shift keying (PSK) signals. In addition, we use Monte Carlo simulations to evaluate the mutual information for modulation orders two, four, and eight, and identify its dependence on receiver hardware imperfections under the given channel conditions. Our results expose a direct correspondence between the bit error rate and mutual information on one side, and the parameters of the TWDP channel, SNR, and phase noise standard deviation on the other side. The results illustrate that the error floor values are strongly influenced by the phase noise when signals propagate over a TWDP channel. In addition, the phase noise considerably affects the mutual information.
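The error-floor behavior described above can be illustrated with a much simpler Monte Carlo sketch; this is not the paper's TWDP analysis, just BPSK over AWGN with the imperfect phase reference modeled as zero-mean Gaussian phase noise. At high SNR the phase noise dominates thermal noise and the bit error rate flattens toward a floor:

```python
import numpy as np

rng = np.random.default_rng(1)

def ber_bpsk(snr_db, phase_std_rad, n=200000):
    # Monte Carlo BER of BPSK with a phase-noise-corrupted coherent reference
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, n)
    s = 2 * bits - 1                                  # map bits to +/-1 symbols
    noise = rng.normal(scale=np.sqrt(1 / (2 * snr)), size=n)
    theta = rng.normal(scale=phase_std_rad, size=n)   # phase-estimation error
    r = s * np.cos(theta) + noise                     # detection with rotated reference
    return np.mean((r > 0).astype(int) != bits)

ber_clean = ber_bpsk(12.0, 0.0)    # perfect phase reference
ber_noisy = ber_bpsk(12.0, 0.5)    # imperfect reference at the same SNR
ber_floor = ber_bpsk(30.0, 0.5)    # raising SNR no longer helps: error floor
```

Increasing SNR drives `ber_clean` toward zero, while with phase noise the BER saturates near the probability that the phase error flips the decision, which is the qualitative error-floor effect the abstract quantifies for TWDP fading and multilevel PSK.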
Blohm, Gunnar (Ed.) Neural circuits consist of many noisy, slow components, with individual neurons subject to ion channel noise, axonal propagation delays, and unreliable and slow synaptic transmission. This raises a fundamental question: how can reliable computation emerge from such unreliable components? A classic strategy is to simply average over a population of N weakly-coupled neurons to achieve errors that scale as 1/√N. But more interestingly, recent work has introduced networks of leaky integrate-and-fire (LIF) neurons that achieve coding errors that scale superclassically as 1/N by combining the principles of predictive coding and fast and tight inhibitory-excitatory balance. However, spike transmission delays preclude such fast inhibition, and computational studies have observed that such delays can cause pathological synchronization that in turn destroys superclassical coding performance. Intriguingly, it has also been observed in simulations that noise can actually improve coding performance, and that there exists some optimal level of noise that minimizes coding error. However, we lack a quantitative theory that describes this fascinating interplay between delays, noise, and neural coding performance in spiking networks. In this work, we elucidate the mechanisms underpinning this beneficial role of noise by deriving analytical expressions for coding error as a function of spike propagation delay and noise levels in predictive-coding tight-balance networks of LIF neurons. Furthermore, we compute the minimal coding error and the associated optimal noise level, finding that they grow as power laws with the delay. Our analysis reveals quantitatively how optimal levels of noise can rescue neural coding performance in spiking neural networks with delays by preventing the build-up of pathological synchrony without overwhelming the overall spiking dynamics.
This analysis can serve as a foundation for the further study of precise computation in the presence of noise and delays in efficient spiking neural circuits.
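The classic 1/√N baseline mentioned above is easy to verify directly; this sketch is not the paper's LIF-network analysis, only the population-averaging decoder it improves upon. Each of N independent "neurons" reports the signal plus unit-variance noise, and the decoder averages them:

```python
import numpy as np

rng = np.random.default_rng(2)
signal = 1.0
noise_std = 1.0

def decode_error(n_neurons, trials=20000):
    # each neuron reports signal + independent noise; the decoder averages them
    estimates = signal + noise_std * rng.normal(size=(trials, n_neurons)).mean(axis=1)
    return np.sqrt(np.mean((estimates - signal) ** 2))   # RMS coding error

e100 = decode_error(100)     # error ~ noise_std / sqrt(100)
e400 = decode_error(400)     # error ~ noise_std / sqrt(400)
ratio = e100 / e400          # ~ sqrt(400 / 100) = 2, the 1/sqrt(N) scaling
```

Quadrupling the population halves the error, confirming the 1/√N scaling; the superclassical 1/N scaling discussed in the abstract requires the coordinated balanced spiking that the paper analyzes, not independent averaging.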
Dual-phase liquid xenon time projection chambers (LXeTPCs) have been successfully applied in rare event searches in astroparticle physics because of their ability to reach low backgrounds and detect small scintillation signals with photosensors. Accurate modeling of optical properties is essential for reconstructing particle interactions within these detectors as well as for developing data selection criteria. This is commonly achieved with discretized maps derived from Monte Carlo simulation or approximated with empirical analytical models. In this work, we employ a novel approach: a neural network trained with a Poisson log-likelihood ratio loss to model the mapping from light source location to the expected light intensity for each photosensor. We demonstrate its effectiveness by integrating it into a likelihood fitter for position reconstruction, simultaneously providing insights into the uncertainty associated with the reconstructed position.
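The likelihood-fitting step described above can be sketched with a toy model; this is a hedged illustration, not the detector's actual optics: an analytic inverse-square "light map" stands in for the trained network, and a grid search stands in for the paper's likelihood fitter. Reconstruction maximizes the Poisson log-likelihood of the observed per-sensor photon counts over candidate source positions:

```python
import numpy as np

rng = np.random.default_rng(3)
sensors = np.array([[0.0, 1.0], [1.0, 1.0], [0.0, 0.0], [1.0, 0.0]])

def expected_counts(pos, total_light=5000.0):
    # toy inverse-square light map; a trained neural network would replace this
    d2 = np.sum((sensors - pos) ** 2, axis=1)
    w = 1.0 / (d2 + 0.05)
    return total_light * w / w.sum()

true_pos = np.array([0.3, 0.6])
counts = rng.poisson(expected_counts(true_pos))   # observed photon counts

def poisson_loglik(pos):
    mu = expected_counts(pos)
    return np.sum(counts * np.log(mu) - mu)       # log(k!) constant dropped

# grid-search likelihood fitter over candidate positions
grid = np.linspace(0.0, 1.0, 101)
ll = np.array([[poisson_loglik(np.array([x, y])) for x in grid] for y in grid])
iy, ix = np.unravel_index(np.argmax(ll), ll.shape)
fitted = np.array([grid[ix], grid[iy]])
```

The shape of the likelihood surface around the maximum is what provides the position-uncertainty information the abstract mentions.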
The sparse interferometric coverage of the Event Horizon Telescope (EHT) poses a significant challenge for both reconstruction and model fitting of black hole images. PRIMO is a new image reconstruction algorithm based on principal components analysis that uses the results of high-fidelity general relativistic, magnetohydrodynamic simulations of low-luminosity accretion flows as a training set. This allows the reconstruction of images that are consistent with the interferometric data and that live in the space of images spanned by the simulations. PRIMO uses Markov Chain Monte Carlo to fit a linear combination of principal components derived from an ensemble of simulated images to interferometric data. We show that PRIMO can efficiently and accurately reconstruct synthetic EHT data sets for several simulated images, even when the simulation parameters are significantly different from those of the image ensemble that was used to generate the principal components. The resulting reconstructions achieve resolution that is consistent with the performance of the array and do not introduce significant biases in image features such as the diameter of the ring of emission.
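The fit-in-PC-space idea above can be sketched in a few lines; this is an illustrative stand-in, not PRIMO: random smooth 1-D "images" replace the GRMHD training set, and a least-squares fit replaces the Markov Chain Monte Carlo (which would additionally yield coefficient uncertainties). Principal components are derived from the ensemble, then a noisy target is reconstructed as a linear combination of them:

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_train = 64, 200

# toy "simulation library": random mixtures of a few smooth basis patterns
t = np.linspace(0, 2 * np.pi, n_pix)
basis = np.stack([np.sin(t), np.cos(t), np.sin(2 * t), np.cos(2 * t)])
train = rng.normal(size=(n_train, 4)) @ basis

# principal components of the simulated-image ensemble
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
pcs = vt[:4]                                   # keep the leading components

# noisy "observation" of a target image from the same family
target = np.array([1.0, -0.5, 0.3, 0.2]) @ basis
noisy = target + 0.05 * rng.normal(size=n_pix)

# fit PC coefficients to the data (least squares stands in for MCMC)
coef, *_ = np.linalg.lstsq(pcs.T, noisy - mean, rcond=None)
recon = mean + coef @ pcs
rmse = np.sqrt(np.mean((recon - target) ** 2))
```

Because the fit is confined to the low-dimensional space spanned by the components, most of the noise is rejected and the reconstruction lands closer to the true image than the raw noisy data, which is the regularizing effect PRIMO exploits with its simulation-derived components.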