Intensity Gradients Technique: Synergy with Velocity Gradients and Polarization Studies
                        
- Award ID(s): 1816234
- PAR ID: 10191894
- Date Published:
- Journal Name: The Astrophysical Journal
- Volume: 886
- Issue: 1
- ISSN: 1538-4357
- Page Range / eLocation ID: 17
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Many probabilistic modeling problems in machine learning use gradient-based optimization in which the objective takes the form of an expectation. These problems can be challenging when the parameters to be optimized determine the probability distribution under which the expectation is being taken, as the naïve Monte Carlo procedure is not differentiable. Reparameterization gradients make it possible to efficiently perform optimization of these Monte Carlo objectives by transforming the expectation to be differentiable, but the approach is typically limited to distributions with simple forms and tractable normalization constants. Here we describe how to differentiate samples from slice sampling to compute *slice sampling reparameterization gradients*, enabling a richer class of Monte Carlo objective functions to be optimized. Slice sampling is a Markov chain Monte Carlo algorithm for simulating samples from probability distributions; it only requires a density function that can be evaluated point-wise up to a normalization constant, making it applicable to a variety of inference problems and unnormalized models. Our approach is based on the observation that when the slice endpoints are known, the sampling path is a deterministic and differentiable function of the pseudo-random variables, since the algorithm is rejection-free. We evaluate the method on synthetic examples and apply it to a variety of applications with reparameterization of unnormalized probability distributions.
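  As an illustration of the key observation in this abstract (not code from the paper), the minimal sketch below differentiates through a single slice-sampling step for a 1-D Gaussian target with parameters `mu` and `sigma`, where the slice endpoints have a closed form; the Gaussian target, the function names, and the use of JAX are assumptions made for the example. With the pseudo-random draws `u1` and `u2` held fixed, the returned sample is a deterministic, differentiable function of the parameters.

  ```python
  # Hypothetical sketch: one differentiable slice-sampling step for a 1-D
  # Gaussian with parameters (mu, sigma), using closed-form slice endpoints.
  import jax
  import jax.numpy as jnp

  def log_density(x, mu, sigma):
      # Unnormalized log density of N(mu, sigma^2); the constant is irrelevant.
      return -0.5 * ((x - mu) / sigma) ** 2

  def slice_step(x0, mu, sigma, u1, u2):
      # 1) Choose the slice height: log_y = log p(x0) + log(u1), u1 ~ U(0, 1).
      log_y = log_density(x0, mu, sigma) + jnp.log(u1)
      # 2) Closed-form slice endpoints {x : log p(x) > log_y} for the Gaussian.
      half_width = sigma * jnp.sqrt(-2.0 * log_y)
      left, right = mu - half_width, mu + half_width
      # 3) Draw the new point uniformly on the slice, u2 ~ U(0, 1); rejection-free.
      return left + u2 * (right - left)

  # With (u1, u2) fixed, the sample is a differentiable function of mu and sigma,
  # so standard autodiff recovers a reparameterization-style gradient.
  u1, u2 = jax.random.uniform(jax.random.PRNGKey(0), (2,))
  grad_mu = jax.grad(slice_step, argnums=1)(0.3, 1.0, 0.5, u1, u2)
  print(grad_mu)  # d(sample)/d(mu) for this single step
  ```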
- Understanding and predicting turbulent flow phenomena remain a challenge for both theory and applications. The nonlinear and nonlocal character of small-scale turbulence can be comprehensively described in terms of the velocity gradients, which determine fundamental quantities like dissipation, enstrophy, and the small-scale topology of turbulence. The dynamical equation for the velocity gradient succinctly encapsulates the nonlinear physics of turbulence; it offers an intuitive description of a host of turbulence phenomena and enables establishing connections between turbulent dynamics, statistics, and flow structure. The consideration of filtered velocity gradients enriches this view to express the multiscale aspects of nonlinearity and flow structure in a formulation directly applicable to large-eddy simulations. Driven by theoretical advances together with growing computational and experimental capabilities, recent activities in this area have elucidated key aspects of turbulence physics and advanced modeling capabilities.
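  For reference, the dynamical equation alluded to above is commonly written as the gradient of the incompressible Navier–Stokes equations; the form quoted here is the standard textbook expression, not reproduced from the article itself.

  ```latex
  % Velocity-gradient evolution for incompressible flow, with
  % A_{ij} = \partial u_i / \partial x_j (standard form, stated for reference):
  \begin{equation}
    \frac{D A_{ij}}{D t}
      = -A_{ik} A_{kj}
        - \frac{\partial^2 p}{\partial x_i \, \partial x_j}
        + \nu \nabla^2 A_{ij},
    \qquad
    \frac{D}{D t} = \frac{\partial}{\partial t} + u_k \frac{\partial}{\partial x_k},
  \end{equation}
  % where p is pressure divided by density and \nu is the kinematic viscosity.
  % Taking the trace gives the pressure Poisson equation \nabla^2 p = -A_{ik} A_{ki},
  % which makes the pressure Hessian the nonlocal contribution to the dynamics.
  ```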
- A core capability of intelligent systems is the ability to quickly learn new tasks by drawing on prior experience. Gradient (or optimization) based meta-learning has recently emerged as an effective approach for few-shot learning. In this formulation, meta-parameters are learned in the outer loop, while task-specific models are learned in the inner loop, by using only a small amount of data from the current task. A key challenge in scaling these approaches is the need to differentiate through the inner loop learning process, which can impose considerable computational and memory burdens. By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner level optimization and not the path taken by the inner loop optimizer. This effectively decouples the meta-gradient computation from the choice of inner loop optimizer. As a result, our approach is agnostic to the choice of inner loop optimizer and can gracefully handle many gradient steps without vanishing gradients or memory constraints. Theoretically, we prove that implicit MAML can compute accurate meta-gradients with a memory footprint that is, up to small constant factors, no more than that which is required to compute a single inner loop gradient and at no overall increase in the total computational cost. Experimentally, we show that these benefits of implicit MAML translate into empirical gains on few-shot image recognition benchmarks.
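  A minimal sketch of the implicit-gradient idea behind this approach follows; it is illustrative only, and the function names, the quadratic toy losses, and the use of JAX are assumptions, not the paper's code. With an inner problem of the form phi* = argmin_phi L_train(phi) + (lam/2)||phi - theta||^2, the implicit function theorem gives the meta-gradient (I + Hessian(L_train)(phi*)/lam)^{-1} grad L_test(phi*), which can be approximated with conjugate gradient and Hessian-vector products, without differentiating through the inner optimizer's path.

  ```python
  # Hypothetical sketch of an implicit meta-gradient via conjugate gradient.
  import jax
  import jax.numpy as jnp
  from jax.scipy.sparse.linalg import cg

  def implicit_meta_grad(train_loss, test_loss, phi_star, lam):
      # Outer-loss gradient evaluated at the inner solution phi*.
      g = jax.grad(test_loss)(phi_star)

      def matvec(v):
          # (I + H/lam) v, with H v computed as a Hessian-vector product
          # of the inner training loss (no explicit Hessian is formed).
          return v + jax.jvp(jax.grad(train_loss), (phi_star,), (v,))[1] / lam

      # Solve (I + H/lam) v = g approximately with conjugate gradient.
      v, _ = cg(matvec, g)
      return v  # approximate meta-gradient with respect to theta

  # Toy quadratic example (names and values are illustrative only).
  train_loss = lambda phi: jnp.sum((phi - 1.0) ** 2)
  test_loss = lambda phi: jnp.sum((phi - 2.0) ** 2)
  phi_star = jnp.array([1.2, 0.8])  # assume the inner solver returned this
  print(implicit_meta_grad(train_loss, test_loss, phi_star, lam=1.0))
  ```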