Abstract: The Stein variational gradient descent (SVGD) algorithm is a deterministic particle method for sampling. However, a mean-field analysis reveals that the gradient flow corresponding to the SVGD algorithm (i.e., the Stein Variational Gradient Flow) only provides a constant-order approximation to the Wasserstein gradient flow corresponding to KL-divergence minimization. In this work, we propose the Regularized Stein Variational Gradient Flow, which interpolates between the Stein Variational Gradient Flow and the Wasserstein gradient flow. We establish various theoretical properties of the Regularized Stein Variational Gradient Flow (and its time discretization), including convergence to equilibrium, existence and uniqueness of weak solutions, and stability of the solutions. We provide preliminary numerical evidence of the improved performance offered by the regularization.
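For context, the plain SVGD particle update (the deterministic method the abstract builds on) can be sketched as follows. This is a minimal illustration, not the regularized flow proposed in the work; the RBF kernel, fixed bandwidth, step size, and standard-normal target are all assumptions made for the demo.

```python
import numpy as np

def rbf_kernel(x, h=1.0):
    # pairwise kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / h)
    diff = x[:, None, :] - x[None, :, :]           # (n, n, d), diff[i, j] = x_i - x_j
    K = np.exp(-np.sum(diff**2, axis=-1) / h)
    # gradient of K[i, j] with respect to x_i
    gradK = -2.0 * diff / h * K[:, :, None]
    return K, gradK

def svgd_step(x, grad_logp, step=0.1, h=1.0):
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    # first term drives particles toward high density; second term repels them apart
    n = x.shape[0]
    K, gradK = rbf_kernel(x, h)
    phi = (K.T @ grad_logp(x) + gradK.sum(axis=0)) / n
    return x + step * phi

# toy target: standard normal, so grad log p(x) = -x
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=0.5, size=(100, 1))  # start far from the target
for _ in range(500):
    x = svgd_step(x, lambda z: -z)
print(x.mean(), x.std())   # particles settle around the target distribution
```

The deterministic update is what distinguishes SVGD from Langevin-type samplers: no noise is injected, and the kernel's repulsive term alone keeps the particles spread out.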
Inverse Design of Nonlocal Metasurfaces Using Augmented Partial Factorization
Topology optimization of nonlocal metasurfaces requires the objective-function gradient accounting for all angles of interest. We generalize the recent “augmented partial factorization” method to compute such gradients efficiently and use it to inverse design a broad-angle metasurface beam splitter.
- Award ID(s): 2146021
- PAR ID: 10497743
- Publisher / Repository: Optica Publishing Group
- Date Published:
- ISBN: 978-1-957171-25-8
- Page Range / eLocation ID: FW4H.5
- Format(s): Medium: X
- Location: San Jose, CA
- Sponsoring Org: National Science Foundation
More Like this
-
Gradient structures exist ubiquitously in nature and are increasingly being introduced in engineering. However, understanding structural gradient–related mechanical behaviors in all gradient structures, including those in engineering materials, has been challenging. We explored the mechanical performance of a gradient nanotwinned structure with highly tunable structural gradients in pure copper. A large structural gradient allows for superior work hardening and strength that can exceed those of the strongest component of the gradient structure. We found through systematic experiments and atomistic simulations that this unusual behavior is afforded by a unique patterning of ultrahigh densities of dislocations in the grain interiors. These observations not only shed light on gradient structures, but may also indicate a promising route for improving the mechanical properties of materials through gradient design.
-
Purpose: Diffusion encoding gradient waveforms can impart intra-voxel and inter-voxel dephasing owing to bulk motion, limiting the achievable signal-to-noise ratio and complicating multishot acquisitions. In this study, we characterize improvements in phase consistency via gradient moment nulling of diffusion encoding waveforms. Methods: Healthy volunteers received neuro and cardiac MRI. Three gradient moment nulling levels were evaluated: compensation for position (M0), position + velocity (M1), and position + velocity + acceleration (M2). Three experiments were completed: (Exp-1) Fixed Trigger Delay Neuro DWI; (Exp-2) Mixed Trigger Delay Neuro DWI; and (Exp-3) Fixed Trigger Delay Cardiac DWI. Significant differences in the temporal phase SD between repeated acquisitions and in the spatial phase gradient across a given image were assessed. Results: M0 moment nulling served as the reference for all measures. In Exp-1, temporal phase SD for diffusion encoding was significantly reduced with M1 (35% of t-tests) and M2 (68% of t-tests). The spatial phase gradient was reduced in 23% of t-tests for M1 and 2% of cases for M2. In Exp-2, temporal phase SD significantly decreased with gradient moment nulling only for M2 (83% of t-tests), but the spatial phase gradient significantly decreased only with M1 (50% of t-tests). In Exp-3, gradient moment nulling significantly reduced temporal phase SD and spatial phase gradients (100% of t-tests), resulting in less signal attenuation and more accurate ADCs. Conclusion: We characterized the phase consistency afforded by gradient moment nulling for DWI. Using M1 for neuroimaging and M1 + M2 for cardiac imaging minimized temporal phase SDs and spatial phase gradients.
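The gradient moments being nulled are simple integrals, M_k = ∫ G(t) t^k dt, and the effect of waveform design on them can be checked numerically. A minimal sketch (not the study's actual waveforms; the unit amplitudes and lobe durations are illustrative assumptions): a bipolar gradient nulls M0 but retains a nonzero M1 (velocity sensitivity), while a 1:−2:1 flow-compensated waveform nulls both M0 and M1, leaving only M2.

```python
import numpy as np

def moments(G, t):
    # zeroth, first, and second gradient moments via midpoint Riemann sums
    dt = t[1] - t[0]
    return tuple(float(np.sum(G * t**k) * dt) for k in range(3))

T, n = 1.0, 1000                        # lobe duration (arbitrary units), samples per lobe
dt = T / n
# midpoint time grids so the piecewise-constant lobes integrate exactly
t2 = (np.arange(2 * n) + 0.5) * dt      # bipolar: lobes +A, -A
t3 = (np.arange(4 * n) + 0.5) * dt      # flow-compensated: lobes +A, -A (double width), +A
bipolar  = np.concatenate([np.ones(n), -np.ones(n)])
flowcomp = np.concatenate([np.ones(n), -np.ones(2 * n), np.ones(n)])

b0, b1, b2 = moments(bipolar, t2)
f0, f1, f2 = moments(flowcomp, t3)
print(b0, b1)        # M0 ~ 0, M1 = -1: position-compensated but velocity-sensitive
print(f0, f1, f2)    # M0 ~ 0, M1 ~ 0, M2 = 4: velocity-compensated as well
```

Higher-order nulling comes at the cost of longer waveforms, which is why the choice of compensation level is a per-application trade-off.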
-
The classical Langevin Monte Carlo method looks for samples from a target distribution by descending the samples along the gradient of the target distribution. The method enjoys a fast convergence rate. However, the numerical cost is sometimes high because each iteration requires the computation of a gradient. One approach to eliminating the gradient computation is to employ the concept of an "ensemble": a large number of particles are evolved together so that neighboring particles provide gradient information to each other. In this article, we discuss two algorithms that integrate the ensemble feature into LMC, along with their associated properties. In particular, we find that if one directly surrogates the gradient with the ensemble approximation, the resulting algorithm, termed Ensemble Langevin Monte Carlo, is unstable due to a high-variance term. If the gradients are replaced by the ensemble approximations only in a constrained manner, to protect against the unstable points, the resulting algorithm, termed Constrained Ensemble Langevin Monte Carlo, resembles classical LMC up to an ensemble error but removes most of the gradient computation.
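The classical LMC baseline that both ensemble variants modify can be sketched as follows. This is the standard unadjusted Langevin update, not the paper's ensemble estimators: those would replace the explicit `grad_logp` call with an ensemble-based surrogate, which is omitted here. The standard-normal target and step size are illustrative assumptions.

```python
import numpy as np

def lmc_step(x, grad_logp, step, rng):
    # one unadjusted Langevin Monte Carlo step:
    #   x <- x + step * grad log pi(x) + sqrt(2 * step) * noise
    return x + step * grad_logp(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)

# toy target: standard normal, so grad log pi(x) = -x;
# run many independent chains at once to estimate the stationary law
rng = np.random.default_rng(0)
x = rng.normal(size=(2000,)) + 5.0      # all chains start far from the target
for _ in range(2000):
    x = lmc_step(x, lambda z: -z, step=0.05, rng=rng)
print(x.mean(), x.std())                # close to the target's mean 0 and std 1
```

Each step above costs one gradient evaluation per chain; the point of the ensemble variants is to amortize or remove that cost by letting the particle cloud itself approximate the gradient.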
-
Abstract: We prove, under mild conditions, the convergence of a Riemannian gradient descent method for a hyperbolic neural network regression model, in both the batch and stochastic gradient descent settings. We also discuss a Riemannian version of the Adam algorithm, and we show numerical simulations of these algorithms on various benchmarks.
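A minimal sketch of Riemannian gradient descent on the Poincaré ball, the model space typically used for hyperbolic networks: the Euclidean gradient is rescaled by the inverse of the conformal metric factor, ((1 − ||x||²)/2)², and a simple clip-style retraction keeps iterates inside the ball. The toy quadratic objective, learning rate, and retraction are illustrative assumptions, not the paper's regression model.

```python
import numpy as np

def poincare_rgd(grad_f, x0, lr=0.5, steps=500, eps=1e-5):
    # Riemannian gradient descent on the Poincare ball:
    # rescale the Euclidean gradient by ((1 - ||x||^2) / 2)^2
    # (the inverse metric), then retract back inside the unit ball
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        scale = ((1.0 - np.dot(x, x)) / 2.0) ** 2
        x = x - lr * scale * grad_f(x)
        norm = np.linalg.norm(x)
        if norm >= 1.0:                  # retraction: project just inside the ball
            x = x / norm * (1.0 - eps)
    return x

# toy objective f(x) = ||x - a||^2 pulling x toward a point a inside the ball
a = np.array([0.5, 0.0])
x = poincare_rgd(lambda x: 2.0 * (x - a), x0=[-0.3, 0.4])
print(x)   # converges to a
```

The metric rescaling shrinks steps near the boundary of the ball, where hyperbolic distances blow up; this is the key difference from running plain gradient descent on the same coordinates.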