
Title: Maximizers of nonlocal interactions of Wasserstein type
We characterize the maximizers of a functional that involves the minimization of the Wasserstein distance between sets of equal volume. We prove that balls are the only maximizers by combining a symmetrization-by-reflection technique with the uniqueness of optimal transport plans. Further, in one dimension, we provide a sharp quantitative refinement of this maximality result.
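As an illustration of the quantity the abstract is built on, the sketch below (our own assumption, not code from the paper) computes the 1-Wasserstein distance between the uniform measures on two equal-length subsets of the line using SciPy; in one dimension this distance has a closed form via quantile functions, which is what makes the sharp quantitative refinement tractable there.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Hypothetical illustration: the 1-Wasserstein distance between the
# uniform measures on two equal-length subsets of the line, discretized
# on grids of equal size (equal weights => uniform normalized measures).
grid_a = np.linspace(0.0, 1.0, 1000)   # set A = [0, 1]
grid_b = np.linspace(2.0, 3.0, 1000)   # set B = [2, 3], same length

w1 = wasserstein_distance(grid_a, grid_b)
print(w1)  # ≈ 2.0: the monotone map x -> x + 2 moves each unit of mass distance 2
```

In one dimension the optimal plan is always the monotone rearrangement, so the distance is just the average displacement under that map; here every point travels exactly 2.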
Award ID(s):
2306962
PAR ID:
10584648
Author(s) / Creator(s):
; ;
Publisher / Repository:
EDP Sciences
Date Published:
Journal Name:
ESAIM: Control, Optimisation and Calculus of Variations
Volume:
30
ISSN:
1292-8119
Page Range / eLocation ID:
80
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. An inequality of Brascamp-Lieb-Luttinger and of Rogers states that among subsets of Euclidean space $\mathbb{R}^d$ of specified Lebesgue measures, (tuples of) balls centered at the origin are maximizers of certain functionals defined by multidimensional integrals. For $d>1$, this inequality only applies to functionals invariant under a diagonal action of $\operatorname{Sl}(d)$. We investigate functionals of this type, and their maximizers, in perhaps the simplest situation in which $\operatorname{Sl}(d)$ invariance does not hold. Assuming a more limited symmetry encompassing dilations but not rotations, we show under natural hypotheses that maximizers exist and, moreover, that there exist distinguished maximizers whose structure reflects this limited symmetry. For small perturbations of the $\operatorname{Sl}(d)$-invariant framework we show that these distinguished maximizers are strongly convex sets with infinitely differentiable boundaries. It is shown that in the absence of partial symmetry, maximizers fail to exist for certain arbitrarily small perturbations of $\operatorname{Sl}(d)$-invariant structures.
  2. Abstract When selecting from too many options (i.e., choice overload), maximizers (people who search exhaustively to make decisions that are optimal) report more negative post-decisional evaluations of their choices than do satisficers (people who search minimally to make decisions that are sufficient). Although ample evidence exists for differences in responses after the fact, little is known about possible divergences in maximizers' and satisficers' experiences during choice overload. Thus, using the biopsychosocial model of challenge/threat, we examined 128 participants' cardiovascular responses as they actively made a selection from many options. Specifically, we focused on cardiovascular responses assessing the degree to which individuals (a) viewed their decisions as valuable/important and (b) viewed themselves as capable (vs. incapable) of making a good choice. Although we found no differences in terms of the value individuals placed on their decisions (i.e., cardiovascular responses of task engagement), satisficers, compared to maximizers, exhibited cardiovascular responses consistent with feeling less capable of making their choice (i.e., greater relative threat). The current work provides a novel investigation of the nature of differences in maximizers'/satisficers' momentary choice overload experiences, suggesting insight into why they engage in such distinct search behaviors.
  3. Can deep convolutional neural networks (CNNs) for image classification be interpreted as utility maximizers with information costs? By performing set-valued system identification for Bayesian decision systems, we demonstrate that deep CNNs behave equivalently (in terms of necessary and sufficient conditions) to rationally inattentive Bayesian utility maximizers, a generative model used extensively in economics for human decision-making. Our claim is based on approximately 500 numerical experiments on 5 widely used neural network architectures. The parameters of the resulting interpretable model are computed efficiently via convex feasibility algorithms. As a practical application, we also illustrate how the reconstructed interpretable model can predict the classification performance of deep CNNs with high accuracy. The theoretical foundation of our approach lies in Bayesian revealed preference studied in micro-economics. All our results are on GitHub and completely reproducible.
  4. Abstract More than three decades ago, Boyd and Balakrishnan established a regularity result for the two-norm of a transfer function at maximizers. Their result extends easily to the statement that the maximum eigenvalue of a univariate real analytic Hermitian matrix family is twice continuously differentiable, with Lipschitz second derivative, at all local maximizers, a property that is useful in several applications that we describe. We also investigate whether this smoothness property extends to max functions more generally. We show that the pointwise maximum of a finite set of $q$-times continuously differentiable univariate functions must have zero derivative at a maximizer for $q=1$, but arbitrarily close to the maximizer, the derivative may not be defined, even when $q=3$ and the maximizer is isolated.
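A quick numerical sanity check of the $q=1$ statement above (our own illustrative sketch, not code from the paper): take two smooth functions that share a common maximizer and form their pointwise maximum; the one-sided difference quotients at the maximizer still shrink to zero, consistent with the zero-derivative claim.

```python
# Illustrative check (hypothetical example, not from the paper): the pointwise
# maximum of finitely many C^1 functions has zero derivative at a maximizer.
f1 = lambda x: 1.0 - x**2        # smooth, maximized at x = 0
f2 = lambda x: 1.0 - x**4        # smooth, also maximized at x = 0
F = lambda x: max(f1(x), f2(x))  # pointwise maximum; x* = 0 is a maximizer

# One-sided difference quotients of F at the maximizer tend to zero.
steps = [1e-2, 1e-3, 1e-4]
right = [(F(h) - F(0.0)) / h for h in steps]   # forward differences
left = [(F(0.0) - F(-h)) / h for h in steps]   # backward differences
print(right, left)
```

Near $x=0$ the active branch is $1-x^4$, so both one-sided quotients are of order $h^3$; by contrast, at the crossing points away from the maximizer the max has genuine kinks, which is where differentiability can fail.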
  5. We study a deep learning inspired formulation for the blind demodulation problem, which is the task of recovering two unknown vectors from their entrywise multiplication. We consider the case where the unknown vectors are in the range of known deep generative models, $G^{(1)}:\mathbb{R}^n\to\mathbb{R}^l$ and $G^{(2)}:\mathbb{R}^p\to\mathbb{R}^l$. In the case when the networks corresponding to the generative models are expansive, the weight matrices are random, and the dimensions of the unknown vectors satisfy $l=\Omega(n^2+p^2)$, up to log factors, we show that the empirical risk objective has a favorable landscape for optimization. That is, the objective function has a descent direction at every point outside of a small neighborhood around four hyperbolic curves. We also characterize the local maximizers of the empirical risk objective and, hence, show that there do not exist any other stationary points outside of these neighborhoods around the four hyperbolic curves and the set of local maximizers. We also implement a gradient descent scheme inspired by the geometry of the landscape of the objective function. In order to converge to a global minimizer, this gradient descent scheme exploits the fact that exactly one of the hyperbolic curves corresponds to the global minimizer, and thus points near this hyperbolic curve have a lower objective value than points close to the other spurious hyperbolic curves. We show that this gradient descent scheme can effectively remove distortions synthetically introduced to the MNIST dataset.
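To make the empirical risk in the last abstract concrete, here is a deliberately simplified sketch (our own toy construction, not the paper's method): it drops the generative-model parametrization entirely and runs plain gradient descent on the unconstrained entrywise-product risk. Without the generative priors the problem has a per-coordinate scaling ambiguity ($h_i \mapsto c\,h_i$, $m_i \mapsto m_i/c$), so only the product is recovered, which is precisely why the paper constrains the unknowns to the ranges of networks.

```python
import numpy as np

rng = np.random.default_rng(0)
l = 50
h_true = rng.standard_normal(l)
m_true = rng.standard_normal(l)
y = h_true * m_true               # observed entrywise product

# Toy empirical risk (generative networks omitted in this sketch):
# risk(h, m) = 0.5 * || h ⊙ m - y ||^2
def risk(h, m):
    return 0.5 * np.sum((h * m - y) ** 2)

# Plain gradient descent in both unknowns from a random initialization.
h = rng.standard_normal(l)
m = rng.standard_normal(l)
lr = 0.05
for _ in range(5000):
    r = h * m - y                 # residual of the entrywise product
    h, m = h - lr * r * m, m - lr * r * h
print(risk(h, m))                 # small if descent reached a global minimizer
```

The paper's scheme is more refined than this: it uses the landscape geometry to steer iterates toward the one hyperbolic curve of global minimizers and away from the spurious ones, which plain descent cannot distinguish in general.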