A linear principal minor polynomial, or lpm polynomial, is a linear combination of principal minors of a symmetric matrix. By restricting to the diagonal, lpm polynomials are in bijection with multiaffine polynomials. We show that this establishes a one-to-one correspondence between homogeneous multiaffine stable polynomials and PSD-stable lpm polynomials. This yields new construction techniques for hyperbolic polynomials and allows us to find an explicit degree-3 hyperbolic polynomial in six variables, some of whose Rayleigh differences are not sums of squares. We further generalize the well-known Fisher–Hadamard and Koteljanskii inequalities from determinants to PSD-stable lpm polynomials. We investigate the relationship between the associated hyperbolicity cones and conjecture a relationship between the eigenvalues of a symmetric matrix and the values of certain lpm polynomials evaluated at that matrix. We refer to this relationship as spectral containment.
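As a small illustrative sketch (our own construction, not taken from the paper), the following computes one lpm polynomial and its diagonal restriction symbolically; the matrix size and the choice of linear combination (the sum of all 2×2 principal minors) are arbitrary choices for illustration.

```python
import sympy as sp
from itertools import combinations

# Build a symbolic 3x3 symmetric matrix with diagonal x1,x2,x3 and
# off-diagonal entries y_ij (hypothetical variable names).
n = 3
x = sp.symbols(f'x1:{n + 1}')
off = {(i, j): sp.Symbol(f'y{i + 1}{j + 1}') for i in range(n) for j in range(i + 1, n)}
X = sp.Matrix(n, n, lambda i, j: x[i] if i == j else off[(min(i, j), max(i, j))])

# An lpm polynomial: here, the sum of all 2x2 principal minors,
# i.e. a linear combination of principal minors with coefficients 1.
lpm = sum(X[list(S), list(S)].det() for S in combinations(range(n), 2))

# Restricting to diagonal matrices (off-diagonal entries -> 0) gives a
# multiaffine polynomial in x1, x2, x3 -- here the elementary symmetric
# polynomial e_2(x1, x2, x3).
diag = lpm.subs({v: 0 for v in off.values()})
print(sp.expand(diag))
```

Each 2×2 principal minor on rows/columns $\{i,j\}$ equals $x_i x_j - y_{ij}^2$, so the diagonal restriction sums to $x_1x_2 + x_1x_3 + x_2x_3$, illustrating the bijection with multiaffine polynomials described above.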
Award ID(s):
 1901950
 NSF-PAR ID:
 10470244
 Publisher / Repository:
 Oxford University Press
 Date Published:
 Journal Name:
 International Mathematics Research Notices
 ISSN:
 1073-7928
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation
More Like this

We answer a question of K. Mulmuley. Efremenko et al. (Math. Comp., 2018) have shown that the method of shifted partial derivatives cannot be used to separate the padded permanent from the determinant. Mulmuley asked if this “no-go” result could be extended to a model without padding. We prove this is indeed the case using the iterated matrix multiplication polynomial. We also provide several examples of polynomials with a maximal space of partial derivatives, including the complete symmetric polynomials. We apply Koszul flattenings to these polynomials to obtain the first explicit sequence of polynomials with symmetric border rank lower bounds higher than the bounds attainable via partial derivatives.
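As an illustrative sketch (our own example, not the paper's computation), the space of partial derivatives of a polynomial can be examined symbolically: below we take the complete symmetric polynomial $h_3$ in three variables, one of the families mentioned above, and compute the dimension of the span of its first-order partial derivatives.

```python
import sympy as sp
from itertools import combinations_with_replacement

# Complete homogeneous symmetric polynomial h_3 in three variables:
# the sum of all degree-3 monomials.
x = sp.symbols('x1 x2 x3')
h3 = sum(sp.Mul(*c) for c in combinations_with_replacement(x, 3))

# First-order partial derivatives span a subspace of the degree-2 forms;
# its dimension is the rank of the coefficient matrix of the derivatives.
polys = [sp.Poly(sp.diff(h3, v), *x) for v in x]
monoms = sorted({m for p in polys for m in p.as_dict()})
mat = sp.Matrix([[p.as_dict().get(m, 0) for m in monoms] for p in polys])
print(mat.rank())  # 3: the three partials are linearly independent
```

The rank is 3, the maximum possible for first-order partials of a polynomial in three variables, consistent with $h_3$ having a maximal space of partial derivatives at this order.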

Abstract Let $k \leq n$ be positive integers, and let $X_n = (x_1, \dots , x_n)$ be a list of $n$ variables. The Boolean product polynomial $B_{n,k}(X_n)$ is the product of the linear forms $\sum _{i \in S} x_i$, where $S$ ranges over all $k$-element subsets of $\{1, 2, \dots , n\}$. We prove that Boolean product polynomials are Schur positive. We do this via a new method of proving Schur positivity using vector bundles and a symmetric function operation we call Chern plethysm. This gives a geometric method for producing a vast array of Schur positive polynomials whose Schur positivity lacks (at present) a combinatorial or representation-theoretic proof. We relate the polynomials $B_{n,k}(X_n)$ for certain $k$ to other combinatorial objects including derangements, positroids, alternating sign matrices, and reverse flagged fillings of a partition shape. We also relate $B_{n,n-1}(X_n)$ to a bigraded action of the symmetric group ${\mathfrak{S}}_n$ on a divergence-free quotient of superspace.
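The definition of $B_{n,k}(X_n)$ is concrete enough to compute directly; the following sketch (our own illustration, not from the paper) builds it symbolically from the product of linear forms over $k$-element subsets.

```python
import sympy as sp
from itertools import combinations

def boolean_product(n, k):
    """Boolean product polynomial B_{n,k}: the product of the linear
    forms sum_{i in S} x_i over all k-element subsets S of {1,...,n}."""
    x = sp.symbols(f'x1:{n + 1}')
    return sp.Mul(*[sum(x[i] for i in S) for S in combinations(range(n), k)])

# B_{3,2} = (x1 + x2)(x1 + x3)(x2 + x3), a symmetric polynomial of degree 3.
print(sp.expand(boolean_product(3, 2)))
```

Expanding small cases like $B_{3,2}$ makes the symmetry of these polynomials visible, though Schur positivity itself is the nontrivial content of the result above.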

We complete Dyson’s dream by cementing the links between symmetric spaces and classical random matrix ensembles. Previous work has focused on a one-to-one correspondence between symmetric spaces and many, but not all, of the classical random matrix ensembles. This work shows that we can completely capture all of the classical random matrix ensembles from Cartan’s symmetric spaces through the use of alternative coordinate systems. In the end, we have to let go of the notion of a one-to-one correspondence. We emphasize that the KAK decomposition traditionally favored by mathematicians is merely one coordinate system on the symmetric space, albeit a beautiful one. However, other matrix factorizations, especially the generalized singular value decomposition from numerical linear algebra, reveal themselves to be perfectly valid coordinate systems, showing that one symmetric space can lead to many classical random matrix theories. We establish the connection between this numerical linear algebra viewpoint and the theory of generalized Cartan decompositions. This, in turn, allows us to produce yet more random matrix theories from a single symmetric space. Yet again, these random matrix theories arise from matrix factorizations, though ones that we are not aware have appeared in the literature.

Abstract We prove a complex polynomial plank covering theorem for not necessarily homogeneous polynomials. As a consequence of this result, we extend the complex plank theorem of Ball to the case of planks that are not necessarily centrally symmetric and not necessarily round. We also prove a weaker version of the spherical polynomial plank covering conjecture for planks of different widths.

Recently, there has been significant progress in understanding the convergence and generalization properties of gradient-based methods for training overparameterized learning models. However, many aspects, including the role of small random initialization and how the various parameters of the model are coupled during gradient-based updates to facilitate good generalization, remain largely mysterious. A series of recent papers have begun to study this role for non-convex formulations of symmetric Positive Semi-Definite (PSD) matrix sensing problems, which involve reconstructing a low-rank PSD matrix from a few linear measurements. The underlying symmetry/PSDness is crucial to existing convergence and generalization guarantees for this problem. In this paper, we study a general overparameterized low-rank matrix sensing problem where one wishes to reconstruct an asymmetric rectangular low-rank matrix from a few linear measurements. We prove that an overparameterized model trained via factorized gradient descent converges to the low-rank matrix generating the measurements. We show that in this setting, factorized gradient descent enjoys two implicit properties: (1) coupling of the trajectory of gradient descent, where the factors are coupled in various ways throughout the gradient update trajectory, and (2) an algorithmic regularization property, where the iterates show a propensity towards low-rank models despite the overparameterized nature of the factorized model. These two implicit properties in turn allow us to show that the gradient descent trajectory from small random initialization moves towards solutions that are both globally optimal and generalize well.
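The setup described above can be sketched numerically. The following is a minimal illustration (not the paper's algorithm or its guarantees): an asymmetric rank-1 matrix is recovered from linear measurements by running factorized gradient descent on an overparameterized factorization from a small random initialization. All dimensions, the step size, and the iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, r, m = 8, 6, 1, 200            # target is d1 x d2 of rank r; m measurements
M = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
A = rng.standard_normal((m, d1, d2))   # Gaussian measurement matrices A_k
y = np.einsum('kij,ij->k', A, M)       # measurements y_k = <A_k, M>

# Overparameterized factorization M ~ F @ G.T with factor rank k > r,
# initialized at small random scale.
k = 4
F = 1e-3 * rng.standard_normal((d1, k))
G = 1e-3 * rng.standard_normal((d2, k))

eta = 0.01
for _ in range(3000):
    resid = np.einsum('kij,ij->k', A, F @ G.T) - y
    grad = np.einsum('k,kij->ij', resid, A) / m   # gradient w.r.t. the product F @ G.T
    # Chain rule: simultaneous updates of both factors.
    F, G = F - eta * grad @ G, G - eta * grad.T @ F

rel_err = np.linalg.norm(F @ G.T - M) / np.linalg.norm(M)
print(rel_err)  # small relative error: the low-rank target is recovered
```

Even though the factor rank exceeds the true rank, the small initialization keeps the extra directions negligible, so the iterates stay close to low-rank models; this is a toy instance of the algorithmic regularization phenomenon the abstract describes.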