This content will become publicly available on July 1, 2026

Title: New lower bounds for the Schur-Siegel-Smyth trace problem
We derive and implement a new way to find lower bounds on the smallest limiting trace-to-degree ratio of totally positive algebraic integers and improve the best previously known bound to 1.80203. Our method adds new constraints to Smyth's linear programming method in order to decrease the number of variables required in the new problem of interest. This allows for faster convergence, recovering Schur's bound in the simplest case and Siegel's bound in the second-simplest case of our new family of bounds. We also prove the existence of a unique optimal solution to the newly phrased problem and express that optimal solution in terms of polynomials. Lastly, we solve the new problem numerically with a gradient descent algorithm to attain the new bound 1.80203.
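For orientation, the quantity in question and the inequality behind Smyth's linear programming method can be stated as follows; this is standard background sketched from the literature, not an excerpt from the article, and the article's new constraints modify this basic setup. For a totally positive algebraic integer $$\alpha$$ of degree $$d$$ with conjugates $$\alpha_1,\dots,\alpha_d$$ (all real and positive), the absolute trace is $$\operatorname{tr}(\alpha)/d=(\alpha_1+\cdots+\alpha_d)/d$$, and the Schur-Siegel-Smyth trace problem asks whether, for every $$\varepsilon>0$$, all but finitely many such $$\alpha$$ satisfy $$\operatorname{tr}(\alpha)/d\ge 2-\varepsilon$$. Smyth's method seeks integer polynomials $$Q_1,\dots,Q_N$$ and coefficients $$c_j\ge 0$$ such that $$x \ge \lambda + \sum_{j=1}^{N} c_j \log\lvert Q_j(x)\rvert$$ for all $$x>0$$; summing this inequality over the conjugates of $$\alpha$$ and using that the resultant of the minimal polynomial of $$\alpha$$ with each $$Q_j$$ is a nonzero integer yields $$\operatorname{tr}(\alpha)/d\ge\lambda$$ for every totally positive $$\alpha$$ that is not a root of any $$Q_j$$.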
Award ID(s):
2401242
PAR ID:
10596082
Author(s) / Creator(s):
; ;
Publisher / Repository:
NEW LOWER BOUNDS FOR THE SCHUR-SIEGEL-SMYTH TRACE PROBLEM
Date Published:
Journal Name:
Mathematics of Computation
Volume:
94
Issue:
354
ISSN:
0025-5718
Page Range / eLocation ID:
2005 to 2040
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Summary: A method is developed to numerically solve chance-constrained optimal control problems. The chance constraints are reformulated as nonlinear constraints that retain the probability properties of the original constraint. The reformulation transforms the chance-constrained optimal control problem into a deterministic optimal control problem that can be solved numerically. The new method developed in this paper approximates the chance constraints using Markov chain Monte Carlo sampling and kernel density estimators whose kernels have integral functions that bound the indicator function. The nonlinear constraints resulting from the application of kernel density estimators are designed with bounds that do not violate the bounds of the original chance constraint. The method is tested on a nontrivial chance-constrained modification of a soft lunar landing optimal control problem, and the results are compared with results obtained using a conservative deterministic formulation of the optimal control problem. Additionally, the method is tested on a complex chance-constrained unmanned aerial vehicle problem. The results show that this new method can be used to reliably solve chance-constrained optimal control problems.
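As an illustration of the indicator-bounding idea described above, here is a minimal sketch (not the paper's implementation; the smoothstep surrogate, the function names, and the constants eps and h are assumptions made for the example):

import numpy as np

def chance_constraint_residual(x, samples, g, eps=0.05, h=0.05):
    """Smoothed, sampled form of the chance constraint P[g(x, xi) <= 0] >= 1 - eps.

    samples: uncertainty realizations xi_1, ..., xi_M (e.g. drawn by MCMC).
    The smoothstep surrogate below is pointwise <= the indicator 1{g <= 0},
    so requiring its sample mean to reach 1 - eps cannot overstate how often
    the original inequality holds on the drawn samples.
    Returns c(x) with the convention c(x) >= 0  <=>  constraint satisfied.
    """
    vals = np.array([g(x, xi) for xi in samples])
    t = np.clip(-vals / h, 0.0, 1.0)           # 0 when g >= 0, 1 when g <= -h
    smooth = 3.0 * t**2 - 2.0 * t**3           # C^1 smoothstep, pointwise <= 1{vals <= 0}
    return smooth.mean() - (1.0 - eps)         # ordinary inequality for an NLP solver

This residual can then be handed to a nonlinear programming transcription of the optimal control problem as a deterministic inequality constraint.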
  2. Abstract: We develop a convergence analysis for the simplest finite element method for a model linear-quadratic elliptic distributed optimal control problem with pointwise control and state constraints under minimal assumptions on the constraint functions. We then derive the generalized Karush–Kuhn–Tucker conditions for the solution of the optimal control problem from the convergence results of the finite element method and from the Karush–Kuhn–Tucker conditions for the solutions of the discrete problems.
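A typical model problem of the kind referenced above, written out for concreteness (illustrative; the exact functional, constants, and constraint bounds used in the paper may differ): minimize $$\tfrac12\lVert y-y_d\rVert_{L^2(\Omega)}^2+\tfrac{\beta}{2}\lVert u\rVert_{L^2(\Omega)}^2$$ over state-control pairs $$(y,u)$$ subject to $$-\Delta y=u$$ in $$\Omega$$ with $$y=0$$ on $$\partial\Omega$$, together with the pointwise constraints $$u_a\le u\le u_b$$ and $$y\le\psi$$ almost everywhere in $$\Omega$$. The discrete problems are obtained by restricting $$y$$ and $$u$$ to finite element spaces, and the generalized Karush–Kuhn–Tucker conditions couple the optimal state, control, adjoint state, and the multipliers (in general measures) associated with the pointwise constraints.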
  3. This paper studies a two-user state-dependent Gaussian multiple-access channel with state noncausally known at one encoder. Two new outer bounds on the capacity region are derived, which improve uniformly over the best known (genie-aided) outer bound. The two corner points of the capacity region as well as the sum rate capacity are established, and it is shown that a single-letter solution is adequate to achieve both the corner points and the sum rate capacity. Furthermore, the full capacity region is characterized in situations in which the sum rate capacity is equal to the capacity of the helper problem. The proof exploits the optimal-transportation idea of Polyanskiy and Wu (which was used previously to establish an outer bound on the capacity region of the interference channel) and the worst-case Gaussian noise result for the case in which the input and the noise are dependent.
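For reference, and to fix notation for the rate region being bounded, the capacity region of the corresponding two-user Gaussian multiple-access channel without state (input power constraints $$P_1,P_2$$, noise variance $$N$$) is the standard pentagon $$R_1\le\tfrac12\log\bigl(1+\tfrac{P_1}{N}\bigr)$$, $$R_2\le\tfrac12\log\bigl(1+\tfrac{P_2}{N}\bigr)$$, $$R_1+R_2\le\tfrac12\log\bigl(1+\tfrac{P_1+P_2}{N}\bigr)$$. This baseline is textbook material rather than a result of the paper, included here only as the no-state reference point against which the state-dependent outer bounds can be read.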
  4. In this work, we describe a generic approach to show convergence with high probability for both stochastic convex and non-convex optimization with sub-Gaussian noise. In previous works for convex optimization, either the convergence is only in expectation or the bound depends on the diameter of the domain. Instead, we show high probability convergence with bounds depending on the initial distance to the optimal solution. The algorithms use step sizes analogous to the standard settings and are universal to Lipschitz functions, smooth functions, and their linear combinations. The method can be applied to the non-convex case. We demonstrate an $$O((1+\sigma^{2}\log(1/\delta))/T+\sigma/\sqrt{T})$$ convergence rate when the number of iterations $$T$$ is known and an $$O((1+\sigma^{2}\log(T/\delta))/\sqrt{T})$$ convergence rate when $$T$$ is unknown for SGD, where $$1-\delta$$ is the desired success probability. These bounds improve over existing bounds in the literature. We also revisit AdaGrad-Norm (Ward et al., 2019) and show a new analysis to obtain a high probability bound that does not require the bounded gradient assumption made in previous works. The full version of our paper contains results for the standard per-coordinate AdaGrad. 
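A minimal sketch of the AdaGrad-Norm update analyzed above (illustrative only: the constants eta and b0 and the noisy quadratic test problem are assumptions made for the example, not choices from the paper):

import numpy as np

def adagrad_norm(grad, x0, num_steps, eta=0.1, b0=0.01, seed=0):
    """AdaGrad-Norm: one scalar adaptive step size eta / sqrt(b0^2 + sum_s ||g_s||^2)
    shared by all coordinates, where g_s are the stochastic gradients seen so far."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    accum = b0 ** 2
    for _ in range(num_steps):
        g = grad(x, rng)
        accum += float(np.dot(g, g))        # accumulate squared gradient norms
        x = x - eta / np.sqrt(accum) * g    # scalar adaptive step
    return x

# Example: noisy quadratic f(x) = 0.5 * ||x||^2 with Gaussian (hence sub-Gaussian) gradient noise.
noisy_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
x_hat = adagrad_norm(noisy_grad, x0=np.ones(10), num_steps=5000)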