This paper is concerned with developing an efficient numerical algorithm for the fast implementation of the sparse grid method for computing the $d$-dimensional integral of a given function. The new algorithm, called the MDI-SG (multilevel dimension iteration sparse grid) method, implements the sparse grid method based on a dimension iteration/reduction procedure. It does not need to store the integration points, nor does it compute the function values independently at each integration point; instead, it reuses function evaluations as much as possible by performing them at all integration points in a cluster and iteratively along coordinate directions. It is shown numerically that the computational complexity (in terms of CPU time) of the proposed MDI-SG method is of polynomial order $O(d^3 N^b)$ ($b \le 2$) or better, compared to the order $O(N(\log N)^{d-1})$, exponential in $d$, for the standard sparse grid method, where $N$ denotes the maximum number of integration points in each coordinate direction. As a result, the proposed MDI-SG method effectively circumvents the curse of dimensionality suffered by the standard sparse grid method for high-dimensional numerical integration.
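As a toy illustration of the dimension iteration/reduction idea (not the MDI-SG algorithm itself, which additionally clusters and reuses function evaluations on a sparse grid), the following Python sketch computes a full tensor-product quadrature by fixing one coordinate at a time and recursing on the remaining dimensions. The midpoint rule and all names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def iterated_quadrature(f, nodes, weights, d, fixed=()):
    """Integrate f over [0,1]^d with a tensor-product rule by dimension
    iteration: fix the last free coordinate at each 1-D node and recurse
    on the remaining (d-1)-dimensional integral.  A toy sketch of the
    dimension-reduction idea only; the MDI-SG method further clusters
    and reuses function evaluations on a *sparse* grid."""
    if d == 0:
        return f(np.array(fixed))
    return sum(w * iterated_quadrature(f, nodes, weights, d - 1, fixed + (x,))
               for x, w in zip(nodes, weights))

# Example: midpoint rule with N = 4 nodes per direction in d = 3;
# the exact value of the integral of exp(x1 + x2 + x3) is (e - 1)^3.
N, d = 4, 3
nodes = (np.arange(N) + 0.5) / N
weights = np.full(N, 1.0 / N)
approx = iterated_quadrature(lambda x: np.exp(x.sum()), nodes, weights, d)
```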
Sparsity of Integral Points on Moduli Spaces of Varieties
Abstract: Let $X$ be a quasi-projective variety over a number field, admitting (after passage to $\mathbb{C}$) a geometric variation of Hodge structure whose period mapping has zero-dimensional fibers. Then the integral points of $X$ are sparse: the number of such points of height $\le B$ grows slower than any positive power of $B$. For example, homogeneous integral polynomials in a fixed number of variables and degree, with discriminant divisible only by a fixed set of primes, are sparse when considered up to integral linear substitutions.
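In symbols, the sparsity conclusion reads as follows; the height function $H$ and the ring of $S$-integers $\mathcal{O}_{F,S}$ are assumed notation for this restatement, not taken verbatim from the abstract:

```latex
% Sparsity: the number of integral points of height at most B
% grows slower than any positive power of B.
\#\{\, x \in X(\mathcal{O}_{F,S}) : H(x) \le B \,\}
  = O_{\epsilon}\!\left(B^{\epsilon}\right)
  \qquad \text{for every } \epsilon > 0.
```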
- Award ID(s): 2001200
- PAR ID: 10425471
- Date Published:
- Journal Name: International Mathematics Research Notices
- ISSN: 1073-7928
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Karshon, Yael; Melrose, Richard; Uhlmann, Gunther; Uribe, Alejandro (Ed.). Hessenberg varieties H(X,H) form a class of subvarieties of the flag variety G/B, parameterized by an operator X and certain subspaces H of the Lie algebra of G. We identify several families of Hessenberg varieties in type A_{n−1} that are T-stable subvarieties of G/B, as well as families that are invariant under a subtorus K of T. In particular, these varieties are candidates for the use of equivariant methods to study their geometry. Indeed, we are able to show that some of these varieties are unions of Schubert varieties, while others cannot be such unions. Among the T-stable Hessenberg varieties, we identify several that are GKM spaces, meaning T acts with isolated fixed points and a finite number of one-dimensional orbits, though we also show that not all Hessenberg varieties with torus actions and finitely many fixed points are GKM. We conclude with a series of open questions about Hessenberg varieties, both in type A_{n−1} and in general Lie type.
- Abstract: We study integral points on varieties with infinite étale fundamental groups. More precisely, for a number field $F$ and $X/F$ a smooth projective variety, we prove that for any geometrically Galois cover $\varphi \colon Y \to X$ of degree at least $2\dim(X)^2$, there exists an ample line bundle $\mathscr{L}$ on $Y$ such that for a general member $D$ of the complete linear system $|\mathscr{L}|$, $D$ is geometrically irreducible and any set of $\varphi(D)$-integral points on $X$ is finite. We apply this result to varieties with infinite étale fundamental group to give new examples of irreducible, ample divisors on varieties for which finiteness of integral points is provable.
- Equilibria, or fixed points, play an important role in dynamical systems across various domains, yet finding them can be computationally challenging. Here, we show how to efficiently compute all equilibrium points of discrete-valued, discrete-time systems on sparse networks. Using graph partitioning, we recursively decompose the original problem into a set of smaller, simpler problems that are easy to compute, and whose solutions combine to yield the full equilibrium set. This makes it possible to find the fixed points of systems on arbitrarily large networks meeting certain criteria. This approach can also be used without computing the full equilibrium set, which may grow very large in some cases. For example, one can use this method to check the existence and total number of equilibria, or to find equilibria that are optimal with respect to a given cost function. We demonstrate the potential capabilities of this approach with examples in two scientific domains: computing the number of fixed points in brain networks and finding the minimal energy conformations of lattice-based protein folding models. (A minimal sketch of this divide-and-conquer idea appears after this list.)
- We explore algorithms and limitations for sparse optimization problems such as sparse linear regression and robust linear regression. The goal of the sparse linear regression problem is to identify a small number of key features, while the goal of the robust linear regression problem is to identify a small number of erroneous measurements. Specifically, the sparse linear regression problem seeks a $k$-sparse vector $x \in \mathbb{R}^d$ to minimize $\|Ax - b\|_2$, given an input matrix $A \in \mathbb{R}^{n \times d}$ and a target vector $b \in \mathbb{R}^n$, while the robust linear regression problem seeks a set $S$ that ignores at most $k$ rows and a vector $x$ to minimize $\|(Ax - b)_S\|_2$. We first show bicriteria NP-hardness of approximation for robust regression, building on the work of [OWZ15], which implies a similar result for sparse regression. We further show fine-grained hardness of robust regression through a reduction from the minimum-weight $k$-clique conjecture. On the positive side, we give an algorithm for robust regression that achieves arbitrarily accurate additive error and uses runtime that closely matches the lower bound from the fine-grained hardness result, as well as an algorithm for sparse regression with similar runtime. Both our upper and lower bounds rely on a general reduction from robust linear regression to sparse regression that we introduce. Our algorithms, inspired by the 3SUM problem, use approximate nearest neighbor data structures and may be of independent interest for solving sparse optimization problems. For instance, we demonstrate that our techniques can also be used for the well-studied sparse PCA problem. (A brute-force illustration of the sparse regression objective also appears after this list.)
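The recursive decomposition described in the third item above can be made concrete with a small sketch. The Python below is a minimal illustration of the conditioning idea under stated assumptions, not the paper's algorithm: it assumes a Boolean network, a vertex separator `sep` whose removal disconnects `part_a` from `part_b`, and a hypothetical `update(i, state)` interface; all names are illustrative.

```python
from itertools import product

def fixed_points_partitioned(update, part_a, part_b, sep):
    """Enumerate fixed points x_i = update(i, x) of a Boolean network by
    conditioning on a vertex separator `sep` that disconnects `part_a`
    from `part_b`, so each block can be solved independently."""
    solutions = []
    for sep_vals in product([0, 1], repeat=len(sep)):
        fixed_sep = dict(zip(sep, sep_vals))
        # Solve each block on its own: a node in a block only reads
        # values inside its block or on the separator.
        block_sols = []
        for block in (part_a, part_b):
            sols = []
            for vals in product([0, 1], repeat=len(block)):
                state = {**fixed_sep, **dict(zip(block, vals))}
                if all(update(i, state) == state[i] for i in block):
                    sols.append(dict(zip(block, vals)))
            block_sols.append(sols)
        # Combine block solutions; separator nodes may read from both
        # blocks, so their constraints are checked last.
        for sol_a, sol_b in product(*block_sols):
            state = {**fixed_sep, **sol_a, **sol_b}
            if all(update(i, state) == state[i] for i in sep):
                solutions.append(state)
    return solutions

# Example: a 4-node chain 0-1-2-3 where each node copies its left
# neighbor (node 0 keeps its value); the fixed points are the two
# constant states.
rules = {0: lambda s: s[0], 1: lambda s: s[0],
         2: lambda s: s[1], 3: lambda s: s[2]}
update = lambda i, s: rules[i](s)
print(fixed_points_partitioned(update, part_a=[0], part_b=[2, 3], sep=[1]))
```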
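To make the sparse regression objective from the last item concrete, here is a brute-force baseline in Python: it enumerates every size-$k$ support and solves the restricted least-squares problem, which is exponential in $d$. This is only the problem statement made executable, not the nearest-neighbor-based algorithm of the paper; all names are illustrative.

```python
import numpy as np
from itertools import combinations

def sparse_regression_bruteforce(A, b, k):
    """Exhaustive baseline for k-sparse least squares: try every size-k
    support, solve least squares on those columns, keep the best."""
    n, d = A.shape
    best_x, best_err = None, np.inf
    for support in combinations(range(d), k):
        cols = list(support)
        # Least squares restricted to the chosen columns.
        x_s, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
        err = np.linalg.norm(A[:, cols] @ x_s - b)
        if err < best_err:
            x = np.zeros(d)
            x[cols] = x_s
            best_x, best_err = x, err
    return best_x, best_err

# Example usage on synthetic data (illustrative only).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 6))
x_true = np.zeros(6)
x_true[[1, 4]] = [2.0, -1.0]
b = A @ x_true
x_hat, err = sparse_regression_bruteforce(A, b, k=2)
```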