Title: The Lambda Invariants at CM Points
Abstract: In this paper, we show that $$\lambda(z_1) - \lambda(z_2)$$, $$\lambda(z_1)$$, and $$1 - \lambda(z_1)$$ are all Borcherds products on $$X(2) \times X(2)$$. We then use the big CM value formula of Bruinier, Kudla, and Yang to give explicit factorization formulas for the norms of $$\lambda(\frac{d+\sqrt{d}}{2})$$, $$1-\lambda(\frac{d+\sqrt{d}}{2})$$, and $$\lambda(\frac{d_1+\sqrt{d_1}}{2}) - \lambda(\frac{d_2+\sqrt{d_2}}{2})$$, with the latter under the condition $$(d_1, d_2)=1$$. Finally, we use these results to show that $$\lambda(\frac{d+\sqrt{d}}{2})$$ is always an algebraic integer and can easily be used to construct units in the ray class field of $${\mathbb{Q}}(\sqrt{d})$$ of modulus $$2$$. In the process, we also give explicit formulas for a whole family of local Whittaker functions, which are of independent interest.
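The values in question can be checked numerically: the classical modular $$\lambda$$-function satisfies $$\lambda(\tau) = \theta_2(0, q)^4 / \theta_3(0, q)^4$$ with nome $$q = e^{\pi i \tau}$$. Below is a minimal Python sketch (illustrative only, not from the paper; it assumes the mpmath library, and $$d = -7$$ is an arbitrary imaginary quadratic discriminant chosen for illustration):

    # Minimal numerical sketch (illustrative, not from the paper): evaluate the
    # modular lambda function at the CM point (d + sqrt(d))/2 via Jacobi theta
    # functions, lambda(tau) = theta_2(0,q)^4 / theta_3(0,q)^4, q = exp(pi*i*tau).
    import mpmath as mp

    mp.mp.dps = 40                      # high working precision
    d = -7                              # illustrative imaginary quadratic discriminant
    tau = (d + mp.sqrt(d)) / 2          # CM point; Im(tau) > 0 since d < 0
    q = mp.exp(1j * mp.pi * tau)        # nome, |q| < 1

    lam = (mp.jtheta(2, 0, q) / mp.jtheta(3, 0, q)) ** 4
    print(lam)                          # lambda(tau)
    print(1 - lam)                      # 1 - lambda(tau)

Feeding such a high-precision value to an integer-relation method (e.g., PSLQ) produces a candidate minimal polynomial, which is one way to observe numerically the algebraic integrality established in the paper.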
Award ID(s):
1762289
PAR ID:
10124861
Author(s) / Creator(s):
Publisher / Repository:
Oxford University Press
Date Published:
Journal Name:
International Mathematics Research Notices
ISSN:
1073-7928
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Background: Many bioinformatics applications involve bucketing a set of sequences, where each sequence is allowed to be assigned to multiple buckets. To achieve both high sensitivity and precision, bucketing methods should assign similar sequences to the same bucket while assigning dissimilar sequences to distinct buckets. Existing $$k$$-mer-based bucketing methods have been efficient in processing sequencing data with low error rates, but suffer much reduced sensitivity on data with high error rates. Locality-sensitive hashing (LSH) schemes are able to mitigate this issue by tolerating the edits in similar sequences, but state-of-the-art methods still have large gaps. Results: In this paper, we generalize the LSH function by allowing it to hash one sequence into multiple buckets. Formally, a bucketing function, which maps a sequence (of fixed length) into a subset of buckets, is defined to be $$(d_1, d_2)$$-sensitive if any two sequences within an edit distance of $$d_1$$ are mapped into at least one shared bucket, and any two sequences with edit distance at least $$d_2$$ are mapped into disjoint subsets of buckets. We construct locality-sensitive bucketing (LSB) functions with a variety of values of $$(d_1, d_2)$$ and analyze their efficiency with respect to the total number of buckets needed as well as the number of buckets that a specific sequence is mapped to. We also prove lower bounds on these two parameters in different settings and show that some of our constructed LSB functions are optimal. Conclusion: These results lay the theoretical foundations for their practical use in analyzing sequences with high error rates, while also providing insight into the hardness of designing ungapped LSH functions.
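As a toy illustration of a bucketing function that hashes one sequence into multiple buckets (a sketch of the general idea only, not one of the paper's constructions), the following Python snippet buckets a fixed-length sequence by its one-deletion neighborhood; two sequences that differ by a single substitution are then guaranteed at least one shared bucket:

    # Toy bucketing function (illustration only): bucket a fixed-length sequence
    # by its one-deletion neighborhood, so each sequence lands in multiple buckets.
    def deletion_buckets(s: str) -> set:
        """Buckets named by the strings obtained from s by deleting one character."""
        return {s[:i] + s[i + 1:] for i in range(len(s))}

    # Two length-n sequences differing by one substitution share a bucket:
    # deleting the mismatched position from each yields the same string.
    a, b = "ACGTAC", "ACCTAC"                     # substitution at position 2
    assert deletion_buckets(a) & deletion_buckets(b)

    # A very different sequence lands in a disjoint set of buckets.
    c = "TTTTTT"
    assert not (deletion_buckets(a) & deletion_buckets(c))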
  2. The research problem of how to use a high-speed circuit switch, typically an optical switch, to most effectively boost the switching capacity of a datacenter network has been extensively studied. In this work, we focus on a different but related research problem that arises when multiple (say $$s$$) parallel circuit switches are used: How to best split a switching workload $$D$$ into sub-workloads $$D_1, D_2, \ldots, D_s$$, and give them to the $$s$$ switches as their respective workloads, so that the overall makespan of the parallel switching system is minimized? Computing such an optimal split is unfortunately NP-hard, since the circuit/optical switch incurs a nontrivial reconfiguration delay when the switch configuration has to change. In this work, we formulate a weaker form of this problem: How to minimize the total number of nonzero entries in $$D_1, D_2, \ldots, D_s$$ (so that the overall reconfiguration cost can be kept low), under the constraint that every row or column sum of $$D$$ (which corresponds to the workload imposed on a sending or receiving rack, respectively) is evenly split? Although this weaker problem is still NP-hard, we are able to design LESS, an approximation algorithm that has a low approximation ratio of only $$1+\epsilon$$ in practice and a low computational complexity of only $$O(m^2)$$, where $$m = \|D\|_0$$ is the number of nonzero entries in $$D$$. Our simulation studies show that LESS results in excellent overall makespan performance under realistic datacenter traffic workloads and parameter settings.
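For concreteness, here is a small Python sketch of the constraint and objective (illustrative only; this is the trivial baseline, not the LESS algorithm): the split $$D_i = D/s$$ divides every row and column sum of $$D$$ evenly but replicates every nonzero entry $$s$$ times, which is the worst possible value of the objective that LESS minimizes.

    import numpy as np

    def trivial_even_split(D, s):
        """Baseline split D = D_1 + ... + D_s with D_i = D/s: every row and column
        sum of D is divided evenly, but each nonzero of D appears in all s parts,
        so the total nonzero count is the worst case, s * ||D||_0."""
        return [D / s for _ in range(s)]

    D = np.array([[4.0, 0.0],
                  [2.0, 6.0]])
    parts = trivial_even_split(D, s=2)
    assert np.allclose(sum(parts), D)               # the parts sum back to D
    print(sum(np.count_nonzero(P) for P in parts))  # 6 = 2 * ||D||_0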
  3. Abstract: Suppose that $$\Sigma^{n} \subset \mathbb{S}^{n+1}$$ is a closed embedded minimal hypersurface. We prove that the first non-zero eigenvalue $$\lambda_{1}$$ of the induced Laplace–Beltrami operator on $$\Sigma$$ satisfies $$\lambda_{1} \geq \frac{n}{2} + a_{n}(\Lambda^{6} + b_{n})^{-1}$$, where $$a_{n}$$ and $$b_{n}$$ are explicit dimensional constants and $$\Lambda$$ is an upper bound for the length of the second fundamental form of $$\Sigma$$. This provides the first explicitly computable improvement on Choi and Wang's lower bound $$\lambda_{1} \geq \frac{n}{2}$$ without any further assumptions on $$\Sigma$$.
  4. In statistics and machine learning, we are interested in the eigenvectors (or singular vectors) of certain matrices (e.g., covariance matrices or data matrices). However, those matrices are usually perturbed by noise or statistical error, arising either from random sampling or from structural patterns. The Davis-Kahan $$\sin \theta$$ theorem is often used to bound the difference between the eigenvectors of a matrix $$A$$ and those of a perturbed matrix $$\widetilde{A} = A + E$$, in terms of the $$\ell_2$$ norm. In this paper, we prove that when $$A$$ is a low-rank and incoherent matrix, the $$\ell_{\infty}$$ norm perturbation bound on singular vectors (or eigenvectors in the symmetric case) is smaller by a factor of $$\sqrt{d_1}$$ or $$\sqrt{d_2}$$ for left and right vectors, respectively, where $$d_1$$ and $$d_2$$ are the matrix dimensions. The power of this new perturbation result is shown in robust covariance estimation, particularly when the random variables have heavy tails. There, we propose new robust covariance estimators and establish their asymptotic properties using the newly developed perturbation bound. Our theoretical results are verified through extensive numerical experiments.
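A small numerical illustration of the gap between the two norms (a Python sketch under simplifying assumptions not taken from the paper: a rank-one, maximally incoherent $$A$$, Gaussian noise, and arbitrary constants):

    import numpy as np

    rng = np.random.default_rng(0)
    d1 = d2 = 1000

    # Rank-one, maximally incoherent A: singular vectors with entries ~ 1/sqrt(d).
    u = np.ones(d1) / np.sqrt(d1)
    v = np.ones(d2) / np.sqrt(d2)
    A = 10.0 * np.outer(u, v)
    E = rng.standard_normal((d1, d2)) / np.sqrt(d1 * d2)   # small Gaussian noise

    u_hat = np.linalg.svd(A + E)[0][:, 0]   # top left singular vector of A + E
    u_hat *= np.sign(u_hat @ u)             # fix the sign ambiguity
    err = u_hat - u
    print(np.linalg.norm(err))              # l_2 error
    print(np.linalg.norm(err, np.inf))      # l_inf error: roughly sqrt(d1) smaller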
  5. An $$\ell_p$$ oblivious subspace embedding is a distribution over $$r \times n$$ matrices $$\Pi$$ such that for any fixed $$n \times d$$ matrix $$A$$, $$\Pr_{\Pi}[\textrm{for all } x,\ \Vert Ax\Vert_p \le \Vert \Pi Ax\Vert_p \le \kappa \Vert Ax\Vert_p] \ge 9/10,$$ where $$r$$ is the dimension of the embedding, $$\kappa$$ is the distortion of the embedding, and for an $$n$$-dimensional vector $$y$$, $$\Vert y\Vert_p = (\sum_{i=1}^n |y_i|^p)^{1/p}$$ is the $$\ell_p$$-norm. Another important property is the sparsity of $$\Pi$$, that is, the maximum number of non-zero entries per column, as this determines the running time of computing $$\Pi A$$. While for $$p = 2$$ there are nearly optimal tradeoffs in terms of the dimension, distortion, and sparsity, for the important case of $$1 \le p < 2$$, much less was known. In this article, we obtain nearly optimal tradeoffs for $$\ell_1$$ oblivious subspace embeddings, as well as new tradeoffs for $$1 < p < 2$$. Our main results are as follows: (1) We show for every $$1 \le p < 2$$, any oblivious subspace embedding with dimension $$r$$ has distortion $$\kappa = \Omega\left(\frac{1}{\left(\frac{1}{d}\right)^{1/p} \log^{2/p} r + \left(\frac{r}{n}\right)^{1/p - 1/2}}\right).$$ When $$r = \operatorname{poly}(d) \ll n$$ in applications, this gives a $$\kappa = \Omega(d^{1/p} \log^{-2/p} d)$$ lower bound, and shows the oblivious subspace embedding of Sohler and Woodruff (STOC, 2011) for $$p = 1$$ is optimal up to $$\operatorname{poly}(\log d)$$ factors. (2) We give sparse oblivious subspace embeddings for every $$1 \le p < 2$$. Importantly, for $$p = 1$$, we achieve $$r = O(d \log d)$$, $$\kappa = O(d \log d)$$, and $$s = O(\log d)$$ non-zero entries per column. The best previous construction with $$s \le \operatorname{poly}(\log d)$$ is due to Woodruff and Zhang (COLT, 2013), giving $$\kappa = \Omega(d^2 \operatorname{poly}(\log d))$$ or $$\kappa = \Omega(d^{3/2} \sqrt{\log n} \cdot \operatorname{poly}(\log d))$$ and $$r \ge d \cdot \operatorname{poly}(\log d)$$; in contrast, our $$r = O(d \log d)$$ and $$\kappa = O(d \log d)$$ are optimal up to $$\operatorname{poly}(\log d)$$ factors even for dense matrices. We also give (1) $$\ell_p$$ oblivious subspace embeddings with an expected $$1+\varepsilon$$ number of non-zero entries per column for arbitrarily small $$\varepsilon > 0$$, and (2) the first oblivious subspace embeddings for $$1 \le p < 2$$ with $$O(1)$$-distortion and dimension independent of $$n$$. Oblivious subspace embeddings are crucial for distributed and streaming environments, as well as for entrywise $$\ell_p$$ low-rank approximation. Our results give improved algorithms for these applications.
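To make the sparsity parameter concrete, the Python snippet below builds a CountSketch-style matrix, the classic sparse construction for $$p = 2$$; it illustrates only what per-column sparsity $$s = 1$$ means and why $$\Pi A$$ is cheap to compute. The article's $$\ell_1$$ embeddings are different (prior $$\ell_1$$ constructions use, e.g., Cauchy-distributed entries), so this is an illustrative sketch, not the article's construction.

    import numpy as np

    def countsketch(n, r, rng):
        """r x n sketching matrix with exactly one nonzero (+1 or -1) per column:
        each input coordinate is hashed to one row with a random sign."""
        Pi = np.zeros((r, n))
        Pi[rng.integers(0, r, size=n), np.arange(n)] = rng.choice([-1.0, 1.0], size=n)
        return Pi

    rng = np.random.default_rng(1)
    n, d, r = 10000, 5, 250
    A = rng.standard_normal((n, d))
    Pi = countsketch(n, r, rng)
    x = rng.standard_normal(d)
    # For p = 2 (where CountSketch applies), ||Pi A x|| approximates ||A x||.
    print(np.linalg.norm(A @ x), np.linalg.norm(Pi @ A @ x))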