 Award ID(s):
 1650733
 Publication Date:
 NSFPAR ID:
 10026315
 Journal Name:
 Annual Symposium on Foundations of Computer Science
 ISSN:
 02725428
 Sponsoring Org:
 National Science Foundation
More Like this

Tauman Kalai, Yael (Ed.) We introduce and study the communication complexity of computing the inner product of two vectors, where the input is restricted w.r.t. a norm N on the space ℝⁿ. Here, Alice and Bob hold two vectors v, u such that ‖v‖_N ≤ 1 and ‖u‖_{N^*} ≤ 1, where N^* is the dual norm. The goal is to compute their inner product ⟨v,u⟩ up to an ε additive term. The problem is denoted by IP_N and generalizes important previously studied problems, such as: (1) computing the expectation 𝔼_{x∼𝒟}[f(x)] when Alice holds 𝒟 and Bob holds f, which is equivalent to IP_{𝓁₁}; (2) computing vᵀAv where Alice has a symmetric matrix A with bounded operator norm (denoted S_∞) and Bob has a vector v with ‖v‖₂ = 1; this problem is complete for quantum communication complexity and is equivalent to IP_{S_∞}. We systematically study IP_N, showing the following results, near tight in most cases: (1) for any symmetric norm N, given ‖v‖_N ≤ 1 and ‖u‖_{N^*} ≤ 1, there is a randomized protocol using 𝒪̃(ε^{-6} log n) bits of communication that returns a value in ⟨u,v⟩ ± ε with probability 2/3; we denote this by ℛ_{ε,1/3}(IP_N) ≤ 𝒪̃(ε^{-6} log n). In a special case where N …
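The ℓ₁ special case above (Alice holds a distribution 𝒟, Bob holds a function f bounded by 1 in absolute value) admits a classic one-way sampling protocol: Alice sends ~1/ε² indices drawn from 𝒟 and Bob averages f over them. The sketch below is illustrative only; the function name, the sample-count constant, and the interface are assumptions, not the paper's protocol.

```python
import random

def estimate_expectation(dist, f, eps, trials=None):
    """One-way sampling sketch for the IP_{l1} special case.

    Alice holds a distribution `dist` (list of probabilities over n items);
    Bob holds a function f: index -> [-1, 1].  Alice samples O(1/eps^2)
    indices from her distribution and sends them; Bob averages f over the
    received indices, giving E_{x~dist}[f(x)] up to an eps additive term
    with constant probability (by a Chebyshev-style argument)."""
    n = len(dist)
    if trials is None:
        trials = int(4 / eps**2)  # illustrative constant, not a tight bound
    # Alice's side: sample indices from her distribution.
    samples = random.choices(range(n), weights=dist, k=trials)
    # Bob's side: average his bounded function over the received indices.
    return sum(f(i) for i in samples) / trials
```

For instance, with the uniform distribution and f ≡ 1 the estimate is exactly 1, and the communication is trials · log n bits of index data, matching the log n factor in the stated bound.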

Embedding properties of network realizations of dissipative reduced order models. Jörn Zimmerling, Mikhail Zaslavsky, Rob Remis, Shari Moskow, Alexander Mamonov, Murthy Guddati, Vladimir Druskin, and Liliana Borcea. Mathematical Sciences Department, Worcester Polytechnic Institute. https://www.wpi.edu/people/vdruskin Abstract: Realizations of reduced order models of passive SISO or MIMO LTI problems can be transformed to tridiagonal and block-tridiagonal forms, respectively, via different modifications of the Lanczos algorithm. Generally, such realizations can be interpreted as ladder resistor-capacitor-inductor (RCL) networks. They gave rise to network synthesis in the first half of the 20th century, which was at the base of modern electronics design, and subsequently to model order reduction (MOR), which tremendously impacted many areas of engineering (electrical, mechanical, aerospace, etc.) by enabling efficient compression of the underlying dynamical systems. In his seminal 1950s works Krein realized that, in addition to their compressing properties, network realizations can be used to embed the data back into the state space of the underlying continuum problems. In more recent works of the authors, Krein's ideas gave rise to so-called finite-difference Gaussian quadrature rules (FDGQR), allowing one to approximately map the ROM state-space representation to its full-order continuum counterpart on a judiciously chosen grid. Thus, the state variables can be accessed directly from the …
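The tridiagonal realizations mentioned above come from the standard Lanczos iteration, which reduces a symmetric operator to tridiagonal form in an orthonormal basis. The following is a textbook sketch of that step only (plain Lanczos, no reorthogonalization, hypothetical function name), not the authors' modified algorithms for passive LTI systems.

```python
import numpy as np

def lanczos_tridiagonalize(A, m=None):
    """Plain Lanczos iteration: for a symmetric matrix A, build an
    orthonormal basis Q and the tridiagonal matrix T = Q^T A Q.
    Sketch only: no reorthogonalization, so orthogonality degrades
    for large or ill-conditioned problems."""
    n = A.shape[0]
    m = n if m is None else m
    Q = np.zeros((n, m))
    alpha = np.zeros(m)       # diagonal of T
    beta = np.zeros(m - 1)    # off-diagonal of T
    q = np.random.default_rng(0).standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(m):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w = w - alpha[j] * Q[:, j]
        if j > 0:
            w = w - beta[j - 1] * Q[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:  # breakdown: invariant subspace found
                Q, alpha, beta = Q[:, : j + 1], alpha[: j + 1], beta[:j]
                break
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, T
```

In the RCL-ladder interpretation, the entries of T play the role of the network's resistor/capacitor/inductor parameters; the embedding results discussed in the abstract concern mapping this compressed representation back to the continuum.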

In a recent work (Ghazi et al., SODA 2016), the authors with Komargodski and Kothari initiated the study of communication with contextual uncertainty, a setup aiming to understand how efficient communication is possible when the communicating parties imperfectly share a huge context. In this setting, Alice is given a function f and an input string x, and Bob is given a function g and an input string y. The pair (x,y) comes from a known distribution mu, and f and g are guaranteed to be close under this distribution. Alice and Bob wish to compute g(x,y) with high probability. The lack of agreement between Alice and Bob on the function that is being computed captures the uncertainty in the context. The previous work showed that any problem with one-way communication complexity k in the standard model (i.e., without uncertainty, in other words, under the promise that f = g) has public-coin communication at most O(k(1+I)) bits in the uncertain case, where I is the mutual information between x and y. Moreover, a lower bound of Omega(sqrt{I}) bits on the public-coin uncertain communication was also shown. However, an important question that was left open is related to the power that public randomness brings …

Abstract: We continue the program of proving circuit lower bounds via circuit satisfiability algorithms. So far, this program has yielded several concrete results, proving that functions in Quasi-NP = NTIME[n^{(log n)^{O(1)}}] and other complexity classes do not have small circuits (in the worst case and/or on average) from various circuit classes 𝒞, by showing that 𝒞 admits non-trivial satisfiability and/or #SAT algorithms which beat exhaustive search by a minor amount. In this paper, we present a new strong lower bound consequence of having a non-trivial #SAT algorithm for a circuit class 𝒞. Say that a symmetric Boolean function f(x₁,…,x_n) is sparse if it outputs 1 on O(1) values of Σᵢ xᵢ. We show that for every sparse f, and for all "typical" 𝒞, faster #SAT algorithms for 𝒞 circuits imply lower bounds against the circuit class f ∘ 𝒞, which may be stronger than 𝒞 itself. In particular: (1) #SAT algorithms for n^k-size 𝒞-circuits running in 2ⁿ/n^k time (for all k) imply NEXP does not have (f ∘ 𝒞)-circuits of polynomial size; (2) #SAT algorithms for 2^{n^ε}-size 𝒞-circuits running in 2^{n−n^ε} time (for some ε > 0) imply Quasi-NP does not have (f ∘ 𝒞)-circuits of polynomial size. Applying #SAT algorithms from the literature, one immediate corollary of our results is that Quasi-NP does not have EMAJ ∘ ACC⁰ ∘ THR circuits of polynomial …
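The "sparse" condition on a symmetric function is easy to check concretely, since a symmetric function depends only on the Hamming weight of its input. The sketch below illustrates the definition with an exact-majority gate; the exact threshold convention for EMAJ is an assumption for illustration, and the helper names are hypothetical.

```python
def emaj(bits):
    """Exact-majority gate: outputs 1 iff exactly half of the inputs,
    rounded up, are 1.  (The precise threshold convention here is an
    illustrative assumption, not necessarily the paper's definition.)"""
    return int(sum(bits) == (len(bits) + 1) // 2)

def ones_weights(f, n):
    """A symmetric Boolean function depends only on the Hamming weight
    w = sum(x_i), so it suffices to evaluate f on one input per weight.
    Returns the list of weights on which f outputs 1; f is 'sparse' in
    the abstract's sense when this list has O(1) entries as n grows."""
    return [w for w in range(n + 1) if f([1] * w + [0] * (n - w))]
```

For n = 6, exact majority fires on a single weight (w = 3), so it is sparse, whereas parity fires on about half of all weights and is not.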
We study the problem of testing identity against a given distribution with a focus on the high-confidence regime. More precisely, given samples from an unknown distribution p over n elements, an explicitly given distribution q, and parameters 0 < epsilon, delta < 1, we wish to distinguish, with probability at least 1 − delta, whether the distributions are identical versus epsilon-far in total variation distance. Most prior work focused on the case that delta = Omega(1), for which the sample complexity of identity testing is known to be Theta(sqrt{n}/epsilon^2). Given such an algorithm, one can achieve arbitrarily small values of delta via black-box amplification, which multiplies the required number of samples by Theta(log(1/delta)). We show that black-box amplification is suboptimal for any delta = o(1), and give a new identity tester that achieves the optimal sample complexity. Our new upper and lower bounds show that the optimal sample complexity of identity testing is Theta((1/epsilon^2)(sqrt{n log(1/delta)} + log(1/delta))) for any n, epsilon, and delta. For the special case of uniformity testing, where the given distribution is the uniform distribution U_n over the domain, our new tester is surprisingly simple: to test whether p = U_n versus d_{TV}(p, U_n) >= epsilon, we …
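The black-box amplification baseline mentioned above is the generic reduction: repeat a constant-confidence tester Theta(log(1/delta)) times on fresh samples and take the majority verdict, which multiplies the sample cost by Theta(log(1/delta)). A minimal sketch of that baseline follows (constants and names are illustrative); it is the suboptimal reduction the abstract improves on, not the paper's new tester.

```python
import math

def amplify(base_tester, delta):
    """Generic black-box confidence amplification.

    `base_tester` is a zero-argument callable returning the verdict (0/1)
    of one run of a tester that is correct with probability >= 2/3 on
    fresh samples.  Repeating an odd Theta(log(1/delta)) number of times
    and taking the majority drives the failure probability below delta
    (by a Chernoff bound); constant factors here are illustrative."""
    reps = 2 * math.ceil(math.log(1 / delta)) + 1  # odd repetition count
    votes = sum(base_tester() for _ in range(reps))
    return int(votes > reps // 2)
```

Each repetition consumes its own Theta(sqrt{n}/epsilon^2) samples, which is exactly the multiplicative log(1/delta) overhead the paper's tester avoids.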