Title: Entropy Under Additive Bernoulli and Spherical Noises
Let $$Z^n$$ be i.i.d. $$\text{Ber}(\delta)$$ and let $$U^n$$ be uniform on the set of all binary vectors of weight $$\delta n$$ (the Hamming sphere). As is well known, the entropies of $$Z^n$$ and $$U^n$$ are within $$O(\log n)$$ of each other. However, if $$X^n$$ is another binary random vector independent of $$Z^n$$ and $$U^n$$, we show that $$H(X^n+U^n)$$ and $$H(X^n+Z^n)$$ are within $$O(\sqrt{n})$$, and this estimate is tight. The bound is shown via a coupling argument. Tightness follows from the observation that the channels $$x^n\mapsto x^n+U^n$$ and $$x^n\mapsto x^n+Z^n$$ have similar capacities, but the former has zero dispersion. Finally, we show that despite the $$\sqrt{n}$$ slack in general, Mrs. Gerber's Lemma for $$H(X^n+U^n)$$ holds with only an $$O(\log n)$$ correction compared to its counterpart for $$H(X^n+Z^n)$$.
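As a quick sanity check on the first claim (an illustrative computation of ours, assuming $$\delta n$$ is an integer; it is not text from the paper): writing $$h(\delta)=-\delta\log\delta-(1-\delta)\log(1-\delta)$$ for the binary entropy function, we have $$H(Z^n)=nh(\delta)$$, while Stirling's approximation gives
$$H(U^n)=\log\binom{n}{\delta n}=nh(\delta)-\tfrac{1}{2}\log\bigl(2\pi n\,\delta(1-\delta)\bigr)+O(1),$$
so the marginal entropies indeed differ by only $$O(\log n)$$, whereas the result above says the gap after adding an independent $$X^n$$ can be as large as order $$\sqrt{n}$$.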
Award ID(s):
1717842 1253205
PAR ID:
10063532
Author(s) / Creator(s):
;
Date Published:
Journal Name:
2018 IEEE International Symposium on Information Theory (ISIT)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Existing parallel algorithms for wavelet tree construction have a work complexity of $$O(n\log\sigma)$$. This paper presents parallel algorithms for the problem with improved work complexity. Our first algorithm is based on parallel integer sorting and has either $$O(n\log\log n\lceil\log\sigma/\sqrt{\log n\log\log n}\rceil)$$ work and polylogarithmic depth, or $$O(n\lceil\log\sigma/\sqrt{\log n}\rceil)$$ work and sub-linear depth. We also describe another algorithm that has $$O(n\lceil\log\sigma/\sqrt{\log n} \rceil)$$ work and $$O(\sigma+\log n)$$ depth. We then show how to use similar ideas to construct variants of wavelet trees (arbitrary-shaped binary trees and multiary trees) as well as wavelet matrices in parallel with lower work complexity than prior algorithms. Finally, we show that the rank and select structures on binary sequences and multiary sequences, which are stored on wavelet tree nodes, can be constructed in parallel with improved work bounds, matching those of the best existing sequential algorithms for constructing rank and select structures. 
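    For context on the object being built, the sketch below is a minimal sequential wavelet tree with rank queries; it is only an illustrative baseline of the data structure (the class name and example sequence are ours) and does not reflect the parallel, work-efficient constructions described above.

```python
# Minimal sequential wavelet tree over an integer alphabet, with rank queries.
# Illustrative baseline only; NOT the parallel construction from the paper.

class WaveletTree:
    def __init__(self, seq, lo=None, hi=None):
        # seq: list of integers drawn from the alphabet [lo, hi]
        if lo is None:
            lo, hi = min(seq), max(seq)
        self.lo, self.hi = lo, hi
        self.left = self.right = None
        self.bits = []
        if lo == hi or not seq:
            return  # leaf (single symbol) or empty node
        mid = (lo + hi) // 2
        # Bit vector at this node: 0 = symbol routed left, 1 = routed right.
        self.bits = [0 if c <= mid else 1 for c in seq]
        self.left = WaveletTree([c for c in seq if c <= mid], lo, mid)
        self.right = WaveletTree([c for c in seq if c > mid], mid + 1, hi)

    def rank(self, c, i):
        """Number of occurrences of symbol c among the first i symbols."""
        if self.lo == self.hi or i == 0:
            return i
        mid = (self.lo + self.hi) // 2
        if c <= mid:
            i = sum(1 for b in self.bits[:i] if b == 0)  # copies routed left
            return self.left.rank(c, i)
        i = sum(1 for b in self.bits[:i] if b == 1)       # copies routed right
        return self.right.rank(c, i)

wt = WaveletTree([3, 1, 4, 1, 5, 2, 6, 5, 3, 5])
assert wt.rank(5, 9) == 2   # two 5s among the first nine symbols
assert wt.rank(1, 10) == 2  # two 1s in the whole sequence
```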
  2. The noise sensitivity of a Boolean function $$f: \{0,1\}^n \to \{0,1\}$$ is one of its fundamental properties. For noise parameter $$\delta$$, the noise sensitivity is denoted $$NS_{\delta}[f]$$. This quantity is defined as follows: first, pick $$x = (x_1,\ldots,x_n)$$ uniformly at random from $$\{0,1\}^n$$, then pick $$z$$ by flipping each $$x_i$$ independently with probability $$\delta$$. $$NS_{\delta}[f]$$ is defined to equal $$\Pr[f(x) \neq f(z)]$$. Much of the existing literature on noise sensitivity explores the following two directions: (1) showing that functions with low noise sensitivity are structured in certain ways, and (2) mathematically showing that certain classes of functions have low noise sensitivity. Combined, these two research directions show that certain classes of functions have low noise sensitivity and therefore have useful structure. The fundamental importance of noise sensitivity, together with this wealth of structural results, motivates the algorithmic question of approximating $$NS_{\delta}[f]$$ given oracle access to the function $$f$$. We show that the standard sampling approach is essentially optimal for general Boolean functions. Therefore, we focus on estimating the noise sensitivity of monotone functions, which form an important subclass of Boolean functions, since many functions of interest are either monotone or can be simply transformed into a monotone function (for example, the class of unate functions consists of all the functions that can be made monotone by reorienting some of their coordinates [O'Donnell, 2014]). Specifically, we study the algorithmic problem of approximating $$NS_{\delta}[f]$$ for monotone $$f$$, given the promise that $$NS_{\delta}[f] \geq 1/n^{C}$$ for constant $$C$$, and for $$\delta$$ in the range $$1/n \leq \delta \leq 1/2$$. For such $$f$$ and $$\delta$$, we give a randomized algorithm performing $$O\!\left(\frac{\min(1,\sqrt{n}\,\delta\log^{1.5} n)}{NS_{\delta}[f]}\,\mathrm{poly}(1/\epsilon)\right)$$ queries and approximating $$NS_{\delta}[f]$$ to within a multiplicative factor of $$(1 \pm \epsilon)$$. Given the same constraints on $$f$$ and $$\delta$$, we also prove a lower bound of $$\Omega\!\left(\frac{\min(1,\sqrt{n}\,\delta)}{NS_{\delta}[f]\cdot n^{\xi}}\right)$$ on the query complexity of any algorithm that approximates $$NS_{\delta}[f]$$ to within any constant factor, where $$\xi$$ can be any positive constant. Thus, our algorithm's query complexity is close to optimal in terms of its dependence on $$n$$. We introduce a novel descending-ascending view of noise sensitivity, and use it as a central tool for the analysis of our algorithm. To prove lower bounds on query complexity, we develop a technique that reduces computational questions about query complexity to combinatorial questions about the existence of "thin" functions with certain properties. The existence of such "thin" functions is proved using the probabilistic method. These techniques also yield new lower bounds on the query complexity of approximating other fundamental properties of Boolean functions: the total influence and the bias.
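    To make the estimated quantity concrete, here is the standard Monte Carlo sampling estimator for $$NS_{\delta}[f]$$ that the paper shows is essentially optimal for general Boolean functions; it is a sketch of that baseline, not the paper's improved algorithm for monotone functions, and the function names and sample counts are ours.

```python
import random

def noise_sensitivity_estimate(f, n, delta, num_samples=20_000):
    """Standard sampling estimator for NS_delta[f]: draw x uniformly from
    {0,1}^n, flip each bit independently with probability delta to get z,
    and report the fraction of samples with f(x) != f(z)."""
    disagreements = 0
    for _ in range(num_samples):
        x = [random.randint(0, 1) for _ in range(n)]
        z = [b ^ 1 if random.random() < delta else b for b in x]
        disagreements += f(x) != f(z)
    return disagreements / num_samples

# Example: majority on n = 101 bits (a monotone function).
n = 101
majority = lambda bits: int(sum(bits) > n // 2)
print(noise_sensitivity_estimate(majority, n, delta=0.05))
```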
  3. Abstract: We consider the quadratic Zakharov-Kuznetsov equation $$\partial_t u + \partial_x \Delta u + \partial_x u^2=0$$ on $$\mathbb{R}^3$$. A solitary wave solution is given by $$Q(x-t,y,z)$$, where $$Q$$ is the ground state solution to $$-Q+\Delta Q+Q^2=0$$. We prove the asymptotic stability of these solitary wave solutions. Specifically, we show that initial data close to $$Q$$ in the energy space evolves to a solution that, as $$t\to\infty$$, converges to a rescaling and shift of $$Q(x-t,y,z)$$ in $$L^2$$ in a rightward-shifting region $$x>\delta t-\tan\theta\sqrt{y^2+z^2}$$ for $$0\leq\theta\leq\frac{\pi}{3}-\delta$$.
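    As a one-line consistency check (our computation, not text from the abstract) of why the traveling wave solves the equation: substituting $$u(t,x,y,z)=Q(x-t,y,z)$$ into $$\partial_t u + \partial_x \Delta u + \partial_x u^2=0$$ gives
    $$-\partial_x Q+\partial_x\Delta Q+\partial_x Q^2=\partial_x\bigl(-Q+\Delta Q+Q^2\bigr)=0,$$
    which holds once $$Q$$ satisfies the ground state equation $$-Q+\Delta Q+Q^2=0$$ (integrating in $$x$$ and using decay at infinity to discard the constant of integration).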
  4. ABSTRACT We analyse the rest-optical emission-line ratios of z ∼ 1.5 galaxies drawn from the Multi-Object Spectrometer for Infra-Red Exploration Deep Evolution Field (MOSDEF) survey. Using composite spectra, we investigate the mass–metallicity relation (MZR) at z ∼ 1.5 and measure its evolution to z = 0. When using gas-phase metallicities based on the N2 line ratio, we find that the MZR evolution from z ∼ 1.5 to z = 0 depends on stellar mass, evolving by $$\Delta \log(\rm O/H) \sim 0.25$$ dex at M* < $$10^{9.75}\, \mathrm{M}_{\odot }$$ down to $$\Delta \log(\rm O/H) \sim 0.05$$ at M* ≳ $$10^{10.5}\, \mathrm{M}_{\odot }$$. In contrast, the O3N2-based MZR shows a constant offset of $$\Delta \log(\rm O/H) \sim 0.30$$ across all masses, consistent with previous MOSDEF results based on independent metallicity indicators, and suggesting that O3N2 provides a more robust metallicity calibration for our z ∼ 1.5 sample. We investigate the secondary dependence of the MZR on star formation rate (SFR) by measuring correlated scatter about the mean M*-specific SFR and M*−$$\log (\rm O3N2)$$ relations. We find an anticorrelation between $$\log (\rm O/H)$$ and sSFR offsets, indicating the presence of an M*−SFR−Z relation, though with limited significance. Additionally, we find that our z ∼ 1.5 stacks lie along the z = 0 metallicity sequence at fixed μ = log (M*/M⊙) − 0.6 × $$\log (\rm SFR / M_{\odot } \, yr^{-1})$$, suggesting that the z ∼ 1.5 stacks can be described by the z = 0 fundamental metallicity relation (FMR). However, using different calibrations can shift the calculated metallicities off of the local FMR, indicating that appropriate calibrations are essential for understanding metallicity evolution with redshift. Finally, understanding how [N ii]/H α scales with galaxy properties is crucial to accurately describe the effects of blended [N ii] and H α on redshift and H α flux measurements in future large surveys utilizing low-resolution spectra such as Euclid and the Roman Space Telescope.
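    For readers unfamiliar with the projection used above, the sketch below simply evaluates $$\mu = \log(M_*/\mathrm{M}_{\odot}) - 0.6\,\log(\mathrm{SFR}/\mathrm{M}_{\odot}\,\mathrm{yr}^{-1})$$; the stellar masses and SFRs are made-up example values, not MOSDEF measurements.

```python
import numpy as np

# Illustrative only: the FMR projection axis mu used above,
# mu = log10(M*/Msun) - 0.6 * log10(SFR / (Msun/yr)).
# The values below are invented examples, not MOSDEF data.
log_mstar = np.array([9.6, 10.0, 10.4, 10.8])  # log10(M*/Msun)
log_sfr   = np.array([0.7, 1.0, 1.2, 1.4])     # log10(SFR / (Msun/yr))

mu = log_mstar - 0.6 * log_sfr
print(mu)  # [9.18 9.4  9.68 9.96]
```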
  5. Given a set $$P$$ of $$n$$ points in the plane, we consider the problem of computing the number of points of $$P$$ in a query unit disk (i.e., all query disks have the same radius). We show that the main techniques for simplex range searching in the plane can be adapted to this problem. For example, by adapting Matoušek's results, we can build a data structure of $$O(n)$$ space in $$O(n^{1+\delta})$$ time (for any $$\delta>0$$) so that each query can be answered in $$O(\sqrt{n})$$ time; alternatively, we can build a data structure of $$O(n^2/\log^2 n)$$ space with $$O(n^{1+\delta})$$ preprocessing time (for any $$\delta>0$$) and $$O(\log n)$$ query time. Our techniques lead to improvements for several other classical problems in computational geometry.
     1. Given a set of $$n$$ unit disks and a set of $$n$$ points in the plane, the batched unit-disk range counting problem is to compute for each disk the number of points in it. Previous work [Katz and Sharir, 1997] solved the problem in $$O(n^{4/3}\log n)$$ time. We give a new algorithm of $$O(n^{4/3})$$ time, which is optimal as it matches an $$\Omega(n^{4/3})$$-time lower bound. For small $$\chi$$, where $$\chi$$ is the number of pairs of unit disks that intersect, we further improve the algorithm to $$O(n^{2/3}\chi^{1/3}+n^{1+\delta})$$ time, for any $$\delta>0$$.
     2. The above result immediately leads to an $$O(n^{4/3})$$-time optimal algorithm for counting the intersecting pairs of circles for a set of $$n$$ unit circles in the plane. The previous best algorithms solve the problem in $$O(n^{4/3}\log n)$$ deterministic time [Katz and Sharir, 1997] or in $$O(n^{4/3}\log^{2/3} n)$$ expected time by a randomized algorithm [Agarwal, Pellegrini, and Sharir, 1993].
     3. Given a set $$P$$ of $$n$$ points in the plane and an integer $$k$$, the distance selection problem is to find the $$k$$-th smallest distance among all pairwise distances of $$P$$. The problem can be solved in $$O(n^{4/3}\log^2 n)$$ deterministic time [Katz and Sharir, 1997] or in $$O(n\log n+n^{2/3}k^{1/3}\log^{5/3}n)$$ expected time by a randomized algorithm [Chan, 2001]. Our new randomized algorithm runs in $$O(n\log n +n^{2/3}k^{1/3}\log n)$$ expected time.
     4. Given a set $$P$$ of $$n$$ points in the plane, the discrete $$2$$-center problem is to compute two smallest congruent disks whose centers are in $$P$$ and whose union covers $$P$$. An $$O(n^{4/3}\log^5 n)$$-time algorithm was known [Agarwal, Sharir, and Welzl, 1998]. Our techniques yield a deterministic algorithm of $$O(n^{4/3}\log^{10/3} n\cdot (\log\log n)^{O(1)})$$ time and a randomized algorithm of $$O(n^{4/3}\log^3 n\cdot (\log\log n)^{1/3})$$ expected time.
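    To fix the problem statement, here is a naive $$O(n)$$-per-query baseline of ours for unit-disk range counting; it is not the paper's data structures, which answer the same queries in $$O(\sqrt{n})$$ or $$O(\log n)$$ time after preprocessing, and the function names and sample points are invented.

```python
import random

def unit_disk_count(points, center, radius=1.0):
    """Naive O(n)-per-query baseline: count the points lying in the
    query disk of fixed radius centered at `center`."""
    cx, cy = center
    r2 = radius * radius
    return sum((x - cx) ** 2 + (y - cy) ** 2 <= r2 for x, y in points)

def batched_unit_disk_count(points, centers, radius=1.0):
    # Batched version of the counting problem: one count per query disk.
    return [unit_disk_count(points, c, radius) for c in centers]

random.seed(0)
P = [(random.uniform(0, 4), random.uniform(0, 4)) for _ in range(1000)]
print(batched_unit_disk_count(P, [(1.0, 1.0), (2.5, 3.0)]))
```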