Title: Comparing Weak References For Quantum Long-Baseline Telescopy
We compare the signal-to-noise ratio for different measurements that could be used for stellar interferometry. We find that single-photon sources with number-resolved detection outperform other weak local oscillator states.
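For context, the abstract does not spell out its figure of merit; a conventional definition of the signal-to-noise ratio for a measured observable $\hat{S}$ (our notation, not the paper's) is

```latex
\mathrm{SNR} \;=\; \frac{\bigl|\langle \hat{S} \rangle\bigr|}{\sqrt{\langle \hat{S}^{2} \rangle - \langle \hat{S} \rangle^{2}}},
```

i.e., the mean of the signal divided by the standard deviation of its noise; candidate weak reference states would be ranked by a figure of merit of this kind.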
Award ID(s):
2326803
PAR ID:
10538657
Author(s) / Creator(s):
Publisher / Repository:
Optica
Date Published:
Format(s):
Medium: X
Location:
Rotterdam, Netherlands
Sponsoring Org:
National Science Foundation
More Like this
  1. We study the connection between multicalibration and boosting for squared error regression. First we prove a useful characterization of multicalibration in terms of a "swap regret"-like condition on squared error. Using this characterization, we give an exceedingly simple algorithm that can be analyzed both as a boosting algorithm for regression and as a multicalibration algorithm for a class H, using only a standard squared error regression oracle for H. We give a weak learning assumption on H that ensures convergence to Bayes optimality without the need for any realizability assumptions, giving us an agnostic boosting algorithm for regression. We then show that our weak learning assumption on H is both necessary and sufficient for multicalibration with respect to H to imply Bayes optimality. We also show that if H satisfies our weak learning condition relative to another class C, then multicalibration with respect to H implies multicalibration with respect to C. Finally, we investigate the empirical performance of our algorithm experimentally, using an open-source implementation that we make available.
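As a minimal sketch of the kind of residual-fitting loop described above (the oracle interface `fit_oracle` and the stopping threshold are our illustrative assumptions, not the paper's specification):

```python
import numpy as np

def boost_regression(X, y, fit_oracle, max_rounds=100, gamma=1e-4):
    """Agnostic boosting for squared error via a regression oracle.

    fit_oracle(X, r) is assumed to return a callable h in the class H
    that approximately minimizes squared error against targets r.
    The loop stops once no oracle fit improves mean squared error by
    more than gamma, a stand-in for the weak learning condition.
    """
    f = np.zeros(len(y))            # current predictor evaluated on X
    ensemble = []
    for _ in range(max_rounds):
        r = y - f                   # residuals of the current predictor
        h = fit_oracle(X, r)
        hx = h(X)
        if np.mean(r**2) - np.mean((r - hx)**2) <= gamma:
            break                   # weak learner no longer helps
        f = f + hx
        ensemble.append(h)
    return ensemble, f
```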
  2. Universal displacements are those displacements that can be maintained, in the absence of body forces, by applying only boundary tractions, for any material in a given class of materials. Therefore, the equilibrium equations must be satisfied for arbitrary elastic moduli in a given anisotropy class. These conditions can be expressed as a set of partial differential equations for the displacement field that we call universality constraints. The classification of universal displacements in homogeneous linear elasticity has been completed for all eight anisotropy classes. Here, we extend our previous work by studying universal displacements in inhomogeneous anisotropic linear elasticity, assuming that the directions of anisotropy are known. We show that the universality constraints of inhomogeneous linear elasticity include those of homogeneous linear elasticity. For each class and its known universal displacements, we find the most general inhomogeneous elastic moduli that are consistent with the universality constraints. It is known that the larger the symmetry group, the larger the space of universal displacements. We show that the larger the symmetry group, the more severe the universality constraints are on the inhomogeneities of the elastic moduli. In particular, we show that inhomogeneous isotropic and inhomogeneous cubic linear elastic solids do not admit universal displacements, and we completely characterize the universal inhomogeneities for the other six anisotropy classes.
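To make the constraint concrete (standard linear elasticity notation, ours rather than the paper's): with stress $\sigma_{ij} = C_{ijkl}(x)\,\varepsilon_{kl}$ and strain $\varepsilon_{kl} = \frac{1}{2}(\partial_k u_l + \partial_l u_k)$, a displacement $u$ is universal when the equilibrium equations

```latex
\partial_j \bigl( C_{ijkl}(x)\, \varepsilon_{kl}(u) \bigr) = 0, \qquad i = 1, 2, 3,
```

hold for every modulus field $C$ admissible in the anisotropy class; demanding this for arbitrary such $C$ yields the universality constraints on $u$.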
  3. We develop an algorithm for solving the general grade-two model of non-Newtonian fluids that, for the first time, includes inflow boundary conditions. The algorithm also allows both of the rheological parameters to be chosen independently. The proposed algorithm couples a Stokes equation for the velocity with a transport equation for an auxiliary vector-valued function. We prove that this model is well posed using the algorithm, which we show converges geometrically in suitable Sobolev spaces for sufficiently small data. We demonstrate computationally that this algorithm can be successfully discretized and that it can converge to solutions for model parameters of order one. We include in the appendix a description of appropriate boundary conditions for the auxiliary variable in standard geometries.
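Schematically (the operator names below are ours; the abstract does not give the equations), the coupled iteration alternates a Stokes solve and a transport solve,

```latex
(u^{n+1}, p^{n+1}) = \mathcal{S}(z^{n}), \qquad z^{n+1} = \mathcal{T}(u^{n+1}),
```

where $\mathcal{S}$ solves the Stokes system with data built from the auxiliary variable $z^{n}$ and $\mathcal{T}$ solves the transport equation driven by $u^{n+1}$; geometric convergence means $\lVert z^{n+1} - z^{*} \rVert \le \rho\, \lVert z^{n} - z^{*} \rVert$ for some $\rho < 1$ when the data are sufficiently small.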
  4. A two-part tariff is a pricing scheme that consists of an up-front lump sum fee and a per-unit fee. Various products in the real world are sold via a menu, or list, of two-part tariffs: for example, gym memberships and cell phone data plans. We study learning high-revenue menus of two-part tariffs from buyer valuation data, in the setting where the mechanism designer has access to samples from the distribution over buyers' values rather than an explicit description thereof. Our algorithms have clear direct uses and provide the missing piece for the recent generalization theory of two-part tariffs. We present a polynomial-time algorithm for optimizing a single two-part tariff. We also present an algorithm for optimizing a length-L menu of two-part tariffs with run time exponential in L but polynomial in all other problem parameters. We then generalize the problem to multiple markets. We prove how many samples suffice to guarantee that a two-part tariff scheme that is feasible on the samples is also feasible on a new problem instance with high probability. We then show that computing revenue-maximizing feasible prices is hard even for buyers with additive valuations. Then, for buyers with identical valuation distributions, we present a condition that is sufficient for the two-part tariff scheme from the unsegmented setting to be optimal for the market-segmented setting. Finally, we prove a generalization result stating how many samples suffice so that we can compute the unsegmented solution on the samples and still be guaranteed a near-optimal solution for the market-segmented setting with high probability.
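To make the pricing scheme concrete, here is a small sketch of empirical revenue under a menu (the quasilinear buyer model with a value curve `values[q]` for q units is a standard assumption we supply; it is not spelled out in the abstract):

```python
def buyer_payment(values, menu):
    """Payment made by one buyer facing a menu of two-part tariffs.

    values[q] is the buyer's value for q units (values[0] == 0);
    menu is a list of (lump_sum, per_unit) tariffs. The buyer picks
    the tariff and quantity maximizing value minus payment, with the
    outside option of buying nothing (utility 0, payment 0).
    """
    best_utility, best_payment = 0.0, 0.0
    for lump_sum, per_unit in menu:
        for q in range(1, len(values)):
            payment = lump_sum + per_unit * q
            utility = values[q] - payment
            if utility > best_utility:
                best_utility, best_payment = utility, payment
    return best_payment

def empirical_revenue(sampled_value_curves, menu):
    """Average revenue of the menu over sampled buyer value curves."""
    return sum(buyer_payment(v, menu) for v in sampled_value_curves) / len(sampled_value_curves)

# One sampled buyer valuing 1 unit at 8 and 2 units at 13, facing a
# two-tariff menu; the buyer picks tariff (3, 2) at q = 2, paying 7.
print(empirical_revenue([[0, 8, 13]], [(3.0, 2.0), (0.0, 6.0)]))  # 7.0
```

A sample-based learner of the kind the abstract studies would search over menus to maximize exactly this empirical revenue.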
  5. A t-emulator of a graph G is a graph H that approximates its pairwise shortest path distances up to a multiplicative factor of t. We study fault tolerant t-emulators, under the model recently introduced by Bodwin, Dinitz, and Nazari [ITCS 2022] for vertex failures. In this paper we consider the version for edge failures, and show that it exhibits surprisingly different behavior. In particular, our main result is that, for (2k-1)-emulators with k odd, we can tolerate a polynomial number of edge faults for free. For example: for any n-node input graph, we construct a 5-emulator (k = 3) on O(n^{4/3}) edges that is robust to f = O(n^{2/9}) edge faults. It is well known that Ω(n^{4/3}) edges are necessary even if the 5-emulator does not need to tolerate any faults. Thus we pay no extra cost in size to gain this fault tolerance. We leave open the precise range of free fault tolerance for odd k, and whether a similar phenomenon can be proved for even k.
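For reference, the (non-faulty) emulator property can be checked directly from the standard definition, distances in H at least those in G and at most t times larger (the checker below assumes unweighted graphs for simplicity; emulators in general may carry edge weights, and this is our illustration, not the paper's construction):

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted single-source shortest-path distances."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_t_emulator(adj_g, adj_h, t):
    """Check dist_G(u,v) <= dist_H(u,v) <= t * dist_G(u,v) for all pairs.

    adj_g, adj_h map each vertex to its neighbors; H is defined on the
    same vertex set but need not be a subgraph of G (emulators may
    connect arbitrary vertex pairs).
    """
    for s in adj_g:
        dg = bfs_distances(adj_g, s)
        dh = bfs_distances(adj_h, s)
        for v, d in dg.items():
            if v not in dh or not (d <= dh[v] <= t * d):
                return False
    return True
```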