Title: A Simple and Efficient Strategy for the Coin Weighing Problem with a Spring Scale
This paper considers a generalized version of the coin weighing problem with a spring scale that lies at the intersection of group testing and compressed sensing problems. Given a collection of n ≥ 2 coins of total weight d (for a known integer d), where the weight of each coin is an unknown integer in the range {0, 1, ..., k} (for a known integer k ≥ 1), the goal is to determine the weight of each coin by weighing subsets of coins on a spring scale. The problem is to devise a weighing strategy that minimizes the average number of weighings over all possible weight configurations. For d = k = 1, an adaptive bisecting weighing strategy is known to be optimal. However, even the simplest non-trivial case of the problem, i.e., d = k = 2, is still open. For this case, we propose and analyze a simple and effective adaptive weighing strategy. Our analysis shows that the proposed strategy requires about 1.365 log_2(n) − 0.5 weighings on average. As n grows unbounded, the proposed strategy requires on average about 31.75% fewer weighings than an optimal strategy within the commonly used class of nested strategies, and at most about 8.16% more weighings than the information-theoretic lower bound.
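For intuition on the d = k = 1 case mentioned above, the optimal adaptive strategy is a bisection: weigh half of the remaining candidate coins, and the single coin of weight 1 lies in the weighed half exactly when the scale reads 1. The sketch below is only an illustrative toy implementation of that bisecting idea (the function names and the `weigh` oracle interface are my own), not the strategy proposed in the paper for d = k = 2.

```python
def find_unit_coin(weigh, coins):
    """Locate the single coin of weight 1 among `coins` (all other coins weigh 0).

    `weigh(subset)` models the spring scale: it returns the total weight of the subset.
    This bisecting strategy uses about ceil(log2(n)) weighings in the worst case.
    """
    candidates = list(coins)
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        if weigh(half) == 1:   # the unit-weight coin is in the weighed half
            candidates = half
        else:                  # otherwise it is in the unweighed half
            candidates = candidates[len(candidates) // 2:]
    return candidates[0]

# Toy usage: coin 5 out of n = 8 coins has weight 1, the rest weigh 0.
weights = {i: 0 for i in range(8)}
weights[5] = 1
scale = lambda subset: sum(weights[c] for c in subset)
print(find_unit_coin(scale, range(8)))  # prints 5 after 3 weighings
```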
Award ID(s):
1642983
PAR ID:
10076574
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
IEEE International Symposium on Information Theory (ISIT)
Page Range / eLocation ID:
1730 to 1734
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In the k-cut problem, we want to find the lowest-weight set of edges whose deletion breaks a given (multi)graph into k connected components. Algorithms of Karger and Stein can solve this in roughly O(n^(2k)) time. However, lower bounds from conjectures about the k-clique problem imply that Ω(n^((1-o(1))k)) time is likely needed. Recent results of Gupta, Lee, and Li have given new algorithms for general k-cut in n^(1.98k + O(1)) time, as well as specialized algorithms with better performance for certain classes of graphs (e.g., for small integer edge weights). In this work, we resolve the problem for general graphs. We show that the Contraction Algorithm of Karger outputs any fixed k-cut of weight αλ_k with probability Ω_k(n^(−αk)), where λ_k denotes the minimum k-cut weight. This also gives an extremal bound of O_k(n^k) on the number of minimum k-cuts and an algorithm to compute λ_k with roughly n^k · polylog(n) runtime. Both are tight up to lower-order factors, with the algorithmic lower bound assuming hardness of max-weight k-clique. The first main ingredient in our result is an extremal bound on the number of cuts of weight less than 2λ_k/k, using the Sunflower lemma. The second ingredient is a fine-grained analysis of how the graph shrinks, and how the average degree evolves, in the Karger process.
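A minimal sketch, assuming a small undirected weighted multigraph given as an edge list, of the random contraction idea referenced above: contract weight-proportional random edges until k supernodes remain, and repeat to estimate a minimum k-cut. The function names and representation are my own; this illustrates the process, not the paper's analysis.

```python
import random

def contract_to_k(edges, num_vertices, k):
    """One run of random edge contraction until only k supernodes remain.

    `edges` is a list of (u, v, weight) tuples for an undirected multigraph
    on vertices 0..num_vertices-1. Returns the weight of the resulting k-cut.
    """
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = num_vertices
    live = [e for e in edges if e[0] != e[1]]
    while components > k and live:
        # Pick a surviving edge with probability proportional to its weight.
        total = sum(w for _, _, w in live)
        r = random.uniform(0, total)
        for u, v, w in live:
            r -= w
            if r <= 0:
                break
        # Contract the chosen edge by merging its endpoints' supernodes.
        parent[find(u)] = find(v)
        components -= 1
        # Drop edges that became self-loops inside a supernode.
        live = [(a, b, w) for a, b, w in live if find(a) != find(b)]
    # Total weight of edges crossing the final k-way partition.
    return sum(w for u, v, w in edges if find(u) != find(v))

def approx_min_k_cut(edges, num_vertices, k, trials=500):
    """Repeat the contraction process and keep the lightest k-cut found."""
    return min(contract_to_k(edges, num_vertices, k) for _ in range(trials))

# Toy usage: a 6-cycle with unit weights; its minimum 3-cut has weight 3.
cycle = [(i, (i + 1) % 6, 1) for i in range(6)]
print(approx_min_k_cut(cycle, 6, 3))  # typically prints 3
```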
  2. This paper formulates the cache-aided multi-user Private Information Retrieval (MuPIR) problem, including K_u cache-equipped users, each of which wishes to efficiently retrieve a desired message from N distributed databases with access to K independent messages. Privacy of the users' demands requires that no individual database can learn anything about the demands of the users. The load of this problem is defined as the average number of downloaded bits per desired message bit. The goal is to find the optimal memory-load trade-off while preserving demand privacy. Besides the formulation of the MuPIR problem, the contribution of this paper is two-fold. First, we characterize the optimal memory-load trade-off for a system with N = 2 databases, K = 2 messages and K_u = 2 users demanding distinct messages. Second, a product design with an order-optimality guarantee is proposed. In addition, the product design can achieve the optimal load when the cache memory is large enough. The product design embeds the well-known Sun-Jafar PIR scheme into coded caching, in order to benefit from the coded caching gain while preserving the privacy of the users' demands.
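As a small illustration of the load metric defined in this abstract (not of the paper's scheme), the hypothetical helper below just divides total downloaded bits by desired message bits; for example, the trivial single-user baseline of privately downloading all K messages from one database has load K.

```python
def retrieval_load(total_downloaded_bits: int, desired_message_bits: int) -> float:
    """Load = average number of downloaded bits per desired message bit."""
    return total_downloaded_bits / desired_message_bits

# Toy numbers: K = 2 messages of 1000 bits each, one user with one desired message.
# Downloading everything (trivially private) costs 2000 bits for 1000 desired bits.
print(retrieval_load(2 * 1000, 1000))  # 2.0
```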
  3. We study the A-optimal design problem where we are given vectors v_1, …, v_n ∈ ℝ^d, an integer k ≥ d, and the goal is to select a set S of k vectors that minimizes the trace of (∑_{i∈S} v_i v_iᵀ)^(−1). Traditionally, the problem is an instance of optimal design of experiments in statistics [35] where each vector corresponds to a linear measurement of an unknown vector and the goal is to pick k of them that minimize the average variance of the error in the maximum likelihood estimate of the vector being measured. The problem also finds applications in sensor placement in wireless networks [22], sparse least squares regression [8], feature selection for k-means clustering [9], and matrix approximation [13, 14, 5]. In this paper, we introduce proportional volume sampling to obtain improved approximation algorithms for A-optimal design. Given a matrix, proportional volume sampling involves picking a set of columns S of size k with probability proportional to µ(S) times det(∑_{i∈S} v_i v_iᵀ) for some measure µ. Our main result is to show that the approximability of the A-optimal design problem can be reduced to approximate independence properties of the measure µ. We appeal to hardcore distributions as candidate distributions µ that allow us to obtain improved approximation algorithms for A-optimal design. Our results include a d-approximation when k = d, a (1 + ε)-approximation when and -approximation when repetitions of vectors are allowed in the solution. We also consider a generalization of the problem for k ≤ d and obtain a k-approximation. We also show that the proportional volume sampling algorithm gives approximation algorithms for other optimal design objectives (such as D-optimal design [36] and the generalized ratio objective [27]) matching or improving previous best known results. Interestingly, we show that a similar guarantee cannot be obtained for the E-optimal design problem. We also show that the A-optimal design problem is NP-hard to approximate within a fixed constant when k = d.
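A brief, hypothetical sketch to make the A-optimality objective above concrete: it evaluates trace((∑_{i∈S} v_i v_iᵀ)^(−1)) for a candidate index set S and, for tiny instances, finds the best size-k subset by brute force. It does not implement proportional volume sampling; the function names are my own.

```python
import itertools
import numpy as np

def a_objective(V: np.ndarray, S) -> float:
    """A-optimality value: trace((sum_{i in S} v_i v_i^T)^{-1}) for rows V[i], i in S."""
    M = sum(np.outer(V[i], V[i]) for i in S)
    return float(np.trace(np.linalg.inv(M)))

def brute_force_a_optimal(V: np.ndarray, k: int):
    """Exhaustively find a size-k subset minimizing the A-objective (tiny n only)."""
    best_set, best_val = None, float("inf")
    for S in itertools.combinations(range(V.shape[0]), k):
        try:
            val = a_objective(V, S)
        except np.linalg.LinAlgError:
            continue  # subset does not span R^d, so the objective is undefined
        if val < best_val:
            best_set, best_val = S, val
    return best_set, best_val

# Toy usage: n = 6 random measurement vectors in R^3, choose k = 3.
rng = np.random.default_rng(0)
V = rng.normal(size=(6, 3))
print(brute_force_a_optimal(V, 3))
```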
  4. Lysyanskaya, Anna; Handschuh, Helena (Ed.)
    We study the black-box function inversion problem, which is the problem of finding x ∈ [N] such that f(x) = y, given as input some challenge point y in the image of a function f : [N] → [N], using T oracle queries to f and preprocessed advice σ ∈ {0,1}^S depending on f. We prove a number of new results about this problem, as follows. 1. We show an algorithm that works for any T and S satisfying T·S^2·max(S, T) = Θ̃(N^3). In the important setting when S < T, this improves on the celebrated algorithm of Fiat and Naor [STOC, 1991], which requires T·S^3 ≳ N^3. E.g., Fiat and Naor's algorithm is only non-trivial for S ≳ N^(2/3), while our algorithm gives a non-trivial tradeoff for any S ≳ N^(1/2). (Our algorithm and analysis are quite simple. As a consequence of this, we also give a self-contained and simple proof of Fiat and Naor's original result, with certain optimizations left out for simplicity.) 2. We show a non-adaptive algorithm (i.e., an algorithm whose i-th query x_i is chosen based entirely on σ and y, and not on f(x_1), …, f(x_{i−1})) that works for any T and S satisfying S = Θ̃(N/log(N/T)), giving the first non-trivial non-adaptive algorithm for this problem. E.g., setting T = N/polylog(N) gives S = Θ̃(N/log log N). This answers a question due to Corrigan-Gibbs and Kogan [TCC, 2019], who asked whether it was possible for a non-adaptive algorithm to work with parameters T and S satisfying T + S·log N = o(N). We also observe that our non-adaptive algorithm is what we call a guess-and-check algorithm, that is, it is non-adaptive and its final output is always one of the oracle queries x_1, …, x_T. For guess-and-check algorithms, we prove a matching lower bound, therefore completely characterizing the achievable parameters (S, T) for this natural class of algorithms. (Corrigan-Gibbs and Kogan showed that any such lower bound for arbitrary non-adaptive algorithms would imply new circuit lower bounds.) 3. We show equivalence between function inversion and a natural decision version of the problem in both the worst case and the average case, and similarly for functions f : [N] → [M] with different ranges. All of the above results are most naturally described in a model with shared randomness (i.e., random coins shared between the preprocessing algorithm and the online algorithm). However, as an additional contribution, we show (using a technique from communication complexity due to Newman [IPL, 1991]) how to generically convert any algorithm that uses shared randomness into one that does not.
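The toy sketch below (my own naive baseline, not the algorithm from this paper or the Fiat-Naor construction) only illustrates the preprocessing/online model with parameters S and T: the advice is a small table of precomputed (f(x), x) pairs, and the online phase spends up to T additional oracle queries at random points. Real algorithms in this area achieve far better tradeoffs.

```python
import random

def preprocess(f, N: int, S: int):
    """Advice: remember f on S random points (about S table entries)."""
    xs = random.sample(range(N), S)
    return {f(x): x for x in xs}

def invert(f, N: int, T: int, advice, y):
    """Online phase: consult the advice table, then spend up to T oracle queries."""
    if y in advice:
        return advice[y]
    for _ in range(T):
        x = random.randrange(N)
        if f(x) == y:  # one oracle query to f
            return x
    return None  # failed to invert within the query budget

# Toy usage on a random function over a small domain [N].
N = 1 << 12
table = [random.randrange(N) for _ in range(N)]
f = table.__getitem__
advice = preprocess(f, N, S=64)
y = f(random.randrange(N))
print(invert(f, N, 512, advice, y))  # a preimage of y, or None if the budget runs out
```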
  5. We develop a general framework, called approximately-diverse dynamic programming (ADDP), that can be used to generate a collection of k ≥ 2 maximally diverse solutions to various geometric and combinatorial optimization problems. Given an approximation factor 0 ≤ c ≤ 1, this framework also allows for maximizing diversity in the larger space of c-approximate solutions. We focus on two geometric problems to showcase this technique: 1. Given a polygon P, an integer k ≥ 2 and a value c ≤ 1, generate k maximally diverse c-nice triangulations of P. Here, a c-nice triangulation is one that is c-approximately optimal with respect to a given quality measure σ. 2. Given a planar graph G, an integer k ≥ 2 and a value c ≤ 1, generate k maximally diverse c-optimal Independent Sets (or Vertex Covers). Here, an independent set S is said to be c-optimal if |S| ≥ c|S′| for any independent set S′ of G. Given a set of k solutions to the above problems, the diversity measure we focus on is the average distance between the solutions, where d(X, Y) = |X Δ Y|. For arbitrary polygons and a wide range of quality measures, we give poly(n, k)-time (1 − Θ(1/k))-approximation algorithms for the diverse triangulation problem. For the diverse independent set and vertex cover problems on planar graphs, we give an algorithm that runs in time 2^(O(k·δ^(−1)·ϵ^(−2))) · n^(O(1/ϵ)) and returns (1 − ϵ)-approximately diverse (1 − δ)c-optimal independent sets or vertex covers. Our triangulation results are the first algorithmic results on computing collections of diverse geometric objects, and our planar graph results are the first PTAS for the diverse versions of any NP-complete problem. Additionally, we also provide applications of this technique to diverse variants of other geometric problems.
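A small illustrative helper (hypothetical, not from the paper) for the diversity measure used above: the average pairwise symmetric-difference distance d(X, Y) = |X Δ Y| over a collection of k solution sets.

```python
from itertools import combinations

def average_diversity(solutions):
    """Average pairwise symmetric-difference distance over a list of solution sets."""
    pairs = list(combinations(solutions, 2))
    if not pairs:
        return 0.0
    return sum(len(X ^ Y) for X, Y in pairs) / len(pairs)

# Toy usage: three independent sets (as vertex sets) of a small graph.
print(average_diversity([{0, 2, 4}, {1, 3, 5}, {0, 3, 5}]))  # 4.0
```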