
Title: A Simple and Efficient Strategy for the Coin Weighing Problem with a Spring Scale
This paper considers a generalized version of the coin weighing problem with a spring scale that lies at the intersection of group testing and compressed sensing. Given a collection of n ≥ 2 coins of total weight d (for a known integer d), where the weight of each coin is an unknown integer in the range {0, 1, ..., k} (for a known integer k ≥ 1), the goal is to determine the weight of each coin by weighing subsets of coins on a spring scale. The problem is to devise a weighing strategy that minimizes the average number of weighings over all possible weight configurations. For d = k = 1, an adaptive bisecting weighing strategy is known to be optimal. However, even the simplest non-trivial case of the problem, i.e., d = k = 2, is still open. For this case, we propose and analyze a simple and effective adaptive weighing strategy. Our analysis shows that the proposed strategy requires about 1.365 log₂ n - 0.5 weighings on average. As n grows unbounded, the proposed strategy requires about 31.75% fewer weighings on average than an optimal strategy within the commonly used class of nested strategies, and at most about 8.16% more weighings on average than the information-theoretic lower bound.
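
The d = k = 1 baseline mentioned in the abstract is easy to make concrete. Below is a minimal Python sketch (our own illustration, not code from the paper) of the optimal adaptive bisecting strategy: exactly one coin has weight 1, each weighing returns the total weight of the chosen subset, and halving the candidate set takes about log2(n) weighings.

```python
def find_heavy_coin(weights):
    """Locate the single coin of weight 1 among n coins of weight 0 or 1,
    using adaptive bisection with roughly log2(n) spring-scale weighings."""
    candidates = list(range(len(weights)))
    num_weighings = 0
    while len(candidates) > 1:
        half = candidates[:len(candidates) // 2]
        num_weighings += 1
        # A spring scale reports the total weight of the weighed subset.
        if sum(weights[i] for i in half) == 1:
            candidates = half
        else:
            candidates = candidates[len(candidates) // 2:]
    return candidates[0], num_weighings

# Example: 8 coins, only coin 5 has weight 1 -> found in 3 weighings.
w = [0] * 8
w[5] = 1
print(find_heavy_coin(w))  # (5, 3)
```

The open d = k = 2 case studied in the paper is harder precisely because a reading of 1 no longer pins the weight down to a single coin, so a simple bisection argument does not apply.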
Authors:
Award ID(s):
1642983
Publication Date:
NSF-PAR ID:
10076574
Journal Name:
IEEE International Symposium on Information Theory (ISIT)
Page Range or eLocation-ID:
1730 to 1734
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper formulates the cache-aided multi-user Private Information Retrieval (MuPIR) problem, including K_u cache-equipped users, each of which wishes to efficiently retrieve a desired message from N distributed databases with access to K independent messages. Privacy of the users' demands requires that no individual database can learn anything about the demands of the users. The load of this problem is defined as the average number of downloaded bits per desired message bit. The goal is to find the optimal memory-load trade-off while preserving demand privacy. Besides the formulation of the MuPIR problem, the contribution of this paper is two-fold. First, we characterize the optimal memory-load trade-off for a system with N = 2 databases, K = 2 messages, and K_u = 2 users demanding distinct messages. Second, a product design with an order-optimality guarantee is proposed; in addition, the product design can achieve the optimal load when the cache memory is large enough. The product design embeds the well-known Sun-Jafar PIR scheme into coded caching, in order to benefit from the coded caching gain while preserving the privacy of the users' demands.
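
As a point of reference for the product design above, the snippet below (our own illustration; the function name is ours) evaluates the single-user Sun-Jafar download cost 1 + 1/N + ... + 1/N^(K-1), the PIR building block that the product design embeds into coded caching.

```python
def sun_jafar_download_cost(N: int, K: int) -> float:
    """Optimal single-user PIR download cost (downloaded bits per desired
    message bit) for N replicated databases and K messages (Sun-Jafar)."""
    return sum(N ** -i for i in range(K))

# The N = 2, K = 2 setting studied in the paper:
print(sun_jafar_download_cost(2, 2))  # 1.5
```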
  2. Private information retrieval (PIR) allows a user to retrieve a desired message out of K possible messages from N databases without revealing the identity of the desired message. There has been significant recent progress on understanding the fundamental information-theoretic limits of PIR, and in particular the download cost of PIR for several variations. The majority of existing works, however, assume the presence of replicated databases, each storing all K messages. In this work, we consider the problem of PIR from storage-constrained databases. Each database has a storage capacity of μKL bits, where K is the number of messages, L is the size of each message in bits, and μ ∈ [1/N, 1] is the normalized storage. In the storage-constrained PIR problem, there are two key design questions: a) how to store content across each database under storage constraints; and b) how to construct schemes that allow efficient PIR through storage-constrained databases. The main contribution of this work is a general achievable scheme for PIR from storage-constrained databases for any value of storage. In particular, for any (N, K), with normalized storage μ = t/N, where the parameter t can take integer values t ∈ {1, 2, ..., N}, we show that our proposed PIR scheme achieves a download cost of (1 + 1/t + 1/t^2 + ⋯ + 1/t^(K-1)). The extreme case when μ = 1 (i.e., t = N) corresponds to the setting of replicated databases with full storage. For this extremal setting, our scheme recovers the information-theoretically optimal download cost characterized by Sun and Jafar as (1 + 1/N + 1/N^2 + ⋯ + 1/N^(K-1)). For the other extreme, when μ = 1/N (i.e., t = 1), the proposed scheme achieves a download cost of K. The most interesting aspect of the result is that for intermediate values of storage, i.e., 1/N < μ < 1, the proposed scheme can strictly outperform memory-sharing between the extreme values of storage.
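
The trade-off stated in this abstract is easy to tabulate. The sketch below (our own illustration; function names are ours, and only the costs stated above are used) compares the proposed scheme's download cost at normalized storage μ = t/N against naive memory-sharing between the two storage extremes.

```python
def proposed_cost(t: int, K: int) -> float:
    """Download cost 1 + 1/t + ... + 1/t^(K-1) of the proposed scheme."""
    return sum(t ** -i for i in range(K))

def memory_sharing_cost(mu: float, N: int, K: int) -> float:
    """Linear interpolation between mu = 1/N (cost K) and mu = 1 (cost of
    full replication), the baseline the abstract compares against."""
    alpha = (mu - 1 / N) / (1 - 1 / N)
    return (1 - alpha) * K + alpha * proposed_cost(N, K)

N, K, t = 4, 3, 2                      # mu = t/N = 0.5
mu = t / N
print(proposed_cost(t, K))             # 1.75
print(memory_sharing_cost(mu, N, K))   # 2.4375 -> strictly worse
```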
  3. We consider the problem of covering multiple submodular constraints. Given a finite ground set N, a weight function $$w: N \rightarrow \mathbb{R}_+$$, r monotone submodular functions $$f_1, f_2, \ldots, f_r$$ over N, and requirements $$k_1, k_2, \ldots, k_r$$, the goal is to find a minimum-weight subset $$S \subseteq N$$ such that $$f_i(S) \ge k_i$$ for $$1 \le i \le r$$. We refer to this problem as Multi-Submod-Cover; it was recently considered by Har-Peled and Jones (2018, arXiv:1808.03260), who were motivated by an application in geometry. Even with $$r = 1$$, Multi-Submod-Cover generalizes the well-known Submodular Set Cover problem (Submod-SC), and it can also be easily reduced to Submod-SC. A simple greedy algorithm gives an $$O(\log(kr))$$ approximation, where $$k = \sum_i k_i$$, and this ratio cannot be improved in the general case. In this paper, motivated by several concrete applications, we consider two ways to improve upon the approximation given by the greedy algorithm. First, we give a bicriteria approximation algorithm for Multi-Submod-Cover that covers each constraint to within a factor of $$(1 - 1/e - \varepsilon)$$ while incurring an approximation of $$O(\frac{1}{\varepsilon}\log r)$$ in the cost. Second, we consider the special case when each $$f_i$$ is obtained from a truncated coverage function and obtain an algorithm that generalizes previous work on partial set cover (Partial-SC), covering integer programs (CIPs), and multiple vertex cover constraints (Bera et al., Theoret Comput Sci 555:2-8, 2014). Both these algorithms are based on mathematical programming relaxations that avoid the limitations of the greedy algorithm. We demonstrate the implications of our algorithms and related ideas for several applications ranging from geometric covering problems to clustering with outliers. Our work highlights the utility of the high-level model and the lens of submodularity in addressing this class of covering problems.
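
For concreteness, here is a small Python sketch (our own, restricted to the truncated-coverage special case mentioned above, and not the paper's LP-based algorithms) of the greedy rule this abstract references: repeatedly pick the element with the largest marginal coverage per unit weight until every constraint f_i(S) ≥ k_i is satisfied.

```python
def greedy_multi_cover(weights, covers, reqs):
    """weights[e]: cost of element e; covers[i][e]: items of constraint i
    covered by e; reqs[i]: requirement k_i (truncated coverage functions)."""
    r = len(reqs)
    covered = [set() for _ in range(r)]
    chosen, remaining = [], list(weights)
    while any(len(covered[i]) < reqs[i] for i in range(r)):
        def gain(e):
            # Total marginal coverage of e, truncated at each requirement.
            return sum(min(len(covered[i] | covers[i].get(e, set())), reqs[i])
                       - min(len(covered[i]), reqs[i]) for i in range(r))
        e = max(remaining, key=lambda x: gain(x) / weights[x])
        if gain(e) == 0:
            raise ValueError("instance is infeasible")
        chosen.append(e)
        remaining.remove(e)
        for i in range(r):
            covered[i] |= covers[i].get(e, set())
    return chosen

# Two coverage constraints over elements a, b, c; require 2 covered items each.
w = {"a": 1.0, "b": 1.0, "c": 3.0}
cov = [{"a": {1, 2}, "b": {2}, "c": {1, 2, 3}},
       {"a": {1}, "b": {1, 2}, "c": {1, 2}}]
print(greedy_multi_cover(w, cov, [2, 2]))  # ['a', 'b']
```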
  4. In the k-cut problem, we want to find the lowest-weight set of edges whose deletion breaks a given (multi)graph into k connected components. Algorithms of Karger and Stein can solve this in roughly O(n^{2k}) time. However, lower bounds from conjectures about the k-clique problem imply that Ω(n^{(1-o(1))k}) time is likely needed. Recent results of Gupta, Lee, and Li have given new algorithms for general k-cut in n^{1.98k + O(1)} time, as well as specialized algorithms with better performance for certain classes of graphs (e.g., for small integer edge weights). In this work, we resolve the problem for general graphs. We show that the Contraction Algorithm of Karger outputs any fixed k-cut of weight αλ_k with probability Ω_k(n^{-αk}), where λ_k denotes the minimum k-cut weight. This also gives an extremal bound of O_k(n^k) on the number of minimum k-cuts and an algorithm to compute λ_k with roughly n^k polylog(n) runtime. Both are tight up to lower-order factors, with the algorithmic lower bound assuming hardness of max-weight k-clique. The first main ingredient in our result is an extremal bound on the number of cuts of weight less than 2λ_k/k, using the Sunflower lemma. The second ingredient is a fine-grained analysis of how the graph shrinks, and how the average degree evolves, in the Karger process.
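
The Contraction Algorithm analyzed in this work is simple to state. Below is a minimal Python sketch of the textbook version for k-cut on an unweighted multigraph (the paper's contribution is the analysis, not this algorithm): contract uniformly random edges until k super-vertices remain, and output the edges crossing the resulting partition.

```python
import random

def karger_k_cut(edges, n, k, seed=None):
    """Contract random edges of a multigraph on vertices 0..n-1 until k
    super-vertices remain; return the surviving (crossing) edges."""
    rng = random.Random(seed)
    parent = list(range(n))
    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    components, live = n, list(edges)
    while components > k and live:
        u, v = rng.choice(live)        # uniform over non-self-loop edges
        parent[find(u)] = find(v)      # contract u-v into one super-vertex
        components -= 1
        # Drop edges that became self-loops inside a super-vertex.
        live = [(a, b) for (a, b) in live if find(a) != find(b)]
    return live                        # edges crossing the k-way partition

# A 6-cycle; any 3-cut of a cycle must cut at least 3 edges.
cycle = [(i, (i + 1) % 6) for i in range(6)]
print(karger_k_cut(cycle, 6, 3, seed=1))
```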
  5. We consider the problem of designing sublinear-time algorithms for estimating the cost of a minimum metric traveling salesman (TSP) tour. Specifically, given access to an n × n distance matrix D that specifies pairwise distances between n points, the goal is to estimate the TSP cost by performing only sublinear (in the size of D) queries. For the closely related problem of estimating the weight of a metric minimum spanning tree (MST), it is known that for any ε > 0, there exists an Õ(n/ε^{O(1)})-time algorithm that returns a (1 + ε)-approximate estimate of the MST cost. This result immediately implies an Õ(n/ε^{O(1)})-time algorithm to estimate the TSP cost to within a (2 + ε) factor for any ε > 0. However, no o(n^2)-time algorithms are known that approximate metric TSP to a factor strictly better than 2. On the other hand, there were also no known barriers ruling out the existence of (1 + ε)-approximate estimation algorithms for metric TSP with Õ(n) time for any fixed ε > 0. In this paper, we make progress on both algorithms and lower bounds for estimating metric TSP cost. On the algorithmic side, we first consider the graphic TSP problem, where the metric D corresponds to shortest-path distances in a connected unweighted undirected graph. We show that there exists an Õ(n)-time algorithm that estimates the cost of graphic TSP to within a factor of (2 - ε₀) for some ε₀ > 0. This is the first sublinear cost-estimation algorithm for graphic TSP that achieves an approximation factor less than 2. We also consider another well-studied special case of metric TSP, namely (1, 2)-TSP, where all distances are either 1 or 2, and give an Õ(n^{1.5})-time algorithm to estimate the optimal cost to within a factor of 1.625. Our estimation algorithms for graphic TSP as well as for (1, 2)-TSP naturally lend themselves to Õ(n)-space streaming algorithms that give an 11/6-approximation for graphic TSP and a 1.625-approximation for (1, 2)-TSP. These results motivate the natural question of whether, analogously to metric MST, (1 + ε)-approximate estimates can be obtained for graphic TSP and (1, 2)-TSP using Õ(n) queries for any ε > 0. We answer this question in the negative: there exists an ε₀ > 0 such that any algorithm that estimates the cost of graphic TSP (or (1, 2)-TSP) to within a (1 + ε₀) factor necessarily requires Ω(n^2) queries. This lower bound highlights a sharp separation between the metric MST and metric TSP problems. Similarly to many classical approximation algorithms for TSP, our sublinear-time estimation algorithms utilize subroutines for estimating the size of a maximum matching in the underlying graph. We show that this is not merely an artifact of our approach: for any ε > 0, any algorithm that estimates the cost of graphic TSP or (1, 2)-TSP to within a (1 + ε) factor can also be used to estimate the size of a maximum matching in a bipartite graph to within an εn additive error. This connection allows us to translate known lower bounds for matching-size estimation in various models to similar lower bounds for metric TSP cost estimation.
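
The (2 + ε) baseline mentioned in this abstract rests on the folklore inequality MST ≤ TSP ≤ 2·MST in any metric. The sketch below (our own illustration, using an exact, non-sublinear MST computation via Prim's algorithm) shows the reduction; in the abstract's setting, a sublinear (1 + ε)-approximate MST estimator would slot in for mst_cost.

```python
def mst_cost(D):
    """Exact MST weight of a metric given as a full distance matrix
    (Prim's algorithm); a sublinear estimator would replace this."""
    n = len(D)
    in_tree, best, total = {0}, [D[0][j] for j in range(n)], 0.0
    for _ in range(n - 1):
        j = min((v for v in range(n) if v not in in_tree),
                key=lambda v: best[v])
        total += best[j]
        in_tree.add(j)
        for m in range(n):
            best[m] = min(best[m], D[j][m])
    return total

def tsp_estimate(D):
    # Doubling the MST gives an upper bound on the optimal tour cost,
    # and the optimal tour costs at least the MST: a factor-2 estimate.
    return 2.0 * mst_cost(D)

# Four points on a line at 0, 1, 2, 3 (metric distances).
pts = [0, 1, 2, 3]
D = [[abs(a - b) for b in pts] for a in pts]
print(mst_cost(D), tsp_estimate(D))  # 3.0 6.0; the optimal tour costs 6.0
```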