Given a mixture of two populations of coins, “positive” coins that each have an unknown and potentially different bias ≥ 1/2 + ∆ and “negative” coins with bias ≤ 1/2 − ∆, we consider the task of estimating the fraction ρ of positive coins to within additive error ε. We achieve an upper and lower bound of Θ((ρ/ε²∆²) log(1/δ)) samples for a 1 − δ probability of success, where, crucially, our lower bound applies to all fully-adaptive algorithms. Thus, our sample complexity bounds have tight dependence on every relevant problem parameter. A crucial component of our lower bound proof is a decomposition lemma (Lemma 5.2) showing how to assemble partially-adaptive bounds into a fully-adaptive bound, which may be of independent interest: though we invoke it for the special case of Bernoulli random variables (coins), it applies to general distributions. We present simulation results to demonstrate the practical efficacy of our approach for realistic problem parameters in crowdsourcing applications, focusing on the “rare events” regime where ρ is small. The fine-grained adaptive flavor of both our algorithm and lower bound contrasts with much previous work in distributional testing and learning.
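As a rough illustration of the problem setup only (not the paper’s fully-adaptive algorithm), the Python sketch below simulates the estimation task with a naive non-adaptive baseline: sample coins uniformly, flip each a fixed number of times on the order of 1/∆², label a coin positive if its empirical bias exceeds 1/2, and report the fraction so labeled. The function name `simulate_estimate_rho` and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_estimate_rho(n_coins=10_000, rho=0.05, delta=0.1,
                          flips_per_coin=200, n_sampled=2_000, seed=0):
    """Toy non-adaptive baseline for the positive-fraction estimation task.

    This is NOT the paper's adaptive algorithm; it only illustrates the
    problem: a rho-fraction of coins has bias >= 1/2 + delta, the rest has
    bias <= 1/2 - delta, and we estimate rho from coin flips.
    """
    rng = np.random.default_rng(seed)
    # Ground-truth population at the extreme of the promise:
    # positive coins have bias 1/2 + delta, negative coins 1/2 - delta.
    is_positive = rng.random(n_coins) < rho
    biases = np.where(is_positive, 0.5 + delta, 0.5 - delta)
    # Sample coins uniformly (with replacement) and flip each one
    # flips_per_coin times (non-adaptively).
    sampled = rng.integers(0, n_coins, size=n_sampled)
    heads = rng.binomial(flips_per_coin, biases[sampled])
    # Classify each sampled coin by thresholding its empirical bias at 1/2,
    # then report the fraction classified as positive.
    labels = heads / flips_per_coin > 0.5
    return labels.mean(), is_positive.mean()

if __name__ == "__main__":
    estimate, truth = simulate_estimate_rho()
    print(f"estimated rho = {estimate:.4f}, true rho = {truth:.4f}")
```

In the “rare events” regime where ρ is small, most flips in this baseline are spent confirming that negative coins are negative; the point of the adaptive approach described in the abstract is to avoid exactly that waste.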