Title: Time and Space Optimal Counting in Population Protocols
This work concerns the general issue of combined optimality in terms of time and space complexity. In this context, we study the problem of counting resource-limited, passively mobile nodes in the model of population protocols, in which space complexity is crucial. The counted nodes are memory-limited anonymous devices (called agents) communicating asynchronously in pairs (according to a fairness condition). Moreover, we assume that these agents are prone to failures, so they cannot be correctly initialized. This study considers two classical fairness conditions, and for each we investigate the issue of time optimality of (exact) counting given optimal space. First, with randomly interacting agents (probabilistic fairness), we present a "non-guessing" time optimal protocol of O(n log n) expected interactions given an optimal space of only one bit (for a population of size n), and we prove its time optimality. Then, under weak fairness (where every pair of agents interacts infinitely often), we show that a space optimal (semi-uniform) solution cannot converge faster than in Ω(2^n) time, in terms of non-null transitions (i.e., transitions that affect the states of the interacting agents). This result, together with the time complexity analysis of an already known space optimal protocol, shows that the latter is also optimal in time (given the optimal space constraints).
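To make the interaction model concrete, below is a minimal Python sketch (hypothetical, not from the paper) of probabilistic fairness: at each step, a pair of distinct agents is drawn uniformly at random. The transition rule is a toy one-bit epidemic rather than the paper's counting protocol; it only illustrates why Θ(n log n) interactions is the natural time scale for one-bit agents.

    import math
    import random

    def one_bit_epidemic(n, seed=0):
        # Probabilistic fairness: each step draws a pair of distinct
        # agents uniformly at random; each agent holds a single bit.
        rng = random.Random(seed)
        state = [0] * n
        state[0] = 1                      # one initially "informed" agent
        informed, interactions = 1, 0
        while informed < n:
            i, j = rng.sample(range(n), 2)
            interactions += 1
            # Toy transition: an informed agent informs its partner.
            if state[i] != state[j]:
                state[i] = state[j] = 1
                informed += 1
        return interactions

    n = 1000
    # Epidemic spread takes Theta(n log n) interactions in expectation,
    # matching the time scale of the counting protocol in the abstract.
    print(one_bit_epidemic(n), "interactions; n*ln(n) ~=", round(n * math.log(n)))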
Award ID(s):
1650596 1637385
NSF-PAR ID:
10026086
Author(s) / Creator(s):
Date Published:
Journal Name:
Leibniz International Proceedings in Informatics
Volume:
70
ISSN:
1868-8969
Page Range / eLocation ID:
13:1-13:17
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Attiya, Hagit (Ed.)
    The standard population protocol model assumes that when two agents interact, each observes the entire state of the other. We initiate the study of message complexity for population protocols, where the state of an agent is divided into an externally visible message and an internal component, and only the message can be observed by the other agent in an interaction. We consider the case of O(1) message complexity. When time is unrestricted, we obtain an exact characterization of the stably computable predicates based on the number of internal states s(n): if s(n) = o(n), then the protocol computes a semilinear predicate (unlike the original model, which can compute non-semilinear predicates with s(n) = O(log n)); otherwise, it computes a predicate decidable by a nondeterministic O(n log s(n))-space-bounded Turing machine. We then consider time complexity, introducing novel O(polylog(n)) expected time protocols for junta/leader election and general-purpose broadcast, correct with high probability, and for approximate and exact population size counting, correct with probability 1. Finally, we show that the main constraint on the power of bounded-message-size protocols is the size of the internal states: with unbounded internal states, any computable function can be computed with probability 1 in the limit by a protocol that uses only one-bit messages.
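    To illustrate the restricted-observation model, here is a hypothetical Python sketch in which each agent's state is split into a visible message and a hidden internal component; a transition sees the agent's own full state but only the partner's message. The rule shown is a toy one-bit broadcast, not one of the paper's protocols, and the names (delta, run) are illustrative.

        import random

        def delta(own_state, partner_msg):
            # own_state = (message, internal); only the partner's message
            # is observable, never the partner's internal component.
            msg, internal = own_state
            # Toy rule: absorb the incoming one-bit message into both parts.
            return (msg | partner_msg, internal | partner_msg)

        def run(n, seed=1):
            rng = random.Random(seed)
            states = [(0, 0)] * n
            states[0] = (1, 1)            # one initial broadcaster
            interactions = 0
            while any(s != (1, 1) for s in states):
                i, j = rng.sample(range(n), 2)
                mi, mj = states[i][0], states[j][0]
                states[i], states[j] = delta(states[i], mj), delta(states[j], mi)
                interactions += 1
            return interactions

        print(run(100), "interactions to broadcast with one-bit messages")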
  2. We consider the problem of fairly allocating a set of indivisible goods among n agents. Various fairness notions have been proposed within the rapidly growing field of fair division, but the Nash social welfare (NSW) serves as a focal point. In part, this follows from the 'unreasonable' fairness guarantees it provides, in the sense that a max NSW allocation meets multiple other fairness metrics simultaneously, all while satisfying a standard economic concept of efficiency, Pareto optimality. However, existing approximation algorithms fail to satisfy all of the remarkable fairness guarantees offered by a max NSW allocation, instead targeting only the NSW objective itself. We address this issue by presenting, in strongly polynomial time, an allocation that is simultaneously a 2-approximation to the max NSW, Prop-1, 1/(2n)-MMS, and Pareto optimal. Our techniques are based on a market interpretation of a fractional max NSW allocation. We present novel definitions of fairness concepts in terms of market prices, and design a new scheme to round a market equilibrium into an integral allocation in a way that provides most of the fairness properties of an integral max NSW allocation.
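    For reference, the objective and the fairness notions named above have standard definitions; a sketch in LaTeX, writing u_i for agent i's valuation, M for the set of goods, and A = (A_1, ..., A_n) for an allocation:

        \[
          \mathrm{NSW}(A) = \Bigl(\prod_{i=1}^{n} u_i(A_i)\Bigr)^{1/n},
          \qquad
          \mathrm{MMS}_i = \max_{(P_1,\dots,P_n)\ \text{partitions}\ M}\ \min_{1 \le k \le n} u_i(P_k).
        \]

    An allocation is Prop-1 if for every agent i there is a good g \notin A_i with u_i(A_i \cup \{g\}) \ge u_i(M)/n, and it is 1/(2n)-MMS if u_i(A_i) \ge \mathrm{MMS}_i/(2n) for every i.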
  3. Population protocols are a popular model of distributed computing, in which randomly interacting agents with little computational power cooperate to jointly perform computational tasks. Inspired by developments in molecular computation, and in particular DNA computing, recent algorithmic work has focused on the complexity of solving simple yet fundamental tasks in the population model, such as leader election (which requires convergence to a single agent in a special "leader" state) and majority (in which agents must converge to a decision as to which of two possible initial states had the higher initial count). Known results point towards an inherent trade-off between the time complexity of such algorithms and the space complexity, i.e., the size of the memory available to each agent. In this paper, we explore this trade-off and provide new upper and lower bounds for majority and leader election. First, we prove a unified lower bound, which relates the space available per node to the time complexity achievable by a protocol: for instance, our result implies that any protocol solving either of these tasks for n agents using O(log log n) states must take Ω(n / polylog n) expected time. This is the first result to characterize time complexity for protocols which employ a super-constant number of states per node, and it proves that fast, poly-logarithmic running times require protocols to have relatively large space costs. On the positive side, we give algorithms showing that fast, poly-logarithmic convergence time can be achieved using O(log^2 n) space per node, in the case of both tasks. Overall, our results highlight a time complexity separation between O(log log n) and Θ(log^2 n) state space size for both majority and leader election in population protocols, and introduce new techniques, which should be applicable more broadly.
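    For a concrete feel of the constant-state end of this trade-off, here is a Python sketch of the classic four-state exact majority protocol from the population-protocols literature (strong opinions cancel, weak ones follow surviving strong agents). It is an illustration of the model only, not one of this paper's algorithms; it assumes the initial counts are not tied, and the function name is illustrative.

        import random

        def four_state_majority(num_a, num_b, seed=2):
            # Classic 4-state exact majority: strong 'A'/'B' cancel into weak
            # 'a'/'b'; a weak agent adopts the opinion of any strong partner.
            # Assumes num_a != num_b (a tie never stabilizes in this protocol).
            rng = random.Random(seed)
            pop = ['A'] * num_a + ['B'] * num_b
            rules = {('A', 'B'): ('a', 'b'), ('B', 'A'): ('b', 'a'),
                     ('A', 'b'): ('A', 'a'), ('b', 'A'): ('a', 'A'),
                     ('B', 'a'): ('B', 'b'), ('a', 'B'): ('b', 'B')}
            interactions = 0                 # counts null transitions too
            while len({s.upper() for s in pop}) > 1:   # until unanimous
                i, j = rng.sample(range(len(pop)), 2)
                if (pop[i], pop[j]) in rules:
                    pop[i], pop[j] = rules[(pop[i], pop[j])]
                interactions += 1
            return pop[0].upper(), interactions

        print(four_state_majority(60, 40))   # e.g. ('A', <interaction count>)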
  4. We study social choice rules under the utilitarian distortion framework, with an additional metric assumption on the agents' costs over the alternatives. In this approach, these costs are given by an underlying metric on the set of all agents plus alternatives. Social choice rules have access only to the ordinal preferences of agents, not the latent cardinal costs that induce them. Distortion is then defined as the ratio between the social cost (typically the sum of agent costs) of the alternative chosen by the mechanism at hand and that of the optimal alternative chosen by an omniscient algorithm. The worst-case distortion of a social choice rule is, therefore, a measure of how close it always gets to the optimal alternative without any knowledge of the underlying costs. Under this model, it has been conjectured that Ranked Pairs, the well-known weighted-tournament rule, achieves a distortion of at most 3 (Anshelevich et al. 2015). We disprove this conjecture by constructing a sequence of instances which shows that the worst-case distortion of Ranked Pairs is at least 5. Our lower bound on the worst-case distortion of Ranked Pairs matches a previously known upper bound for the Copeland rule, proving that in the worst case the simpler Copeland rule is at least as good as Ranked Pairs. Moreover, as long as we are limited to (weighted or unweighted) tournament rules, we demonstrate that randomization cannot help achieve an expected worst-case distortion of less than 3. Using the concept of approximate majorization within the distortion framework, we prove that Copeland and Randomized Dictatorship achieve low constant-factor fairness ratios (5 and 3, respectively), which is a considerable generalization of similar results for the sum of costs and single largest cost objectives. In addition to all of the above, we outline several interesting directions for further research in this space.
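    For reference, the distortion measure described above can be written out; a sketch in LaTeX, where \sigma is the profile of ordinal preferences, d ranges over metrics on agents plus alternatives that induce \sigma, and \mathrm{SC}(x, d) = \sum_i d(i, x) is the social cost:

        \[
          \mathrm{dist}(f) = \sup_{\sigma}\ \sup_{d \text{ inducing } \sigma}\
          \frac{\mathrm{SC}\bigl(f(\sigma), d\bigr)}{\min_{x} \mathrm{SC}(x, d)}.
        \]

    The result above states that for f = Ranked Pairs this quantity is at least 5, matching the previously known upper bound for Copeland.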