Title: Search on a Line by Byzantine Robots
We consider the problem of fault-tolerant parallel search on an infinite line by n robots. Starting from the origin, the robots are required to find a target at an unknown location. The robots can move with maximum speed 1 and can communicate wirelessly among themselves. However, among the n robots, there are f robots that exhibit byzantine faults. A faulty robot can fail to report the target even after reaching it, or it can make malicious claims about having found the target when in fact it has not. Given the presence of such faulty robots, the search for the target can only be concluded when the non-faulty robots have sufficient evidence that the target has been found. We aim to design algorithms that minimize the value of S_d(n, f), the time to find a target at an (unknown) distance d from the origin by n robots, f of which are faulty. We give several different algorithms whose running time depends on the ratio f/n, the density of faulty robots, and we also prove lower bounds. Our algorithms are optimal for some densities of faulty robots.
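The evidence requirement in the abstract can be made concrete with a toy sketch (illustrative only; the paper's actual algorithms and bounds are more involved). With at most f byzantine robots, f + 1 independent claims at the same location guarantee at least one honest witness. The function names and the two-group baseline below are assumptions for illustration, not the paper's scheme:

```python
def target_confirmed(claims_at_x: int, f: int) -> bool:
    # With at most f byzantine robots, f + 1 independent claims that the
    # target is at x guarantee at least one honest witness.
    return claims_at_x >= f + 1

def naive_search_time(n: int, f: int, d: float) -> float:
    # Hypothetical baseline: split the robots into two equal groups, one
    # per direction. The group that reaches the target (at unknown
    # distance d) needs at least f + 1 members for its report to be
    # trusted, which requires n >= 2 * (f + 1). At unit speed the travel
    # time is then exactly d.
    if n < 2 * (f + 1):
        raise ValueError("not enough robots for two trustworthy groups")
    return d

assert target_confirmed(3, 2)
assert not target_confirmed(2, 2)
assert naive_search_time(8, 2, 10.0) == 10.0
```

The interesting regime studied in the paper is precisely when this naive split is impossible or wasteful and the running time must depend on the density f/n.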
Award ID(s):
1813940
NSF-PAR ID:
10231398
Date Published:
Journal Name:
International Journal of Foundations of Computer Science
ISSN:
0129-0541
Page Range / eLocation ID:
1 to 19
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider evacuation of a group of n ≥ 2 autonomous mobile agents (or robots) from an unknown exit on an infinite line. The agents are initially placed at the origin of the line, can move in any direction they wish with any speed up to the maximum speed 1, and can all communicate when they are co-located. However, the agents have different wireless communication abilities: some are fully wireless and can send and receive messages at any distance; some are senders, able only to transmit messages wirelessly; and the rest are receivers, able only to receive messages wirelessly. The agents start at the same time and their communication abilities are known to each other from the start. Starting at the origin of the line, the goal of the agents is to collectively find a target/exit at an unknown location on the line while minimizing the evacuation time, defined as the time when the last agent reaches the target. We investigate the impact of such a mixed communication model on the evacuation time on an infinite line for a group of cooperating agents. In particular, we provide evacuation algorithms and analyze the resulting competitive ratio (CR) of the evacuation time for such a group of agents. If the group has two agents of two different types, we give an optimal evacuation algorithm with competitive ratio CR = 3+2√2. If there is a single sender or fully wireless agent and multiple receivers, we prove that CR ∈ [2+√5, 5], and if there are multiple senders and a single receiver or fully wireless agent, we show that CR ∈ [3, 5.681319]. Any group consisting of only senders or only receivers requires competitive ratio 9, and any other combination of agents has competitive ratio 3.
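The ratio-9 case in the last sentence is the classic single-searcher "cow-path" doubling strategy, whose worst case a short simulation recovers numerically (function name, unit-speed agent, and turning points 2^i are the standard textbook setup, not code from the paper):

```python
def doubling_search_time(d, direction, depth=60):
    """Total time for a unit-speed agent doing the classic zig-zag search
    with turning points 1, 2, 4, ... until it first reaches the target at
    distance d on side `direction` (+1 or -1)."""
    t = 0.0
    side = +1
    for i in range(depth):
        reach = 2.0 ** i
        if side == direction and reach >= d:
            return t + d            # found on the way out
        t += 2 * reach              # full excursion: out and back
        side = -side
    raise RuntimeError("depth exceeded")

# Worst case: target just past a turning point on the side just visited,
# so the searcher must complete two more excursions before finding it.
worst = max(doubling_search_time(2.0 ** i + 1e-9, +1) / (2.0 ** i + 1e-9)
            for i in range(1, 20))
print(round(worst, 3))  # → 9.0 (the ratio approaches 9 from below)
```

The mixed sender/receiver bounds in the abstract (3+2√2 ≈ 5.83 and 2+√5 ≈ 4.24) show how much communication heterogeneity improves on this ratio.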
  2. Motivated by the rise of quantum computers, existing public-key cryptosystems are expected to be replaced by post-quantum schemes in the next decade in billions of devices. To facilitate the transition, NIST is running a standardization process which is currently in its final round. Only three digital signature schemes are left in the competition, among which Dilithium and Falcon are the ones based on lattices. Besides security and performance, significant attention has been given to resistance against implementation attacks that target side-channel leakage or fault injection response. Classical fault attacks on signature schemes use pairs of faulty and correct signatures to recover the secret key, which only works on deterministic schemes. To counter such attacks, Dilithium offers a randomized version which makes each signature unique, even when signing identical messages. In this work, we introduce a novel Signature Correction Attack which applies not only to the deterministic version but also to the randomized version of Dilithium, and is effective even on constant-time implementations using AVX2 instructions. The Signature Correction Attack exploits the mathematical structure of Dilithium to recover the secret key bits by using faulty signatures and the public key. It works for any fault mechanism which can induce single bit flips. For demonstration, we use Rowhammer-induced faults. Thus, our attack does not require any physical access or special privileges, and hence could also be implemented on shared cloud servers. Using the Rowhammer attack, we inject bit flips into the secret key s1 of Dilithium, which results in incorrect signatures being generated by the signing algorithm. Since we can find the correct signature using our Signature Correction algorithm, we can use the difference between the correct and incorrect signatures to infer the location and value of the flipped bit without needing a correct and faulty pair.
To quantify the reduction in the security level, we perform a thorough classical and quantum security analysis of Dilithium and successfully recover 1,851 bits out of 3,072 bits of the secret key s1 for security level 2. Fully recovered bits are used to reduce the dimension of the lattice, whereas partially recovered coefficients are used to reduce the norm of the secret key coefficients. Further analysis for both primal and dual attacks shows that the lattice strength against quantum attackers is reduced from 2^128 to 2^81, while the strength against classical attackers is reduced from 2^141 to 2^89. Hence, the Signature Correction Attack may be employed to achieve a practical attack on Dilithium (security level 2) as proposed in Round 3 of the NIST post-quantum standardization process.
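The core inference step, recovering a flipped bit's position and value from the difference between a correct and a faulty signature, can be illustrated on a deliberately simplified linear toy. This is not Dilithium's signing equation; `toy_sig` and `locate_flip` are invented for illustration, and only the modulus is borrowed for flavor:

```python
q = 8380417          # the Dilithium modulus, used here only for flavor

# Toy stand-in for a linear signature component: sig = (c * s) mod q.
# A single bit flip at position k of s changes sig by ±(c << k) mod q,
# so comparing a correct and a faulty "signature" pinpoints the flip.
def toy_sig(c, s):
    return (c * s) % q

def locate_flip(c, good, faulty, nbits=16):
    diff = (faulty - good) % q
    for k in range(nbits):
        step = (c << k) % q
        if diff == step:
            return k, 0      # bit k flipped 0 -> 1
        if diff == (-step) % q:
            return k, 1      # bit k flipped 1 -> 0
    return None

c, s = 12345, 0b1011001110001101
k = 7
s_faulty = s ^ (1 << k)
assert locate_flip(c, toy_sig(c, s), toy_sig(c, s_faulty)) == (k, (s >> k) & 1)
```

In the real attack the relation involves Dilithium's polynomial arithmetic and requires the Signature Correction algorithm to reconstruct the correct signature first, but the "difference reveals the flip" principle is the same.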
  3. Accurately tracking dynamic targets relies on robots accounting for uncertainties in their own states to share information and maintain safety. The problem becomes even more challenging when there are an unknown and time-varying number of targets in the environment. In this paper we address this problem by introducing four new distributed algorithms that allow large teams of robots to: i) run the prediction and ii) update steps of a distributed recursive Bayesian multi-target tracker, iii) determine the set of local neighbors that must exchange data, and iv) exchange data in a consistent manner. All of these algorithms account for a bounded level of localization uncertainty in the robots by leveraging our recent introduction of the convex uncertainty Voronoi (CUV) diagram, which extends the traditional Voronoi diagram to account for localization uncertainty. The CUV diagram introduces a tessellation over the environment, which we use in this work both to distribute the multi-target tracker and to make control decisions about where to search next. We examine the efficacy of our method via a series of simulations and compare the results to those of our previous work, which assumed perfect localization.
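The prediction and update steps in (i) and (ii) follow the standard recursive Bayesian pattern; a minimal single-target, 1-D grid version makes the pattern concrete (an illustrative stand-in for the paper's distributed multi-target tracker; all parameter values are assumed):

```python
import numpy as np

belief = np.full(20, 1 / 20)            # uniform prior over 20 cells

def predict(belief, stay=0.6):
    # Motion model: the target stays put w.p. `stay`, otherwise moves to
    # one of the two neighboring cells with equal probability.
    moved = 0.5 * (np.roll(belief, 1) + np.roll(belief, -1))
    return stay * belief + (1 - stay) * moved

def update(belief, cell, detected, p_d=0.9, p_fa=0.05):
    # Measurement model: a sensor looks at one cell; p_d is the detection
    # probability, p_fa the false-alarm probability elsewhere.
    like = np.full_like(belief, p_fa if detected else 1 - p_fa)
    like[cell] = p_d if detected else 1 - p_d
    post = like * belief
    return post / post.sum()            # renormalize

belief = update(predict(belief), cell=7, detected=True)
assert belief.argmax() == 7             # mass concentrates on the detection
```

The paper's contribution is running these two steps, plus neighbor selection and data exchange, across many robots with uncertain poses rather than on one centralized filter.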
  4. Presented at the Workshop on Heterogeneous Multi-Robot Task Allocation and Coordination. The authors recently developed a distributed algorithm to enable a team of homogeneous robots to search for and track an unknown and time-varying number of dynamic targets. This algorithm combined a distributed version of the PHD filter (for multi-target tracking) with Lloyd's algorithm to drive the motion of the robots. In this paper we extend this previous work to allow a heterogeneous team of ground and aerial robots to perform the search and tracking tasks in a coordinated manner. Both types of robots are equipped with sensors that have a finite field of view and may receive both false positive and false negative detections. The aerial robots may vary the size of their sensor field of view (FoV) by changing elevation. This increase in the FoV coincides with a decrease in the accuracy and reliability of the sensor. The ground robots maintain the target tracking information while the aerial robots provide additional sensor coverage. We develop two new distributed algorithms to provide filter updates and to make control decisions in this heterogeneous team. Both algorithms only require robots to communicate with nearby robots and use minimal bandwidth. We demonstrate the efficacy of our approach through a series of simulated experiments which show that the heterogeneous teams are able to achieve more accurate tracking in less time than our previous work.
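The Lloyd's-algorithm motion component carried over from the earlier work can be sketched in a few lines: each robot moves to the weighted centroid of its Voronoi cell, with a density standing in for the estimated target intensity (a generic textbook step with invented names and a uniform density; not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(400, 2))   # sample points of the workspace
density = np.ones(len(pts))               # uniform stand-in for PHD intensity
robots = np.array([[2.0, 2.0], [8.0, 8.0], [2.0, 8.0]])

def lloyd_step(robots, pts, density):
    # Assign each sample point to its nearest robot (Voronoi assignment),
    # then move each robot to the weighted centroid of its cell.
    d2 = ((pts[:, None, :] - robots[None, :, :]) ** 2).sum(-1)
    owner = d2.argmin(axis=1)
    new = robots.copy()
    for i in range(len(robots)):
        m = owner == i
        if m.any():
            w = density[m]
            new[i] = (pts[m] * w[:, None]).sum(0) / w.sum()
    return new

robots = lloyd_step(robots, pts, density)
assert robots.shape == (3, 2)
```

In the paper the density comes from the distributed PHD filter, so robots are drawn toward regions of high expected target count rather than a uniform workspace.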
  5. Context. Stars evolving along the asymptotic giant branch (AGB) can become carbon rich in the final part of their evolution. The detailed description of their spectra has led to the definition of several spectral types: N, SC, J, and R. To date, differences among them have been partially established only on the basis of their chemical properties. Aims. An accurate determination of the luminosity function (LF) and kinematics, together with their chemical properties, is extremely important for testing the reliability of theoretical models and establishing on a solid basis the stellar population membership of the different carbon star types. Methods. Using Gaia Data Release 2 (Gaia DR2) astrometry, we determine the LF and kinematic properties of a sample of 210 carbon stars of different spectral types in the solar neighbourhood with parallaxes measured to better than 20%. Their spatial distribution and velocity components are also derived. Furthermore, the use of the infrared Wesenheit function allows us to identify the different spectral types in a Gaia-2MASS diagram. Results. We find that the combined LF of N- and SC-type stars is consistent with a Gaussian distribution peaking at M_bol ∼ −5.2 mag. The resulting LF, however, shows tails at lower and higher luminosities more extended than those previously found, indicating that AGB carbon stars with solar metallicity may reach M_bol ∼ −6.0 mag. This contrasts with the narrower LF derived for Galactic carbon Miras in previous studies. We find that J-type stars are about half a magnitude fainter on average than N- and SC-type stars, while R-hot stars are half a magnitude brighter than previously found, although still several magnitudes fainter than the other carbon types. Part of these differences is due to systematically lower parallaxes measured by Gaia DR2 with respect to Hipparcos values, in particular for sources with parallax ϖ < 1 mas.
The Galactic spatial distribution and velocity components of the N-, SC-, and J-type stars are very similar, while about 30% of the R-hot stars in the sample are located at distances greater than ∼500 pc from the Galactic plane and show a significant drift with respect to the local standard of rest. Conclusions. The LF derived for N- and SC-type stars in the solar neighbourhood fully agrees with the expected luminosity of stars of 1.5−3 M⊙ on the AGB. On a theoretical basis, the existence of an extended low-luminosity tail would require a contribution of extrinsic low-mass carbon stars, while the high-luminosity tail would imply that stars with masses up to ∼5 M⊙ may become carbon stars on the AGB. J-type stars differ significantly from the N- and SC-types not only in their chemical composition but also in their LF, which reinforces the idea that these carbon stars belong to a different type whose origin is still unknown. The derived luminosities of R-hot stars mean that it is unlikely that these stars are in the red clump, as previously claimed. On the other hand, the derived spatial distribution and kinematic properties, together with their metallicity values, indicate that most of the N-, SC-, and J-type stars belong to the thin disc population, while a significant fraction of R-hot stars show characteristics compatible with the thick disc.
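The luminosities above rest on inverting Gaia parallaxes; a minimal sketch of that step, including the abstract's 20% parallax-quality cut, uses only the standard distance-modulus relation (function name and sample values are illustrative):

```python
import math

def abs_magnitude(m_app, parallax_mas, parallax_err_mas):
    # Distance and absolute magnitude from a parallax in milliarcseconds.
    # The 20% cut mirrors the sample selection quoted in the abstract;
    # beyond it, 1/parallax is a poor distance estimator.
    if parallax_err_mas / parallax_mas > 0.20:
        raise ValueError("parallax uncertainty worse than 20%")
    d_pc = 1000.0 / parallax_mas                 # distance in parsec
    return m_app - 5 * math.log10(d_pc) + 5      # distance modulus

# A star of apparent magnitude 7.0 with a 2 mas parallax (500 pc):
print(round(abs_magnitude(7.0, 2.0, 0.2), 2))   # → -1.49
```

The abstract's point about systematically lower Gaia DR2 parallaxes than Hipparcos for ϖ < 1 mas matters precisely because d_pc, and hence M, is so sensitive to small parallax shifts at that level.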