Title: Byzantine Agreement with Optimal Resilience via Statistical Fraud Detection
Since the mid-1980s it has been known that Byzantine Agreement can be solved with probability 1 asynchronously, even against an omniscient, computationally unbounded adversary that can adaptively corrupt up to f < n/3 parties. Moreover, the problem is insoluble with f ≥ n/3 corruptions. However, Bracha's [13] 1984 protocol (see also Ben-Or [8]) achieved f < n/3 resilience at the cost of exponential expected latency 2^{Θ(n)}, a bound that has never been improved in this model with f = ⌊(n-1)/3⌋ corruptions.

In this article, we prove that Byzantine Agreement in the asynchronous, full-information model can be solved with probability 1 against an adaptive adversary that can corrupt f < n/3 parties, while incurring only polynomial latency with high probability. Our protocol follows an earlier polynomial-latency protocol of King and Saia [33,34], which had suboptimal resilience, namely f ≈ n/10^9.

Resilience f = ⌊(n-1)/3⌋ is uniquely difficult, as this is the point at which the influences of the Byzantine and honest players are of roughly equal strength. The core technical problem we solve is to design a collective coin-flipping protocol that eventually lets us flip a coin with an unambiguous outcome. In the beginning, the influence of the Byzantine players is too powerful to overcome, and they can essentially fix the coin's behavior at will. We guarantee that after just a polynomial number of executions of the coin-flipping protocol, either (a) the Byzantine players fail to fix the behavior of the coin (thereby ending the game), or (b) we can "blacklist" players such that the blacklisting rate for Byzantine players is at least as large as the blacklisting rate for good players. The blacklisting criterion is based on a simple statistical test of fraud detection.
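To make the flavor of such a statistical test concrete, here is a minimal Python sketch of a deviation-based blacklisting rule. The ±1 contribution model, the Hoeffding threshold, and all names below are illustrative assumptions, not the paper's actual criterion.

import math

def blacklist(contributions, confidence=1e-6):
    """Flag players whose cumulative coin-flip contribution deviates
    further from zero than honest (uniformly random ±1) play would allow.

    contributions: dict mapping player id -> list of ±1 values, one per
    execution of the coin-flipping protocol.
    """
    flagged = set()
    for player, values in contributions.items():
        t = len(values)
        if t == 0:
            continue
        # Hoeffding: for t independent uniform ±1 values,
        # Pr[|sum| >= s] <= 2 * exp(-s^2 / (2 * t)).
        threshold = math.sqrt(2 * t * math.log(2 / confidence))
        if abs(sum(values)) > threshold:
            flagged.add(player)
    return flagged

In this toy model, an honest player's running sum exceeds the threshold only with probability at most `confidence`, so a coalition that persistently biases the coin trips the test at least as fast as honest players do, in the spirit of guarantee (b) above.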
Award ID(s):
2221980
PAR ID:
10522104
Author(s) / Creator(s):
; ;
Publisher / Repository:
ACM
Date Published:
Journal Name:
Journal of the ACM
Volume:
71
Issue:
2
ISSN:
0004-5411
Page Range / eLocation ID:
1 to 37
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider the well-known Lieb-Liniger (LL) model for N bosons interacting pairwise on the line via the δ potential in the mean-field scaling regime. Assuming suitable asymptotic factorization of the initial wave functions and convergence of the microscopic energy per particle, we show that the time-dependent reduced density matrices of the system converge in trace norm to the pure states given by the solution to the one-dimensional cubic nonlinear Schrödinger equation (NLS), with an explicit rate of convergence. In contrast to previous work [3], which relies on the formalism of second quantization and coherent states and gives no explicit rate, our proof is based on the counting method of Pickl [65,66,67] and Knowles and Pickl [44]. To overcome difficulties stemming from the singularity of the δ potential, we introduce a new short-range approximation argument that exploits the Hölder continuity of the N-body wave function in a single particle variable. By further exploiting the L²-subcritical well-posedness theory for the 1D cubic NLS, we can prove mean-field convergence when the limiting solution to the NLS has finite mass, but only for a very special class of N-body initial states.
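For reference, the limiting 1D cubic NLS takes the form (in a generic normalization; the coupling constant κ and the sign convention here are a standard choice, not taken from the paper)

    i ∂_t φ(t,x) = −∂_x² φ(t,x) + κ |φ(t,x)|² φ(t,x),  φ(0,·) = φ_0,

whose L²-subcritical well-posedness theory is what the last sentence of the abstract invokes.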
  2. We consider the linear third order (in time) PDE known as the SMGTJ equation, defined on a bounded domain, under the action of either Dirichlet or Neumann boundary control g. Optimal interior and boundary regularity results were given in [1], after [41], when g ∈ L²(0,T; L²(Γ)) ≡ L²(Σ), which, moreover, in the canonical case γ = 0, were expressed by the well-known explicit representation formulae of the wave equation in terms of cosine/sine operators [19], [17], [24, Vol. Ⅱ]. The interior or boundary regularity theory is however the same, whether γ = 0 or 0 ≠ γ ∈ L^∞(Ω), since γ ≠ 0 is responsible only for lower order terms. Here we exploit such cosine-operator-based explicit representation formulae to provide optimal interior and boundary regularity results with g "smoother" than L²(Σ), qualitatively by one unit, two units, etc., in the Dirichlet boundary case. To this end, we invoke the corresponding results for wave equations, as in [17]. Similarly for the Neumann boundary case, by invoking the corresponding results for the wave equation as in [22], [23], [37] for control smoother than L²(0,T; L²(Γ)), and [44] for control less regular in space than L²(Γ). In addition, we provide optimal interior and boundary regularity results when the SMGTJ equation is subject to interior point control, by invoking the corresponding wave equation results [42], [24, Section 9.8.2].
  3. Algorithms often have tunable parameters that impact performance metrics such as runtime and solution quality. For many algorithms used in practice, no parameter settings admit meaningful worst-case bounds, so the parameters are made available for the user to tune. Alternatively, parameters may be tuned implicitly within the proof of a worst-case approximation ratio or runtime bound. Worst-case instances, however, may be rare or nonexistent in practice. A growing body of research has demonstrated that a data-driven approach to parameter tuning can lead to significant improvements in performance. This approach uses a training set of problem instances sampled from an unknown, application-specific distribution and returns a parameter setting with strong average performance on the training set. We provide techniques for deriving generalization guarantees that bound the difference between the algorithm's average performance over the training set and its expected performance on the unknown distribution. Our results apply no matter how the parameters are tuned, be it via an automated or manual approach. The challenge is that for many types of algorithms, performance is a volatile function of the parameters: slightly perturbing the parameters can cause a large change in behavior. Prior research [e.g., 12, 16, 20, 62] has proved generalization bounds by employing case-by-case analyses of greedy algorithms, clustering algorithms, integer programming algorithms, and selling mechanisms. We streamline these analyses with a general theorem that applies whenever an algorithm's performance is a piecewise-constant, piecewise-linear, or, more generally, piecewise-structured function of its parameters. Our results, which are tight up to logarithmic factors in the worst case, also imply novel bounds for configuring dynamic programming algorithms from computational biology.
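As a schematic illustration, generalization guarantees of this kind typically take the following shape for a class of bounded performance functions of known pseudo-dimension. The constants and exact dependence below are generic learning theory, not the paper's theorem; all names are illustrative.

import math

def generalization_gap(m, pdim, H=1.0, delta=0.05):
    """Generic uniform-convergence-style bound: with probability >= 1 - delta
    over m training instances sampled i.i.d. from the unknown distribution,
    the gap between average training performance and expected performance of
    any parameter setting is, up to universal constants, at most this value,
    for [0, H]-bounded performance functions with pseudo-dimension pdim."""
    return H * math.sqrt((pdim * math.log(max(m, 2)) + math.log(1 / delta)) / m)

The piecewise-structure theorem the abstract describes is what supplies the pseudo-dimension bound `pdim` for a given algorithm family.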
  4. Following closely the classical works [5]-[7] by Glassey, Strauss, and Schaeffer, we present a version of the Glassey-Strauss representation for the Vlasov-Maxwell system in a 3D half-space when the boundary is a perfect conductor.
  5. The goal of motion tomography is to recover a description of a vector flow field using measurements along the trajectory of a sensing unit. In this paper, we develop a predictor-corrector algorithm designed to recover vector flow fields from trajectory data with the use of occupation kernels developed by Rosenfeld et al. [9,10]. Specifically, we use the occupation kernels as an adaptive basis; that is, the trajectories defining our occupation kernels are iteratively updated to improve the estimation in the next stage. Initial estimates are established; then, under mild assumptions such as relatively straight trajectories, convergence is proven using the Contraction Mapping Theorem. We then compare the developed method with the established method of Chang et al. [5] by defining a set of error metrics. We found that for simulated data, where a ground truth is available, our method offers a marked improvement over [5]. For a real-world example, where ground truth is not available, our results are similar to those of the established method.
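A toy Python sketch of the predictor-corrector idea under strong simplifications: one trajectory, a constant unknown drift, and a known commanded velocity. The occupation-kernel machinery of [9,10] is omitted, and every name here is illustrative.

import numpy as np

def simulate(start, command_vel, drift, T, dt=0.01):
    """Predictor: integrate the kinematics x' = command_vel + drift."""
    x = np.array(start, dtype=float)
    for _ in range(int(round(T / dt))):
        x += (command_vel + drift) * dt
    return x

def recover_drift(start, command_vel, x_observed, T, n_iters=30, lr=1.0):
    """Corrector loop: attribute the endpoint mismatch to the drift estimate.
    In this linear toy model the update map is a contraction with factor
    |1 - lr|, so the iterates converge for lr in (0, 2)."""
    start = np.asarray(start, dtype=float)
    command_vel = np.asarray(command_vel, dtype=float)
    x_observed = np.asarray(x_observed, dtype=float)
    drift = np.zeros_like(start)
    for _ in range(n_iters):
        x_pred = simulate(start, command_vel, drift, T)
        drift = drift + lr * (x_observed - x_pred) / T
    return drift

# Example: with true drift (0.3, -0.1), the observed endpoint of a unit
# commanded along (1, 0) for T = 10 from the origin is (13, -1), and
# recover_drift([0, 0], [1.0, 0.0], [13.0, -1.0], T=10.0) recovers the drift.

The contraction of the corrector map mirrors, in miniature, the Contraction Mapping Theorem argument the abstract describes; the actual method replaces the constant-drift model with an adaptive occupation-kernel basis.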