Title: Asymptotically Optimal Adversarial Strategies for the Probability Estimation Framework
The probability estimation framework (PEF) involves direct estimation of the probabilities of outcomes conditioned on measurement settings and side information. It is a powerful tool for certifying randomness in quantum nonlocality experiments. In this paper, we present a self-contained proof of the asymptotic optimality of the method. Our approach refines earlier results to allow a better characterisation of optimal adversarial attacks on the protocol. We apply these results to the (2,2,2) Bell scenario, obtaining an analytic characterisation of the optimal adversarial attacks bound by no-signalling principles, while also demonstrating the asymptotic robustness of the PEF method to deviations from expected experimental behaviour. We also study extensions of the analysis to quantum-limited adversaries in the (2,2,2) Bell scenario and no-signalling adversaries in higher (n,m,k) Bell scenarios.
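For orientation, the central object of the framework is the probability estimation factor. The following is a rough sketch of the defining condition as it commonly appears in the PEF literature; the symbols are generic placeholders rather than this paper's exact notation. For trial outcomes C, measurement settings Z, side information E, and a model \mathcal{C} of allowed joint distributions, a PEF with power \beta > 0 is a non-negative function F(CZ) satisfying

\[
\mathbb{E}_{\mu}\!\left[\, F(CZ)\, \mu(C \mid Z E)^{\beta} \right] \;\le\; 1 \qquad \text{for every } \mu \in \mathcal{C}.
\]

The running product of PEF values over trials is then used to certify a lower bound on the smooth conditional min-entropy of the outcomes given the settings and E, which is what makes the framework suitable for device-independent randomness certification.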
Award ID(s): 1839223
PAR ID: 10467309
Author(s) / Creator(s): ;
Publisher / Repository: MDPI
Date Published:
Journal Name: Entropy
Volume: 25
Issue: 9
ISSN: 1099-4300
Page Range / eLocation ID: 1291
Subject(s) / Keyword(s): device-independent quantum random number generation; quantum nonlocality; Bell inequalities; asymptotic equipartition property; min-entropy
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We classify the extreme points of a polytope of probability distributions in the (2,2,2) CHSH-Bell setting that is induced by a single Tsirelson bound. We do the same for a class of polytopes obtained from a parametrized family of multiple Tsirelson bounds interacting non-trivially. Such constructions can be applied to device-independent random number generation using the method of probability estimation factors (2018 Phys. Rev. A 98, 040304(R)). We demonstrate a meaningful improvement in certified randomness by applying the new polytopes characterized here.
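For context, these are the standard (2,2,2) definitions rather than anything specific to the item above: with two dichotomic observables per party, the CHSH expression is

\[
S \;=\; \langle A_0 B_0 \rangle + \langle A_0 B_1 \rangle + \langle A_1 B_0 \rangle - \langle A_1 B_1 \rangle,
\]

with |S| \le 2 for local hidden-variable models and |S| \le 2\sqrt{2} (the Tsirelson bound) for quantum correlations. The polytopes described above arise by intersecting the no-signalling polytope with one or more linear constraints of this Tsirelson-bound type.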
  2. Quantum threshold theorems impose hard limits on the hardware capabilities required to process quantum information. We derive tight and fundamental upper bounds on loss-tolerance thresholds in different linear-optical quantum information processing settings through an adversarial framework, taking into account the intrinsically probabilistic nature of linear-optical Bell measurements. For logical Bell state measurements—ubiquitous operations in photonic quantum information—we demonstrate analytically that linear optics can achieve the fundamental loss threshold imposed by the no-cloning theorem, even though, following the work of Lee et al. [Phys. Rev. A 100, 052303 (2019)], the constraint was widely assumed to be stricter. We spotlight the assumptions of the latter publication and find that their bound holds for a logical Bell measurement built from adaptive physical linear-optical Bell measurements. We also give an explicit, even stricter bound for nonadaptive Bell measurements.
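As background on the no-cloning threshold referred to above (a standard argument, stated here as an assumption about the intended bound rather than a quotation from the paper): if more than half of the photonic carriers of a logical qubit are lost, the environment holds the larger share of the encoding, so perfect recovery by the receiver would imply that two parties could each reconstruct the state, violating no-cloning. Any tolerable loss rate must therefore satisfy

\[
\eta_{\text{loss}} < \tfrac{1}{2},
\]

i.e., no loss-tolerance threshold can exceed 50%.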
  3. Deep Neural Networks (DNNs) have shown phenomenal success in a wide range of real-world applications. However, a concerning weakness of DNNs is that they are vulnerable to adversarial attacks. Although there exist methods to detect adversarial attacks, they often suffer from constraints on specific attack types and provide limited information to downstream systems. We specifically note that existing adversarial detectors are often binary classifiers, which differentiate between clean and adversarial examples. However, detection of adversarial examples is much more complicated than such a scenario suggests. Our key insight is that the confidence probability of detecting an input sample as an adversarial example is more useful for the system in properly taking action to resist potential attacks. In this work, we propose an innovative method for fast confidence detection of adversarial attacks based on the integrity of the sensor pattern noise embedded in input examples. Experimental results show that our proposed method is capable of providing a confidence distribution model for most popular adversarial attacks. Furthermore, the presented method can provide early attack warnings that even indicate the attack type, based on the different properties of the confidence distribution models. Since fast confidence detection is a computationally heavy task, we propose an FPGA-based hardware architecture built on a series of optimization techniques, such as incremental multi-level quantization. We realize our proposed method on an FPGA platform and achieve a high efficiency of 29.740 IPS/W with a power consumption of only 0.7626 W.
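A minimal Python sketch of one way such a sensor-pattern-noise (PRNU) based confidence score could be computed. Everything here is an illustrative assumption: the residual filter, the correlation statistic, the calibration constants, and the function names are not taken from the paper; the point is only to show a detector that outputs a confidence rather than a binary label.

```python
# Sketch (not the authors' implementation): estimate how much an input's
# sensor-pattern-noise residual deviates from a reference fingerprint,
# and report a detection confidence in (0, 1) instead of a binary flag.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Approximate the sensor noise residual as image minus a denoised copy."""
    return img - gaussian_filter(img, sigma)

def normalized_correlation(a, b):
    """Normalized cross-correlation between two residuals of equal shape."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def attack_confidence(img, reference_fingerprint, mu_clean=0.08, sigma_clean=0.02):
    """Map PRNU correlation to a confidence that the input is adversarial.

    mu_clean / sigma_clean are assumed statistics of the correlation score on
    clean inputs; in practice they would be calibrated on held-out clean data.
    A low correlation (disturbed noise pattern) yields high attack confidence.
    """
    rho = normalized_correlation(noise_residual(img), reference_fingerprint)
    z = (mu_clean - rho) / sigma_clean   # how unusually low is this correlation?
    return float(1.0 / (1.0 + np.exp(-z)))  # squash to (0, 1)

# Example usage with random stand-in data:
rng = np.random.default_rng(0)
fingerprint = noise_residual(rng.normal(size=(64, 64)))
print(attack_confidence(rng.normal(size=(64, 64)), fingerprint))
```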
  4. Machine learning models are vulnerable to both security attacks (e.g., adversarial examples) and privacy attacks (e.g., private attribute inference). We take a first step toward mitigating both security and privacy attacks while also maintaining task utility. In particular, we propose an information-theoretic framework that achieves these goals through the lens of representation learning, i.e., learning representations that are robust to both adversarial examples and attribute inference adversaries. We also derive novel theoretical results under our framework, e.g., an inherent trade-off between adversarial robustness/utility and attribute privacy, and guaranteed bounds on attribute privacy leakage against attribute inference adversaries.
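One generic way to write down an objective of this flavour, offered purely as an illustration and not as the authors' formulation: with input X, task label Y, private attribute S, and a learned representation Z_\theta, one can trade off worst-case task information against attribute leakage,

\[
\max_{\theta}\; \min_{\|\delta\| \le \epsilon}\; I\big(Z_{\theta}(X+\delta);\,Y\big) \;-\; \lambda\, I\big(Z_{\theta}(X);\,S\big),
\]

where the tension between the two mutual-information terms is the kind of robustness/utility versus attribute-privacy trade-off mentioned above.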
  5. The rise of foundation models fine-tuned on human feedback from potentially untrusted users has increased the risk of adversarial data poisoning, necessitating the study of the robustness of learning algorithms against such attacks. Existing research on provable certified robustness against data poisoning attacks primarily focuses on certifying robustness for static adversaries, who modify a fraction of the dataset used to train the model before the training algorithm is applied. In practice, particularly when learning from human feedback in an online sense, adversaries can observe and react to the learning process and inject poisoned samples that optimize adversarial objectives more effectively than if they were restricted to poisoning a static dataset once, before the learning algorithm is applied. Indeed, prior work has shown that online dynamic adversaries can be significantly more powerful than static ones. We present a novel framework for computing certified bounds on the impact of dynamic poisoning and use these certificates to design robust learning algorithms. We illustrate the framework on the mean estimation and binary classification problems and outline directions for extending it in future work.
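To make the mean-estimation setting concrete, here is a deliberately simple static-adversary baseline, not the dynamic-poisoning certificate described above: with an assumed epsilon-fraction of poisoned samples, a symmetrically trimmed mean limits how far the estimate can be dragged, and the achievable shift scales with epsilon.

```python
# Illustrative baseline only: robust mean estimation under an assumed
# epsilon-fraction of poisoned samples via symmetric trimming. This is a
# standard static-adversary technique, not the paper's dynamic certificate.
import numpy as np

def trimmed_mean(samples, eps):
    """Drop the eps-fraction largest and smallest values, then average."""
    x = np.sort(np.asarray(samples, dtype=float))
    k = int(np.floor(eps * len(x)))
    return x[k:len(x) - k].mean() if len(x) > 2 * k else x.mean()

rng = np.random.default_rng(1)
clean = rng.normal(loc=0.0, scale=1.0, size=900)
poison = np.full(100, 50.0)              # adversary injects 10% extreme points
data = np.concatenate([clean, poison])

print("plain mean  :", data.mean())                   # dragged far from 0
print("trimmed mean:", trimmed_mean(data, eps=0.1))   # stays close to 0
```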