Title: Covert Sequential Hypothesis Testing
We consider the problem of covert sequential testing, in which a legitimate party attempts to run a sequential test while evading detection by an adversary. Specifically, the legitimate party's decisions should meet prescribed risk constraints and, simultaneously, the adversary's observations induced by the test should remain indistinguishable from the observations obtained in the absence of a test. Our main result is the characterization of the risk exponent γθ, which captures the asymptotic exponential decrease of the risk with the square root of the average stopping time in the limit of low risk. An example illustrates how the covertness constraint influences the design of the sequential test.
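One way to read the main result (a schematic rendering in our own notation; see the paper for the precise statement): writing ε for the risk and E[τ] for the average stopping time, the exponent γθ governs the first-order trade-off

```latex
% Schematic statement (notation ours, not the paper's exact formulation):
% the risk \epsilon decays exponentially in the square root of the average
% stopping time \mathbb{E}[\tau], with exponent \gamma_\theta:
\lim_{\epsilon \to 0} \frac{-\log \epsilon}{\sqrt{\mathbb{E}[\tau]}}
  = \gamma_\theta,
\qquad \text{i.e.,} \qquad
\epsilon \approx e^{-\gamma_\theta \sqrt{\mathbb{E}[\tau]}}.
```

This square-root scaling mirrors the square-root law familiar from covert communication, where the covertness constraint limits how much statistical evidence can be accumulated per observation.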
Award ID(s):
1955401
PAR ID:
10312110
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of IEEE Information Theory Workshop
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Purpose: The experiment reported here compared two hypotheses for the poor statistical and artificial grammar learning often seen in children and adults with developmental language disorder (DLD; also known as specific language impairment). The procedural learning deficit hypothesis states that implicit learning of rule-based input is impaired, whereas the sequential pattern learning deficit hypothesis states that poor performance is seen only when learners must implicitly compute sequential dependencies. The current experiment tested learning of an artificial grammar that could be learned via feature activation, as observed in an associatively organized lexicon, without computing sequential dependencies; such a grammar should be learnable under the sequential pattern learning deficit hypothesis but not under the procedural learning deficit hypothesis.
    Method: Adults with DLD and adults with typical language development (TD) listened to consonant–vowel–consonant–vowel familiarization words from one of two artificial phonological grammars: Family Resemblance (two out of three features) and a control grammar (exclusive OR, in which both consonants are voiced OR both consonants are voiceless) for which no learning was predicted in either group. At test, all participants rated 32 test words according to whether they conformed to the pattern in the familiarization words.
    Results: Adults with DLD and adults with TD showed equal and robust learning of the Family Resemblance grammar, accepting significantly more conforming than nonconforming test items. Both groups familiarized with the Family Resemblance grammar also outperformed those familiarized with the OR grammar, which, as predicted, was learned by neither group.
    Conclusion: Although adults and children with DLD often underperform their peers with TD on statistical and artificial grammar learning tasks, poor performance appears to be tied to the implicit computation of sequential dependencies, as predicted by the sequential pattern learning deficit hypothesis.
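A minimal sketch of the two grammar rules as membership checks (the voiced-consonant inventory and the three Family Resemblance features are our own illustrative assumptions, since the abstract does not enumerate them):

```python
# Toy membership checks for the two artificial phonological grammars described
# above. The OR rule follows the abstract (both consonants voiced OR both
# voiceless); the three features used for Family Resemblance are hypothetical
# stand-ins, since the abstract does not list them.

VOICED = set("bdgvzjmnlrw")  # rough voiced-consonant inventory (assumption)

def conforms_or(word: str) -> bool:
    """A CVCV word conforms if its two consonants agree in voicing."""
    c1, c2 = word[0], word[2]
    return (c1 in VOICED) == (c2 in VOICED)

def conforms_family_resemblance(word: str, prototype: str = "badi") -> bool:
    """Conforms if the word matches a prototype on at least 2 of 3
    (hypothetical) features: first consonant, first vowel, second consonant."""
    positions = (0, 1, 2)  # C1, V1, C2 of a CVCV word
    matches = sum(word[i] == prototype[i] for i in positions)
    return matches >= 2

print(conforms_or("badu"), conforms_or("batu"))  # True False
```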
  2. Traditional accelerated life test plans are typically based on the C-optimality criterion, minimizing the variance of an estimated quantile of the lifetime distribution. These methods rely on specified planning values for the model parameters, which are usually unknown before the actual tests, and misspecified planning values can lead to suboptimal designs. In this paper, we propose a sequential design strategy for life test plans that considers dual objectives. In the early stage of the sequential experiment, we allocate more design points based on D-optimality, to quickly gain precision in the estimated model parameters. In the later stage, we allocate more observations based on C-optimality, to maximize the precision of the estimated quantile of the lifetime distribution. We compare the proposed sequential design strategy with existing test plans based on a single criterion and illustrate the new method with an example on fatigue testing of polymer composites.
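A schematic of the dual-objective allocation, under a deliberately simplified linear location model for log-lifetime (our simplification, not the paper's model; the stress levels, switch point, and use condition x_use are illustrative):

```python
# Schematic two-stage sequential allocation: mu(x) = b0 + b1*x for
# log-lifetime at stress x, information matrix M = sum_i f(x_i) f(x_i)^T
# with f(x) = (1, x). Stage 1 adds runs D-optimally (maximize det M);
# stage 2 adds runs C-optimally (minimize the variance proxy c^T M^{-1} c
# for the quantile estimate at the use stress, with c = f(x_use)).
import numpy as np

def f(x):
    return np.array([1.0, x])

def allocate(candidates, n_stage1, n_stage2, x_use):
    M = 1e-6 * np.eye(2)          # tiny ridge so M is invertible at the start
    design = []
    c = f(x_use)
    for step in range(n_stage1 + n_stage2):
        if step < n_stage1:       # D-optimality: grow det(M) fastest
            score = lambda x: np.linalg.det(M + np.outer(f(x), f(x)))
            best = max(candidates, key=score)
        else:                     # C-optimality: shrink c^T M^{-1} c fastest
            score = lambda x: c @ np.linalg.inv(M + np.outer(f(x), f(x))) @ c
            best = min(candidates, key=score)
        M += np.outer(f(best), f(best))
        design.append(best)
    return design

# e.g. accelerated stress levels 1..3, use condition at x = 0
print(allocate(candidates=[1.0, 2.0, 3.0], n_stage1=6, n_stage2=6, x_use=0.0))
```

The switch point between the two stages is itself a design choice; the paper's comparison against single-criterion plans addresses how such a split performs.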
  3. Pantea, Casian (Ed.)
    Limited testing capacity for COVID-19 has hampered the pandemic response. Pooling is a testing method wherein samples from specimens (e.g., swabs) from multiple subjects are combined into a pool and screened with a single test. If the pool tests positive, new samples from the collected specimens are tested individually; if the pool tests negative, the subjects are classified as negative for the disease. Pooling can substantially expand COVID-19 testing capacity and throughput without requiring additional resources. We develop a mathematical model to determine the best pool size for different risk groups, based on each group's estimated COVID-19 prevalence. Our approach accounts for the sensitivity and specificity of the test and for a dynamic and uncertain prevalence, and provides a robust pool size for each group. For practical relevance, we also develop a companion COVID-19 pooling design tool (a spreadsheet). To demonstrate the potential value of pooling, we study COVID-19 screening using testing data from Iceland for the period February 28, 2020 to June 14, 2020, with subjects stratified into high- and low-risk groups. We implement the robust pooling strategy within a sequential framework that updates pool sizes each week, for each risk group, based on the prior week's testing data. Robust pooling reduces the number of tests, relative to individual testing, by 88.5% to 90.2% for the low-risk group and 54.2% to 61.9% for the high-risk group (based on test sensitivity values in the range [0.71, 0.98] reported in the literature). This yields much shorter times, on average, to obtain test results compared to individual testing (due to the higher testing throughput) and also allows expanded screening to cover more individuals. Thus, robust pooling can be a valuable strategy for COVID-19 screening.
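The arithmetic behind pool-size selection can be sketched with the classic two-stage (Dorfman) pooling model; the paper's robust formulation is richer, and the sensitivity/specificity handling below is a simple assumption of our own:

```python
# Dorfman-style pooling arithmetic. With prevalence p, test sensitivity se and
# specificity sp, a pool of size k is retested individually when the pooled
# test is positive, so the expected number of tests per subject is
#   1/k + P(pool tests positive),
# where the pool is truly positive with probability 1 - (1-p)^k. (Assuming
# the pooled test has the same se/sp as an individual test is our own
# simplification.)
def tests_per_subject(k, p, se=0.95, sp=0.99):
    p_pool_positive = se * (1 - (1 - p) ** k) + (1 - sp) * (1 - p) ** k
    return 1.0 / k + p_pool_positive

def robust_pool_size(p_low, p_high, k_max=32):
    """Pick k minimizing the worst case over an uncertain prevalence range."""
    grid = [p_low + (p_high - p_low) * i / 50 for i in range(51)]
    return min(range(2, k_max + 1),
               key=lambda k: max(tests_per_subject(k, p) for p in grid))

# e.g. a low-risk group with prevalence believed to lie in [0.1%, 1%]
k = robust_pool_size(0.001, 0.01)
print(k, tests_per_subject(k, 0.005))
```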
  4. Abstract: Two-sample tests aim to determine whether two collections of observations follow the same distribution. We propose two-sample tests based on integral probability metrics (IPMs) for high-dimensional samples supported on a low-dimensional manifold. We characterize the properties of the proposed tests with respect to the number of samples $n$ and the structure of the manifold with intrinsic dimension $d$. When an atlas is given, we propose a two-step test to identify the difference between general distributions, which achieves a type-II risk of order $n^{-1/\max\{d,2\}}$. When an atlas is not given, we propose a Hölder IPM test that applies to data distributions with $(s,\beta)$-Hölder densities and achieves a type-II risk of order $n^{-(s+\beta)/d}$. To mitigate the heavy computational burden of evaluating the Hölder IPM, we approximate the Hölder function class using neural networks. Based on the approximation theory of neural networks, we show that the neural network IPM test has a type-II risk of order $n^{-(s+\beta)/d}$, the same order as the Hölder IPM test. Our proposed tests are adaptive to low-dimensional geometric structure because their performance depends crucially on the intrinsic dimension instead of the data dimension.
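For reference, the integral probability metric underlying these tests is the standard quantity (our rendering; the paper instantiates the function class with Hölder balls and their neural network approximations):

```latex
% Integral probability metric over a function class \mathcal{F}:
d_{\mathcal{F}}(\mu, \nu)
  = \sup_{f \in \mathcal{F}}
    \bigl| \mathbb{E}_{X \sim \mu}[f(X)] - \mathbb{E}_{Y \sim \nu}[f(Y)] \bigr|
```

Taking $\mathcal{F}$ to be a Hölder ball gives the Hölder IPM test; replacing it with a neural network class preserves the type-II risk order while making the supremum tractable to approximate.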
  5. “Break-glass” is a term used in IT healthcare systems to denote emergency access to private information without the credentials to do so. In this paper, we introduce the concept of break-glass encryption for cloud storage, where the security of the ciphertexts stored on a cloud can be violated exactly once, under emergency circumstances, in a way that is detectable and does not rely on a trusted party. Detectability is the crucial property: if the cloud breaks glass without permission from the legitimate user, the latter should detect it and hold a proof of the violation. However, if the break-glass procedure is invoked by the legitimate user, semantic security must still hold and the cloud learns nothing. Distinguishing whether a break-glass is requested by the legitimate party is also challenging in the absence of shared secrets. In this paper, we provide a formalization of break-glass encryption and a secure instantiation using hardware tokens. Our construction is intended as a feasibility result and is admittedly impractical. Whether hardware tokens are necessary to achieve this security notion, and whether more practical solutions can be devised, remain interesting open questions.
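A toy sketch of the interface such a scheme must expose, not the paper's token-based construction (all names here are illustrative): the key can be released at most once, and every release leaves an auditable trace:

```python
# Toy model of the break-glass *interface* only, not the paper's construction:
# the token releases the key at most once, and every release is recorded where
# the legitimate user can audit it. All names here are illustrative.
class BreakGlassToken:
    def __init__(self, key: bytes):
        self._key = key
        self.audit_log = []      # public and append-only in the real setting

    def break_glass(self, requester: str) -> bytes:
        if self._key is None:
            raise RuntimeError("glass already broken: key released once only")
        self.audit_log.append(f"key released to {requester}")
        key, self._key = self._key, None
        return key

token = BreakGlassToken(key=b"...")
token.break_glass("legitimate user")
print(token.audit_log)           # the user detects any unauthorized release
```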