
Title: Physical Layer Authentication via Fingerprint Embedding: Min-Entropy Analysis
One of the difficulties of implementing and analyzing algorithms that achieve information-theoretic limits is adapting asymptotic results to the finite block-length regime. Results on secrecy in both regimes use Shannon entropy and mutual information as security metrics. In this paper, we show that Shannon entropy does not necessarily have equal utility for wireless authentication in the finite block-length regime, focusing on the fingerprint embedding framework. We then apply a new security performance metric to the framework that is linked to min-entropy rather than Shannon entropy and is similar to the cheating probability used in the literature. The metric is based on an adversary's ability to correctly guess the secret key over many observations using maximum-likelihood decoding. We demonstrate the effect that system parameters such as the lengths of the key and the identification tag have on an adversary's ability to attack successfully. We find that, given a large key, it is better to use it all at once than to use some bits and then renew the key with the remaining bits after a certain number of transmissions.
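The distinction between the two metrics can be made concrete with a small sketch (an illustration of the general definitions, not code or numbers from the paper). Shannon entropy measures an adversary's average uncertainty over many key draws, while min-entropy, H∞(X) = −log2 max_x P(x), measures the success probability of a single best guess — the quantity that matters for a guessing adversary:

```python
import math

def shannon_entropy(probs):
    """Average uncertainty (bits) over many independent draws."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_entropy(probs):
    """Worst-case uncertainty (bits): -log2 of the best single guess."""
    return -math.log2(max(probs))

# A skewed key distribution: one key is far more likely than the rest.
probs = [0.5] + [0.5 / 15] * 15

print(shannon_entropy(probs))  # ~2.95 bits: looks reasonably strong
print(min_entropy(probs))      # 1.0 bit: one best guess succeeds half the time
```

For a uniform key the two metrics coincide; the gap appears only for skewed distributions, which is one reason a min-entropy-linked metric can be the more conservative choice in the finite block-length regime.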
Award ID(s): 1744129, 1702555
Journal Name: Conference on Information Science and Systems
Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Mission-critical exploration of uncertain environments requires reliable and robust mechanisms for achieving information gain. Typical measures of information gain such as Shannon entropy and KL divergence are unable to distinguish between different bimodal probability distributions, or introduce bias toward one mode of a bimodal distribution. The use of a standard-deviation (SD) metric reduces bias while retaining the ability to distinguish between higher- and lower-risk distributions. Areas of high SD can be safely explored through observation with an autonomous Mars Helicopter, allowing safer and faster path plans for ground-based rovers. First, this study presents a single-agent, information-theoretic, utility-based path-planning method for a highly correlated uncertain environment. Then, an information-theoretic two-stage multiagent rapidly exploring random tree framework is presented, which guides the Mars Helicopter through regions of high SD to reduce uncertainty for the rover. In a Monte Carlo simulation, we compare our information-theoretic framework with a rover-only approach and a naive approach in which the helicopter scouts ahead of the rover along its planned path. Finally, the model is demonstrated in a case study on the Jezero region of Mars. Results show that the information-theoretic helicopter improves the travel time for the rover on average when compared with the rover alone or with the helicopter scouting ahead along the rover's initially planned route.
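The claim that entropy cannot separate bimodal beliefs, while SD can, admits a toy illustration (hypothetical terrain-height numbers, not data from the study): two distributions with identical mode probabilities have the same Shannon entropy, but very different spread between their modes.

```python
import math

def shannon_entropy(probs):
    """Entropy (bits) depends only on the probabilities, not the values."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def std_dev(values, probs):
    """SD of a discrete distribution: sensitive to where the modes sit."""
    mean = sum(v * p for v, p in zip(values, probs))
    var = sum(p * (v - mean) ** 2 for v, p in zip(values, probs))
    return math.sqrt(var)

# Two bimodal height beliefs with identical mode probabilities
# but very different spread between the modes.
wide   = ([0.0, 10.0], [0.5, 0.5])
narrow = ([4.0, 6.0],  [0.5, 0.5])

print(shannon_entropy(wide[1]), shannon_entropy(narrow[1]))  # 1.0, 1.0 bits
print(std_dev(*wide), std_dev(*narrow))                      # 5.0 vs 1.0
```

Entropy sees both beliefs as equally uncertain (1 bit each), while the SD metric flags the wide distribution as the higher-risk region worth scouting.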
  2. Many cyber attack actions can be observed, but the observables often exhibit intricate feature dependencies, non-homogeneity, and potential for rare yet critical samples. This work tests the ability to model and synthesize cyber intrusion alerts through Generative Adversarial Networks (GANs), which explore the feature space by reconciling randomly generated samples with the given data, which reflects a mixture of diverse attack behaviors. Through a comprehensive analysis using Jensen-Shannon Divergence (JSD), conditional and joint entropy, and mode drops and additions, we show that the Wasserstein GAN with Gradient Penalty and Mutual Information (WGAN-GPMI) is more effective at learning to generate realistic alerts than models without Mutual Information constraints. The added Mutual Information constraint pushes the model to explore the feature space more thoroughly and increases the generation of low-probability yet critical alert features. By mapping alerts to a set of attack stages, we show that these low-probability alerts have direct contextual meaning for cyber security analysts. Overall, our results show the promising novel use of GANs to learn from limited yet diverse intrusion alerts and generate synthetic ones that emulate critical dependencies, opening the door to data-driven network threat models.
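As a minimal sketch of one evaluation metric named above (illustrative distributions, not the paper's alert data), Jensen-Shannon Divergence compares a generated distribution with the real one; it is symmetric and, in base 2, bounded by 1 bit:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence in bits (q must be > 0 where p > 0)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon Divergence: average KL to the midpoint distribution."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Identical distributions -> 0; fully disjoint support -> 1 bit.
print(jsd([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(jsd([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

Unlike raw KL, the midpoint construction keeps JSD finite even when the generated samples miss modes entirely, which is why it is a common choice for mode-drop analyses of GAN outputs.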
  3. Abstract Aim

    Quantifying abundance distributions is critical for understanding both how communities assemble, and how community structure varies through time and space, yet estimating abundances requires considerable investment in fieldwork. Community‐level population genetic data potentially offer a powerful way to indirectly infer richness, abundance and the history of accumulation of biodiversity within a community. Here we introduce a joint model linking neutral community assembly and comparative phylogeography to generate both community‐level richness, abundance and genetic variation under a neutral model, capturing both equilibrium and non‐equilibrium dynamics.




    Our model combines a forward-time, individual-based community assembly process with a rescaled backward-time neutral coalescent model of multi-taxa population genetics. We explore general dynamics of genetic and abundance-based summary statistics and use approximate Bayesian computation (ABC) to estimate parameters underlying the model of island community assembly. Finally, we demonstrate two applications of the model using community-scale mtDNA sequence data and densely sampled abundances of an arachnid community on La Réunion. First, we use genetic data alone to estimate a summary of the abundance distribution, ground-truthing this against the observed abundances. Then, we jointly use the observed genetic data and abundances to estimate the proximity of the community to equilibrium.


    Simulation experiments of our ABC procedure demonstrate that coupling abundance with genetic data leads to improved accuracy and precision of model parameter estimates compared with using abundance-only data. We further demonstrate reasonable precision and accuracy in estimating a metric underlying the shape of the abundance distribution, temporal progress towards local equilibrium and several key parameters of the community assembly process. For the insular arachnid assemblage, we find the joint distribution of genetic diversity and abundance approaches equilibrium expectations, and that the Shannon entropy of the observed abundances can be estimated using genetic data alone.

    Main conclusions

    The framework that we present unifies neutral community assembly and comparative phylogeography to characterize the community‐level distribution of both abundance and genetic variation through time, providing a resource that should greatly enhance understanding of both the processes structuring ecological communities and the associated aggregate demographic histories.

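The ABC machinery used above can be sketched in miniature (a toy one-parameter simulator and summary statistic standing in for the authors' community assembly model): draw parameters from the prior, simulate, and keep only draws whose summary statistics land close to the observed ones.

```python
import random

def simulate(theta, n=200, rng=random):
    # Toy stand-in for a community assembly simulator: n observations
    # whose mean tracks the unknown parameter theta.
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def summary(data):
    """A single summary statistic; real ABC uses several."""
    return sum(data) / len(data)

def abc_rejection(observed, prior, n_sims=5000, tol=0.1):
    """Keep prior draws whose simulated summary lies within tol of the data."""
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_sims):
        theta = prior()
        if abs(summary(simulate(theta)) - s_obs) < tol:
            accepted.append(theta)
    return accepted

random.seed(0)
observed = simulate(2.0)  # "data" generated with true parameter 2.0
posterior = abc_rejection(observed, prior=lambda: random.uniform(0, 5))
print(sum(posterior) / len(posterior))  # posterior mean, close to 2.0
```

The accepted draws approximate the posterior without ever evaluating a likelihood — the property that makes ABC usable with simulation-only models like the joint assembly-plus-coalescent process described above.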
  4. Abstract Recent work has shown that Tor is vulnerable to attacks that manipulate inter-domain routing to compromise user privacy. Proposed solutions such as Counter-RAPTOR [29] attempt to ameliorate this issue by favoring Tor entry relays that have high resilience to these attacks. However, because these defenses bias Tor path selection on the identity of the client, they invariably leak probabilistic information about client identities. In this work, we make the following contributions. First, we identify a novel means to quantify privacy leakage in guard-selection algorithms using the metric of Max-Divergence. Max-Divergence ensures that probabilistic privacy loss is within strict bounds while also providing composability over time. Second, we use Max-Divergence and multiple notions of entropy to understand worst-case privacy loss for Counter-RAPTOR. Our worst-case analysis provides a fresh perspective to the field, as prior work such as Counter-RAPTOR analyzed only average-case privacy loss. Third, we propose modifications to Counter-RAPTOR that incorporate worst-case Max-Divergence in its design. Specifically, we use the exponential mechanism (a mechanism from differential privacy) to guarantee a worst-case bound on Max-Divergence, and hence on privacy loss. For the quality function used in the exponential mechanism, we show that a Monte Carlo sampling-based method for stochastic optimization can improve multi-dimensional trade-offs between security, privacy, and performance. Finally, we demonstrate that, compared to Counter-RAPTOR, our approach achieves an 83% decrease in Max-Divergence after one guard selection and a 245% increase in worst-case Shannon entropy after 5 guard selections. Notably, experimental evaluations using the Shadow emulator show that our approach provides these privacy benefits with minimal impact on system performance.
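The Max-Divergence metric above can be illustrated with a minimal sketch (hypothetical guard-selection probabilities, not values from the paper): D∞(P‖Q) = max over outcomes x of log P(x)/Q(x), a worst-case counterpart of KL divergence that looks at the single most distinguishing outcome rather than the average.

```python
import math

def max_divergence(p, q):
    """D_inf(P || Q) in bits: the worst-case log-ratio over outcomes,
    i.e. how strongly any single observation can distinguish P from Q."""
    return max(math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical guard-selection distributions for two clients.
client_a = [0.50, 0.30, 0.20]
client_b = [0.25, 0.40, 0.35]
print(max_divergence(client_a, client_b))  # log2(0.50 / 0.25) = 1.0 bit
```

Because it bounds the log-ratio for every outcome, Max-Divergence composes additively across repeated guard selections — the composability property the work relies on for its worst-case guarantees.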
  5. Abstract The f-invariant is an isomorphism invariant of free-group measure-preserving actions introduced by Lewis Bowen, who first used it to show that two finite-entropy Bernoulli shifts over a finitely generated free group can be isomorphic only if their base measures have the same Shannon entropy. Bowen also showed that the f-invariant is a variant of sofic entropy; in particular, it is the exponential growth rate of the expected number of good models over a uniform random homomorphism. In this paper we present an analogous formula for the relative f-invariant and use it to prove a formula for the exponential growth rate of the expected number of good models over a random sofic approximation, which is a type of stochastic block model.