Impostors are attackers who take over a smartphone and gain access to the legitimate user's confidential and private information. This paper proposes a defense-in-depth mechanism that detects impostors quickly with simple Deep Learning algorithms, achieving better detection accuracy than the best prior work, which used Machine Learning algorithms requiring the computation of multiple features. Unlike previous work, we then consider protecting the privacy of a user's behavioral (sensor) data by not exposing it outside the smartphone. For this scenario, we propose a Recurrent Neural Network (RNN) based Deep Learning algorithm that uses only the legitimate user's sensor data to learn his/her normal behavior, and we propose using the Prediction Error Distribution (PED) to enhance detection accuracy. We also show how a minimalist hardware module, dubbed SID for Smartphone Impostor Detector, can be designed and integrated into smartphones for self-contained impostor detection. Experimental results show that SID supports real-time impostor detection at very low hardware cost and energy consumption compared to other RNN accelerators.
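The PED idea can be illustrated with a minimal sketch: a predictor trained on the legitimate user's sensor stream produces small errors on that user's behavior and a noticeably different error distribution for anyone else. The sketch below is a hypothetical simplification that assumes a generic next-sample predictor, a two-sample Kolmogorov-Smirnov distance, and an arbitrary threshold; it is not SID's actual RNN model or detection pipeline.

```python
# Minimal sketch of Prediction Error Distribution (PED) based impostor detection.
# Assumptions: `predict` is any trained next-sample predictor; the KS distance
# and the 0.3 threshold are illustrative choices, not SID's design.
import numpy as np
from scipy.stats import ks_2samp

def prediction_errors(predict, window):
    """Absolute error between the predicted and observed next sensor sample."""
    preds = np.array([predict(window[:t]) for t in range(1, len(window))])
    return np.abs(preds - window[1:])

def is_impostor(predict, reference_window, test_window, threshold=0.3):
    """Flag an impostor when the test PED deviates from the legitimate-user PED."""
    ref_errors = prediction_errors(predict, reference_window)
    test_errors = prediction_errors(predict, test_window)
    stat, _ = ks_2samp(ref_errors, test_errors)   # two-sample KS distance in [0, 1]
    return stat > threshold

# Toy usage: a naive "repeat the last sample" predictor stands in for the RNN.
rng = np.random.default_rng(0)
last_value_predictor = lambda history: history[-1]
legit = np.cumsum(rng.normal(0, 0.1, 200))     # smooth, user-like signal
intruder = rng.normal(0, 1.0, 200)             # erratic, unfamiliar signal
print(is_impostor(last_value_predictor, legit, legit[::-1]))   # likely False
print(is_impostor(last_value_predictor, legit, intruder))      # likely True
```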
Measures of Information Leakage for Incomplete Statistical Information: Application to a Binary Privacy Mechanism
Information leakage is usually defined as the logarithmic increment in the adversary's probability of correctly guessing the legitimate user's private data, or some arbitrary function of the private data, when presented with the legitimate user's publicly disclosed information. However, this definition of information leakage implicitly assumes that both the privacy mechanism and the prior probability of the original data are entirely known to the attacker. In reality, the assumption that an attacker has complete knowledge of the privacy mechanism is often impractical. The attacker usually has access only to an approximate version of the correct privacy mechanism, computed from a limited set of the disclosed data for which they can access the corresponding undistorted data. In this scenario, the conventional definition of leakage no longer has an operational meaning. To address this problem, in this article we propose novel, meaningful information-theoretic metrics for information leakage when the attacker has incomplete information about the privacy mechanism; we call them average subjective leakage, average confidence boost, and average objective leakage, respectively. For the simplest, binary scenario, we demonstrate how to find an optimized privacy mechanism that minimizes the worst-case value of any of these leakages.
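For concreteness, the conventional baseline that this definition describes, and that the article generalizes, can be evaluated directly for a binary mechanism. The sketch below uses toy numbers and the standard guessing-probability (min-entropy style) formulation under full knowledge of the prior and the mechanism; the article's average subjective/objective leakage and confidence boost metrics are not reproduced here.

```python
# Sketch of the conventional leakage baseline: the logarithmic gain in an
# adversary's probability of correctly guessing binary private data X after
# seeing the disclosed output Y, assuming the prior and the mechanism P(Y|X)
# are fully known (the assumption the article relaxes). Toy values only.
import numpy as np

prior = np.array([0.7, 0.3])                 # P(X = 0), P(X = 1)
mechanism = np.array([[0.8, 0.2],            # P(Y | X = 0)
                      [0.3, 0.7]])           # P(Y | X = 1)

joint = prior[:, None] * mechanism             # P(X, Y)
p_correct_prior = prior.max()                  # guess the most likely X a priori
p_correct_posterior = joint.max(axis=0).sum()  # best guess of X for each observed Y
leakage_bits = np.log2(p_correct_posterior / p_correct_prior)
print(f"Leakage under full knowledge of the mechanism: {leakage_bits:.3f} bits")
```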
- Award ID(s): 2030249
- PAR ID: 10488083
- Publisher / Repository: Association for Computing Machinery
- Date Published:
- Journal Name: ACM Transactions on Privacy and Security
- Volume: 26
- Issue: 4
- ISSN: 2471-2566
- Page Range / eLocation ID: 1 to 31
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Maximal α-leakage is a tunable measure of information leakage based on the quality of an adversary's belief about an arbitrary function of private data given the public data. The parameter α determines the loss function used to measure the quality of a belief, ranging from log-loss at α = 1 to the probability of error at α = ∞. We review its definition and main properties, including extensions to α < 1, robustness to side information, and the relationship to Rényi differential privacy.
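The two endpoints of the α range can be made concrete for a small channel: the log-loss end corresponds to Shannon mutual information, and the probability-of-error end to maximal leakage, log Σ_y max_x P(y|x). The sketch below evaluates only these endpoint quantities on arbitrary toy numbers; the general α formula is not reproduced.

```python
# Toy illustration of the two endpoints of the alpha range for a fixed prior P_X
# and channel P(Y|X): Shannon mutual information (the log-loss end, alpha = 1)
# and maximal leakage log2 sum_y max_x P(y|x) (the probability-of-error end,
# alpha = infinity). Channel values are arbitrary toy numbers.
import numpy as np

p_x = np.array([0.5, 0.5])
p_y_given_x = np.array([[0.9, 0.1],
                        [0.4, 0.6]])

joint = p_x[:, None] * p_y_given_x
p_y = joint.sum(axis=0)
mutual_info = np.sum(joint * np.log2(joint / (p_x[:, None] * p_y[None, :])))
maximal_leakage = np.log2(p_y_given_x.max(axis=0).sum())
print(f"alpha = 1 endpoint (mutual information): {mutual_info:.3f} bits")
print(f"alpha = inf endpoint (maximal leakage):  {maximal_leakage:.3f} bits")
```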
We study the problem of weakly private information retrieval (PIR) when there is heterogeneity in servers’ trustfulness under the maximal leakage (Max-L) metric. A user wishes to retrieve a desired message from N non-colluding servers efficiently, such that the identity of the desired message is not leaked in a significant manner; however, some servers can be more trustworthy than others. We propose a code construction for this setting and optimize the probability distribution for this construction. It is shown that the optimal probability allocation for the proposed scheme essentially separates the delivery patterns into two parts: a completely private part that has the same download overhead as the capacity-achieving PIR code, and a non-private part that allows complete privacy leakage but has no download overhead by downloading only from the most trustful server. The optimal solution is established through a sophisticated analysis of the underlying convex optimization problem, and a reduction between the homogeneous setting and the heterogeneous setting.
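A toy model in the spirit of the two-part structure described above, and not the paper's optimized construction, helps make the Max-L metric concrete: with probability p the query distribution is independent of the desired message index (completely private), and with probability 1 - p a revealing pattern is used. Under Max-L, such a mixture leaks log(p + K(1 - p)); the query model below is an illustrative assumption.

```python
# Toy mixture model, not the paper's construction: with probability p the query
# distribution is independent of the desired index m (completely private), and
# with probability 1 - p a revealing query unique to m is sent. Under the
# maximal leakage metric, log2 sum_q max_m P(q | m), this leaks
# log2(p + K * (1 - p)) bits about which of the K messages was requested.
import numpy as np

def max_leakage_bits(p, num_messages):
    # Private patterns contribute p to the sum (same distribution for every m);
    # each of the K revealing patterns contributes 1 - p.
    return np.log2(p + num_messages * (1.0 - p))

K = 4
for p in (1.0, 0.75, 0.25, 0.0):
    print(f"private fraction p = {p:.2f}: Max-L = {max_leakage_bits(p, K):.3f} bits")
```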
Private information retrieval (PIR) allows users to retrieve data from databases without revealing the identity of that data. An extensive body of work has investigated efficient schemes to achieve computational and information-theoretic privacy. The latter guarantees that no information is revealed to the databases, irrespective of their computational power. Although information-theoretic PIR (IT-PIR) provides a strong privacy guarantee, it can be too taxing for certain applications. In this paper, we initiate the study of leaky private information retrieval (L-PIR), where a bounded amount of privacy leakage is allowed and measured through a parameter ε. The classical IT-PIR formulation is obtained by setting ε = 0, and for ε > 0, we explore the opportunities offered for reducing the download cost. We derive new upper and lower bounds on the download cost of L-PIR for any arbitrary ε, any number of messages K, and for N = 2 databases.
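As a point of reference for ε = 0, the classical IT-PIR baseline with N = 2 databases has a download cost of 1 + 1/2 + ... + 1/2^(K-1) per message symbol, the reciprocal of the well-known PIR capacity. The sketch below only evaluates that ε = 0 baseline; the paper's bounds for ε > 0 are not reproduced.

```python
# The epsilon = 0 point of reference: the classical IT-PIR download cost for
# N databases and K messages, i.e. the reciprocal of the PIR capacity
# (1 + 1/N + ... + 1/N^(K-1)). L-PIR with epsilon > 0 can only do better;
# the paper's actual upper/lower bounds for epsilon > 0 are not reproduced here.
def it_pir_download_cost(N: int, K: int) -> float:
    return sum(N ** (-i) for i in range(K))

for K in (2, 3, 5, 10):
    cost = it_pir_download_cost(2, K)
    print(f"K = {K:2d} messages: download cost = {cost:.4f} per message symbol (epsilon = 0)")
```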
This paper studies the problem of information design in a general security game setting in which multiple self-interested defenders attempt to provide protection simultaneously for the same set of important targets against an unknown attacker. A principal, who can be one of the defenders, has access to certain private information (i.e., the attacker type), whereas the other defenders do not. We investigate how that principal, with additional private information, can influence the decisions of the defenders by partially and strategically revealing her information. In particular, we develop a polynomial-time ellipsoid algorithm to compute an optimal private signaling scheme. Our key finding is that the separation oracle in the ellipsoid approach can be carefully reduced to bipartite matching. Furthermore, we introduce a compact representation of ex ante persuasive signaling schemes by exploiting intrinsic security resource allocation structures, enabling us to compute an optimal scheme significantly faster. Our experimental results show that, by strategically revealing private information, the principal can significantly enhance the protection effectiveness for the targets.
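The combinatorial primitive referenced above, maximum-weight bipartite matching, is readily available off the shelf. The sketch below solves a small matching instance with SciPy's assignment solver on arbitrary hypothetical weights; it does not reproduce the paper's reduction from the separation oracle or the full signaling computation.

```python
# Maximum-weight bipartite matching, the core primitive the separation oracle
# is reduced to, solved here with SciPy's assignment solver. The weights are
# arbitrary hypothetical values (e.g., defender resource i covering target j);
# the paper's actual reduction and ellipsoid loop are not reproduced.
import numpy as np
from scipy.optimize import linear_sum_assignment

weights = np.array([[4.0, 1.0, 3.0],
                    [2.0, 0.0, 5.0],
                    [3.0, 2.0, 2.0]])

rows, cols = linear_sum_assignment(weights, maximize=True)
print("matching:", list(zip(rows.tolist(), cols.tolist())))
print("total weight:", weights[rows, cols].sum())
```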