

Search for: All records

Award ID contains: 1657774

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Concern about users' data privacy has risen to its highest level due to the massive increase in communication platforms, social networking sites, and user participation in online public discourse. An increasing number of people exchange private information via emails, text messages, and social media without being aware of the risks and implications. Because a substantial amount of data is exchanged in textual form, researchers in Natural Language Processing (NLP) have concentrated on creating tools and strategies to identify, categorize, and sanitize private information in text data. However, most detection methods rely solely on the presence of pre-identified keywords in the text and disregard the underlying meaning of an utterance in its specific context. Hence, in some situations these tools and algorithms fail to detect a disclosure, or the results they produce are misclassified. In this paper, we propose a multi-input, multi-output hybrid neural network that utilizes transfer learning, linguistic features, and metadata to learn hidden patterns. Our goal is to better classify disclosure and non-disclosure content with respect to the situational context. We trained and evaluated our model on a human-annotated ground-truth dataset containing a total of 5,400 tweets. The results show that the proposed model identifies privacy disclosure in tweets with an accuracy of 77.4% while classifying the information type of those tweets with an accuracy of 99%, by jointly learning the two tasks.
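The abstract describes the architecture only at a high level. Purely as an illustration, here is a minimal sketch of a multi-input, multi-output model in Keras: one input carries token IDs through a frozen embedding (standing in for the transferred pretrained weights), a second carries tweet metadata, and two jointly trained heads handle disclosure detection and information-type classification. All layer choices, dimensions, and the 10-way type head are assumptions, not the authors' implementation.

```python
from tensorflow.keras import layers, Model

# Illustrative sizes only; none of these come from the paper.
VOCAB_SIZE, MAX_LEN, META_DIM, NUM_TYPES = 20000, 50, 8, 10

text_in = layers.Input(shape=(MAX_LEN,), name="token_ids")
meta_in = layers.Input(shape=(META_DIM,), name="metadata")

# Frozen embedding stands in for transferred pretrained weights.
x = layers.Embedding(VOCAB_SIZE, 100, trainable=False)(text_in)
x = layers.Bidirectional(layers.LSTM(64))(x)

# Fuse the text representation with the tweet's metadata features.
fused = layers.concatenate([x, meta_in])
fused = layers.Dense(64, activation="relu")(fused)

# Two heads learned jointly: binary disclosure and information type.
disclosure = layers.Dense(1, activation="sigmoid", name="disclosure")(fused)
info_type = layers.Dense(NUM_TYPES, activation="softmax", name="info_type")(fused)

model = Model(inputs=[text_in, meta_in], outputs=[disclosure, info_type])
model.compile(
    optimizer="adam",
    loss={"disclosure": "binary_crossentropy",
          "info_type": "sparse_categorical_crossentropy"},
    metrics={"disclosure": ["accuracy"], "info_type": ["accuracy"]},
)
```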
  2. In order to create user-centric and personalized privacy management tools, the underlying models must account for individual users' privacy expectations and preferences, and for their ability to control their information sharing activities. Existing studies of users' privacy behavior modeling attempt to frame the problem from a request's perspective, an approach that lacks the crucial involvement of the information owner and results in limited or no control over policy management. Moreover, very few of them take into consideration the correctness, explainability, usability, and acceptance of these methodologies for each user of the system. In this paper, we present a methodology to formally model, validate, and verify personalized privacy disclosure behavior based on an analysis of the user's situational decision-making process. We use the model checking tool UPPAAL to represent users' self-reported privacy disclosure behavior as an extended form of finite state automata (FSA), and perform reachability analysis to verify privacy properties through computation tree logic (CTL) formulas. We also describe practical use cases of the methodology that illustrate the potential of formal techniques for the design and development of user-centric behavioral modeling. Through extensive experimental results, this paper contributes several insights to the areas of formal methods and user-tailored privacy behavior modeling.
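The verification itself is done in UPPAAL with CTL queries; as a tool-free illustration of the underlying idea, the sketch below encodes a hypothetical disclosure-behavior automaton as a plain Python transition table and answers a reachability question (the analogue of an E<> query) by breadth-first search. The states and actions are invented for the example, not taken from the paper's model.

```python
from collections import deque

# Hypothetical automaton: transitions[state][action] -> next state.
transitions = {
    "idle":      {"compose": "drafting"},
    "drafting":  {"add_private_info": "sensitive", "send": "posted_safe"},
    "sensitive": {"sanitize": "drafting", "send": "posted_unsafe"},
}

def reachable(start: str, target: str) -> bool:
    """Breadth-first search standing in for CTL reachability (E<> target)."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == target:
            return True
        for nxt in transitions.get(state, {}).values():
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# The safety property fails if the unsafe state is reachable.
print(reachable("idle", "posted_unsafe"))  # True: a policy must intervene
```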
  3. To account for privacy perceptions and preferences in user models and develop personalized privacy systems, we need to understand how users make privacy decisions in various contexts. Existing studies of privacy perceptions and behavior focus on overall tendencies toward privacy, but few have examined the context-specific factors in privacy decision making. We conducted a survey on Mechanical Turk (N=401) based on the theory of planned behavior (TPB) to measure how users' perceptions of privacy factors and intent to disclose information are affected by three situational factors embodied in hypothetical scenarios: information type, recipient's role, and trust source. Results showed a positive relationship between subjective norms and perceived behavioral control, and between each of these and situational privacy attitude; all three constructs are significantly positively associated with intent to disclose. These findings also suggest that situational factors predict participants' privacy decisions through their influence on the TPB constructs.
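One plausible way to test the reported associations, sketched below under assumed column names and a hypothetical survey export, is an ordinary least squares regression of disclosure intent on the three TPB constructs; this is not the authors' analysis script.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tpb_survey.csv")  # hypothetical export of the survey responses
model = smf.ols(
    "intent_to_disclose ~ subjective_norms + perceived_control + situational_attitude",
    data=df,
).fit()
# Positive, significant coefficients would match the reported findings.
print(model.summary())
```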
  4. Data and information privacy is a major concern in today's world. More specifically, users' digital privacy has become one of the most important issues to address as information sharing technology advances. An increasing number of users share information through text messages, emails, and social media without proper awareness of privacy threats and their consequences. One approach to preventing the disclosure of private information is to identify it in a conversation and warn the sender before the message is conveyed to the receiver. Another way of preventing the loss of sensitive information is to analyze and sanitize batches of offline documents where the data has already accumulated. However, automating the identification of user-centric privacy disclosure in textual data is challenging, because natural language has an extremely rich form and structure with many levels of ambiguity. We therefore investigate a framework that could bring this challenge within reach by precisely recognizing users' privacy disclosures in a piece of text, taking into account the authorship and sentiment (tone) of the content alongside linguistic features and techniques. The proposed framework serves as a supporting plugin to help text classification systems more accurately identify text that might disclose the author's personal or private information.
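As a rough sketch of the feature fusion the framework calls for, the snippet below derives linguistic features (part-of-speech tags, syntactic dependencies, named entities) with spaCy and a sentiment (tone) score with NLTK's VADER; authorship features are omitted, and the flat feature-dictionary layout is an assumption rather than the paper's design.

```python
# Hypothetical feature extractor combining linguistic and sentiment signals.
# Requires: python -m spacy download en_core_web_sm
#           nltk.download("vader_lexicon")
import spacy
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nlp = spacy.load("en_core_web_sm")
sia = SentimentIntensityAnalyzer()

def extract_features(text: str) -> dict:
    """Linguistic features from spaCy plus a compound tone score from VADER."""
    doc = nlp(text)
    return {
        "pos_tags": [t.pos_ for t in doc],
        "dependencies": [t.dep_ for t in doc],
        "entities": [(e.text, e.label_) for e in doc.ents],
        "sentiment": sia.polarity_scores(text)["compound"],
    }

print(extract_features("I just got my test results back from Dr. Smith."))
```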
  5. Internet usage continues to increase among children ages 12 and younger. Because their digital interactions can be persistently stored, there is a need to build their foundational knowledge and understanding of privacy. We describe initial investigations into children's understanding of privacy from a Contextual Integrity (CI) perspective, conducted through semi-structured interviews. We share results, which echo what others have shown, indicating that children have limited knowledge and understanding of CI principles. We also share an initial exploration of participatory design theater as a possible educational mechanism to help children develop a stronger understanding of important privacy principles.
  6. An increasing number of people share information through text messages, emails, and social media without proper privacy checks. In many situations, this can lead to serious privacy threats. This paper presents a methodology for providing extra safety precautions without being intrusive to users. We have developed and evaluated a model that helps users take control of their shared information by automatically identifying text (i.e., a sentence or a transcribed utterance) that might contain personal or private disclosures. We apply off-the-shelf natural language processing tools to derive linguistic features such as part-of-speech tags, syntactic dependencies, and entity relations. From these features, we model and train a multichannel convolutional neural network as a classifier to identify short texts that contain personal or private disclosures. We show how our model can notify users when a piece of text discloses personal or private information, and we evaluate our approach on a binary classification task, reaching 93% accuracy on our own labeled dataset and 86% on a ground-truth dataset. Unlike document classification tasks in natural language processing, our framework is developed with sentence-level context in mind.
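The abstract names a multichannel convolutional classifier; the sketch below shows one common realization in Keras, with parallel Conv1D channels of different n-gram window sizes over a shared embedding, in the spirit of Kim-style sentence CNNs. Vocabulary size, kernel widths, and other hyperparameters are illustrative assumptions, not the paper's settings.

```python
from tensorflow.keras import layers, Model

VOCAB_SIZE, MAX_LEN, EMB_DIM = 20000, 40, 100  # assumed sizes

inp = layers.Input(shape=(MAX_LEN,))
emb = layers.Embedding(VOCAB_SIZE, EMB_DIM)(inp)

# One channel per n-gram window size; all channels read the same embeddings.
channels = []
for kernel_size in (2, 3, 4):
    c = layers.Conv1D(64, kernel_size, activation="relu")(emb)
    c = layers.GlobalMaxPooling1D()(c)
    channels.append(c)

merged = layers.concatenate(channels)
merged = layers.Dropout(0.5)(merged)
out = layers.Dense(1, activation="sigmoid")(merged)  # disclosure vs. non-disclosure

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```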
  7. As our society has become more information oriented, each individual is expressed, defined, and impacted by information and information technology. While valuable, the current state of the art is mostly designed to protect enterprise/organizational privacy requirements, leaving the main actor, i.e., the user, uninvolved or with limited ability to control his/her information sharing practices. To overcome these limitations, algorithms and tools that provide user-centric privacy management to individuals with different privacy concerns must take into consideration the dynamic nature of privacy policies, which constantly change based on the information sharing context and environmental variables. This paper extends the concept of contextual integrity to provide mathematical models and algorithms that enable the creation and management of privacy norms for individual users. The extension augments privacy norms with environmental variables, e.g., time and date, while introducing an abstraction and a partial relation over information attributes. Further, a formal verification technique is proposed to ensure that privacy norms are enforced for each information sharing action.
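To make the environment-augmented norms concrete, here is a toy sketch, not the paper's formal model: a norm that carries an environmental variable (a permitted time window) alongside sender, recipient, and information attribute, plus a check of a sharing action against it. All field names and the example norm are invented.

```python
from dataclasses import dataclass

@dataclass
class Norm:
    sender: str
    recipient: str
    attribute: str        # e.g., "location"
    allowed_hours: range  # environmental variable: permitted time window

def permits(norm: Norm, sender: str, recipient: str,
            attribute: str, hour: int) -> bool:
    """Check one sharing action against one norm, environment included."""
    return (norm.sender == sender and norm.recipient == recipient
            and norm.attribute == attribute and hour in norm.allowed_hours)

norm = Norm("alice", "fitness_app", "location", range(8, 20))
print(permits(norm, "alice", "fitness_app", "location", 22))  # False: outside hours
```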
  8. Personalized systems increasingly employ Privacy Enhancing Technologies (PETs) to protect the identity of their users. In this paper, we are interested in whether the cost-benefit tradeoff — the underlying economics of the privacy calculus — is fairly distributed, or whether some groups of people experience a lower return on investment for their privacy decisions. 
  9. In this position paper, we argue for applying recent research on ensuring that sociotechnical systems are fair and non-discriminatory to the privacy protections those systems may provide. The privacy literature seldom considers whether a proposed privacy scheme protects all persons uniformly, irrespective of membership in protected classes or particular risk in the face of privacy failure. Just as algorithmic decision-making systems may have discriminatory outcomes even without explicit or deliberate discrimination, so too may privacy regimes disproportionately fail to protect vulnerable members of their target population, resulting in disparate impact with respect to the effectiveness of privacy protections. We propose a research agenda that will illuminate this issue, along with related issues at the intersection of fairness and privacy, and present case studies that show how the outcomes of this research may change existing privacy and fairness research. We believe it is important to ensure that technologies and policies intended to protect the users and subjects of information systems provide such protection in an equitable fashion.
  10. With the growth of the Internet in many different aspects of life, users are required to share private information more than ever. Users therefore need a privacy management tool that can enforce complex and customized privacy policies. In this paper, we propose a privacy management system that not only allows users to define complex privacy policies for data sharing actions, but also monitors users' behavior and relationships to generate realistic policies. In addition, the proposed system utilizes a formal modeling and model-checking approach to prove that information disclosures are valid and that privacy policies are consistent with one another.
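As a rough illustration of what checking that privacy policies are "consistent with one another" can mean, the toy sketch below flags pairs of hypothetical policies that assign opposite effects to the same recipient and data type; the paper's model-checking approach would establish such properties exhaustively rather than by a pairwise scan.

```python
from itertools import combinations

# Invented example policies; "effect" is either "allow" or "deny".
policies = [
    {"recipient": "advertiser", "data": "location", "effect": "deny"},
    {"recipient": "friends",    "data": "photos",   "effect": "allow"},
    {"recipient": "advertiser", "data": "location", "effect": "allow"},
]

def conflicts(ps):
    """Pairs of policies that give opposite effects to the same action."""
    return [
        (a, b) for a, b in combinations(ps, 2)
        if a["recipient"] == b["recipient"]
        and a["data"] == b["data"]
        and a["effect"] != b["effect"]
    ]

for a, b in conflicts(policies):
    print("inconsistent:", a, "vs", b)
```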