

Title: Learning and Preserving Relationship Privacy in Photo Sharing
In recent years, Online Social Networks (OSNs) have become popular content-sharing environments. With the emergence of smartphones with high-quality cameras, people like to share photos of their life moments on OSNs. These photos, however, often contain private information that people do not intend to share with others (e.g., sensitive relationships). Relying solely on OSN users to manually process photos to protect their relationships can be tedious and error-prone. We therefore designed a system that automatically discovers sensitive relations in a photo about to be shared online and preserves those relations using face-blocking techniques. We first used a Decision Tree model to learn sensitive relations from photos labeled private or public by OSN users. We then defined a face blocking problem and developed a linear programming model to optimize the tradeoff between preserving relationship privacy and maintaining photo utility. In this paper, we generated synthetic data and used it to evaluate the system's performance in terms of privacy protection and photo utility loss.
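
The abstract above describes a two-stage pipeline: a Decision Tree model learns which relations are sensitive from photos labeled private or public, and a linear program then chooses which faces to block to balance relationship-privacy protection against photo-utility loss. The following is a minimal sketch of that idea, not the authors' implementation; the relation features, the protection constraint, the utility-loss weights, and the tradeoff parameter are illustrative assumptions, and the blocking decision is relaxed to a continuous value in [0, 1].

    # Illustrative sketch only (assumptions noted above).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from scipy.optimize import linprog

    # --- Stage 1: learn sensitive relations from labeled photos ---
    # Each row represents one pairwise relation (hypothetical features:
    # co-occurrence count, relative face distance, tag overlap); label 1 means
    # the owner marked the containing photo private.
    X_train = np.array([[5, 0.2, 3], [1, 0.9, 0], [4, 0.3, 2], [0, 0.8, 1]])
    y_train = np.array([1, 0, 1, 0])
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X_train, y_train)

    # Sensitivity score s_r for each relation found in the photo to be shared.
    relations = np.array([[6, 0.1, 4], [2, 0.7, 1], [3, 0.4, 2]])
    sensitivity = clf.predict_proba(relations)[:, 1]

    # --- Stage 2: face blocking as a (relaxed) linear program ---
    # Variable x_f in [0, 1]: how strongly face f is blocked; u_f: utility lost
    # by blocking face f. A relation counts as protected (p_r) only to the
    # extent that the faces involved in it are blocked.
    utility_loss = np.array([0.3, 0.5, 0.2, 0.4])
    faces_per_relation = [(0, 1), (1, 2), (2, 3)]   # face indices per relation
    n_faces, n_rel = len(utility_loss), len(relations)
    lam = 1.0                                       # privacy/utility tradeoff weight

    # Variables z = [x_0..x_{F-1}, p_0..p_{R-1}]; minimize lam*u.x - s.p.
    c = np.concatenate([lam * utility_loss, -sensitivity])

    # Constraint p_r <= x_i + x_j for each relation r over faces (i, j).
    A_ub = np.zeros((n_rel, n_faces + n_rel))
    for r, (i, j) in enumerate(faces_per_relation):
        A_ub[r, i] = -1.0
        A_ub[r, j] = -1.0
        A_ub[r, n_faces + r] = 1.0
    b_ub = np.zeros(n_rel)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * (n_faces + n_rel))
    print("face block extents:", np.round(res.x[:n_faces], 2))
    print("relation protection levels:", np.round(res.x[n_faces:], 2))

In this toy formulation, relations with higher predicted sensitivity pull their faces toward blocking, while faces with higher utility loss resist it; the paper's actual objective and constraints may differ.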
Award ID(s):
1712496
NSF-PAR ID:
10438705
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
IEEE/ACM International Conference on Big Data Computing, Applications and Technologies
Page Range / eLocation ID:
170 to 173
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. ‘Interdependent’ privacy violations occur when users share private photos and information about other people in social media without permission. This research investigated user characteristics associated with interdependent privacy perceptions, by asking social media users to rate photo-based memes depicting strangers on the degree to which they were too private to share. Users also completed questionnaires measuring social media usage and personality. Separate groups rated the memes on shareability, valence, and entertainment value. Users were less likely to share memes that were rated as private, except when the meme was entertaining or when users exhibited dark triad characteristics. Users with dark triad characteristics demonstrated a heightened awareness of interdependent privacy and increased sharing of others’ photos. A model is introduced that highlights user types and characteristics that correspond to different privacy preferences: privacy preservers, ignorers, and violators. We discuss how interventions to support interdependent privacy must effectively influence diverse users. 
  2. With the rising popularity of photo sharing in online social media, interpersonal privacy violations, where one person violates the privacy of another, have become an increasing concern. Although applying image obfuscations can be a useful tool for improving privacy when sharing photos, prior studies have found these obfuscation techniques adversely affect viewers' satisfaction. On the other hand, ephemeral photos, popularized by apps such as Snapchat, allow viewers to see the entire photo, which then disappears shortly thereafter to protect privacy. However, people often use workarounds to save these photos before deletion. In this work, we study people's sharing preferences with two proposed 'temporal redactions', which combine ephemerality with redaction to let viewers see the entire image yet make it safe for longer storage through a gradual or delayed application of redaction to the sensitive portions of the photo. We conducted an online experiment (N=385) to study people's sharing behaviors in different contexts and under different levels of assurance provided by the viewer's platform (e.g., guaranteeing temporal redactions are applied through the use of 'trusted hardware'). Our findings suggest that the proposed temporal redaction mechanisms are often preferred over existing methods. On the other hand, more effort is needed to convey the benefits of trusted hardware to users, as no significant differences were observed in attitudes towards 'trusted hardware' on viewers' devices. (A brief illustrative sketch of gradual redaction appears after this list.)
  3. We investigate the effects of perspective taking, privacy cues, and portrayal of photo subjects (i.e., photo valence) on decisions to share photos of people via social media. In an online experiment we queried 379 participants about 98 photos (that were previously rated for photo valence) in three conditions: (1) Baseline: participants judged their likelihood of sharing each photo; (2) Perspective-taking: participants judged their likelihood of sharing each photo when cued to imagine they are the person in the photo; and (3) Privacy: participants judged their likelihood to share after being cued to consider the privacy of the person in the photo. While participants across conditions indicated a lower likelihood of sharing photos that portrayed people negatively, they – surprisingly – reported a higher likelihood of sharing photos when primed to consider the privacy of the person in the photo. Frequent photo sharers on real-world social media platforms and people without strong personal privacy preferences were especially likely to want to share photos in the experiment, regardless of how the photo portrayed the subject. A follow-up study with 100 participants explaining their responses revealed that the Privacy condition led to a lack of concern with others’ privacy. These findings suggest that developing interventions for reducing photo sharing and protecting the privacy of others is a multivariate problem in which seemingly obvious solutions can sometimes go awry. 
  4. Interdependent privacy (IDP) violations among users occur at a massive scale on social media, as users share or re-share potentially sensitive photos and information about other people without permission. Given that IDP represents a collective moral concern, an ethics of care (or “care ethics”) can inform interventions to promote online privacy. Applied to cyber security and privacy, ethics of care theory puts human relationships at the center of moral problems, where caring-about supports conditions of caring-for and, in turn, protects interpersonal relationships. This position paper explores design implications of an ethics of care framework in the context of IDP preservation. First, we argue that care ethics highlights the need for a network of informed stakeholders involved in content moderation strategies that align with public values. Second, an ethics of care framework calls for psychosocial interventions at the user-level aimed toward promoting more responsible IDP decision-making among the general public. In conclusion, ethics of care has potential to provide coherence in understanding the people involved in IDP, the nature of IDP issues, and potential solutions, in turn, motivating new directions in IDP research. 
  5. Today, face editing is widely used to refine/alter photos in both professional and recreational settings. Yet it is also used to modify (and repost) existing online photos for cyberbullying. Our work considers an important open question: 'How can we support the collaborative use of face editing on social platforms while protecting against unacceptable edits and reposts by others?' This is challenging because, as our user study shows, users vary widely in their definition of what edits are (un)acceptable. Any global filter policy deployed by social platforms is unlikely to address the needs of all users, while also hindering the social interactions enabled by photo editing. Instead, we argue that face edit protection policies should be implemented by social platforms based on individual user preferences. When posting an original photo online, a user can choose to specify the types of face edits (dis)allowed on the photo. Social platforms use these per-photo edit policies to moderate future photo uploads, i.e., edited photos containing modifications that violate the original photo's policy are either blocked or shelved for user approval. Realizing this personalized protection, however, faces two immediate challenges: (1) how to accurately recognize the specific modifications, if any, contained in a photo; and (2) how to associate an edited photo with its original photo (and thus the edit policy). We show that these challenges can be addressed by combining highly efficient hashing-based image search with scalable semantic image comparison, and we build a prototype protector (Alethia) covering nine edit types. Evaluations using IRB-approved user studies and data-driven experiments (on 839K face photos) show that Alethia accurately recognizes edited photos that violate user policies and induces a feeling of protection in study participants. This demonstrates the initial feasibility of personalized face edit protection. We also discuss current limitations and future directions to push the concept forward.
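
The Alethia prototype described above must associate an uploaded (possibly edited) photo with its registered original so that the original's per-photo edit policy can be enforced. The sketch below illustrates only the hashing-based lookup half of that idea, using the open-source imagehash library as a stand-in for the paper's efficient image search; the hash choice, distance threshold, and policy-index structure are assumptions, and the semantic-comparison fallback is left as a stub.

    # Illustrative sketch only, not Alethia's actual pipeline.
    from PIL import Image
    import imagehash

    policy_index = {}   # perceptual hash of an original photo -> its edit policy

    def register_original(path, policy):
        """Record the uploader's per-photo edit policy, keyed by a perceptual hash."""
        policy_index[imagehash.average_hash(Image.open(path))] = policy

    def find_policy(uploaded_path, max_distance=8):
        """Return the edit policy of the closest registered original, if one is near enough."""
        h = imagehash.average_hash(Image.open(uploaded_path))
        best = min(policy_index, key=lambda k: h - k, default=None)  # Hamming distance
        if best is not None and (h - best) <= max_distance:
            return policy_index[best]
        return None   # no near-duplicate found; a real system would fall back to semantic comparison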

     
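As a companion to item 2 above, the sketch below illustrates what a 'gradual' temporal redaction could look like: the sensitive region is blurred more strongly the longer the photo has existed. It is a hypothetical illustration using Pillow, not the mechanism evaluated in that study; the blur schedule, the 24-hour window, and the function name are assumptions.

    # Hypothetical illustration of gradual temporal redaction (assumptions noted above).
    from PIL import Image, ImageFilter

    def gradually_redact(photo, box, hours_since_post, full_redaction_after=24.0):
        """Blur the sensitive region `box` (left, upper, right, lower) more strongly over time."""
        progress = min(hours_since_post / full_redaction_after, 1.0)
        radius = 25 * progress   # no blur at posting time, full blur after the window
        region = photo.crop(box).filter(ImageFilter.GaussianBlur(radius))
        out = photo.copy()
        out.paste(region, box)
        return out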