Title: Face-Off: Adversarial Face Obfuscation
Award ID(s):
2003129 1838733
NSF-PAR ID:
10216805
Journal Name:
Proceedings on Privacy Enhancing Technologies
ISSN:
2299-0984
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract  
  2. Abstract

Although still‐face effects are well‐studied, little is known about the degree to which the Face‐to‐Face/Still‐Face (FFSF) is associated with the production of intense affective displays. Duchenne smiling expresses more intense positive affect than non‐Duchenne smiling, while Duchenne cry‐faces express more intense negative affect than non‐Duchenne cry‐faces. Forty 4‐month‐old infants and their mothers completed the FFSF, and key affect‐indexing facial Action Units (AUs) were coded by expert Facial Action Coding System coders for the first 30 s of each FFSF episode. Computer vision software, automated facial affect recognition (AFAR), identified AUs for the entire 2‐min episodes. Expert coding and AFAR produced similar infant and mother Duchenne and non‐Duchenne FFSF effects, highlighting the convergent validity of automated measurement. Substantive AFAR analyses indicated that both infant Duchenne and non‐Duchenne smiling declined from the FF to the SF, but only Duchenne smiling increased from the SF to the RE. In similar fashion, the magnitudes of mother Duchenne smiling changes over the FFSF were 2–4 times greater than those of non‐Duchenne smiling changes. Duchenne expressions appear to be a sensitive index of intense infant and mother affective valence that is accessible to automated measurement and may be a target for future FFSF research.

  3. In this Lessons Learned paper, we explore the themes uncovered from a series of facilitated faculty discussions on moving courses back to face-to-face teaching after the switch to online instruction. The Institute at Anonymous University administers over 100 faculty whose primary department appointments and teaching assignments are in either engineering or education. Over the last two years, the Institute hosted numerous conversations for faculty members to share experiences, research, and assessments of teaching successes and concerns as they changed instructional modalities, both with the initial move online and the subsequent move back to face-to-face teaching. From these conversations, faculty agree that some practices adopted during the move to online instruction, such as office hours, video archives of lectures, and some break-out-room activities, appear to enhance student learning. Yet data showed that students found the online experience less desirable than face-to-face courses. Now that we have had a nearly complete semester in which most classes were required to be held face-to-face, we are hosting conversations with faculty to understand the changes they are now making to their teaching as a result of their experiences with online instruction.
  4. Today, face editing is widely used to refine or alter photos in both professional and recreational settings. Yet it is also used to modify (and repost) existing online photos for cyberbullying. Our work considers an important open question: "How can we support the collaborative use of face editing on social platforms while protecting against unacceptable edits and reposts by others?" This is challenging because, as our user study shows, users vary widely in their definitions of which edits are (un)acceptable. Any global filter policy deployed by social platforms is unlikely to address the needs of all users, and it would also hinder the social interactions enabled by photo editing. Instead, we argue that face edit protection policies should be implemented by social platforms based on individual user preferences. When posting an original photo online, a user can choose to specify the types of face edits (dis)allowed on the photo. Social platforms use these per-photo edit policies to moderate future photo uploads, i.e., edited photos containing modifications that violate the original photo's policy are either blocked or shelved for user approval. Realizing this personalized protection, however, faces two immediate challenges: (1) how to accurately recognize the specific modifications, if any, contained in a photo; and (2) how to associate an edited photo with its original photo (and thus the edit policy). We show that these challenges can be addressed by combining highly efficient hashing-based image search with scalable semantic image comparison, and we build a prototype protector (Alethia) covering nine edit types. Evaluations using IRB-approved user studies and data-driven experiments (on 839K face photos) show that Alethia accurately recognizes edited photos that violate user policies and induces a feeling of protection in study participants. This demonstrates the initial feasibility of personalized face edit protection. We also discuss current limitations and future directions to push the concept forward.
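The hashing-based image search step above can be sketched in miniature: an edited upload is matched to its nearest stored original (and thus to that original's edit policy) by comparing compact perceptual hashes. This is an illustrative assumption of how such a lookup could work, using a simple average-hash scheme; the function and photo-id names are hypothetical, not Alethia's actual implementation.

```python
def average_hash(pixels):
    """Hash a grayscale image given as a 2-D list of 0-255 values:
    each pixel becomes one bit indicating whether it exceeds the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

def find_original(edited_hash, index, max_distance=10):
    """Return the photo id whose stored hash is closest to the edited
    photo's hash, or None if nothing lies within max_distance bits."""
    best_id, best_d = None, max_distance + 1
    for photo_id, h in index.items():
        d = hamming(edited_hash, h)
        if d < best_d:
            best_id, best_d = photo_id, d
    return best_id

# Toy usage: a 4x4 "original" image and a lightly edited copy.
original = [[10, 10, 200, 200],
            [10, 10, 200, 200],
            [200, 200, 10, 10],
            [200, 200, 10, 10]]
edited = [[12, 10, 198, 200],
          [10, 14, 200, 196],
          [200, 202, 10, 12],
          [198, 200, 12, 10]]

# The platform's index maps photo ids (hypothetical) to stored hashes.
index = {"photo_42": average_hash(original)}
match = find_original(average_hash(edited), index)
# match == "photo_42"; the platform would then check photo_42's edit policy
```

Small pixel-level edits leave the above-mean/below-mean bit pattern intact, so the edited copy hashes to its original; a real system would pair such a coarse lookup with the semantic comparison the abstract describes to decide whether the edit actually violates the policy.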
