Title: Perceptions of Retrospective Edits, Changes, and Deletion on Social Media
Many social media sites permit users to delete, edit, anonymize, or otherwise modify past posts. These mechanisms enable users to protect their privacy, but also to essentially change the past. We investigate perceptions of the necessity and acceptability of these mechanisms. Drawing on boundary-regulation theories of privacy, we first identify how users who reshared or responded to a post could be impacted by its retrospective modification. These mechanisms can cause boundary turbulence by recontextualizing past content and limiting accountability. In contrast, not permitting modification can lessen privacy and perpetuate harms of regrettable content. To understand how users perceive these mechanisms, we conducted 15 semi-structured interviews. Participants deemed retrospective modification crucial for fixing past mistakes. Nonetheless, they worried about the potential for deception through selective changes or removal. Participants were aware retrospective modification impacts others, yet felt these impacts could be minimized through context-aware usage of markers and proactive notifications.
Award ID(s):
1801663
PAR ID:
10227477
Date Published:
Journal Name:
Proceedings of the Fifteenth International AAAI Conference on Web and Social Media (ICWSM '21)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Online archives, including social media and cloud storage, store vast troves of personal data accumulated over many years. Recent work suggests that users feel the need to retrospectively manage security and privacy for this huge volume of content. However, few mechanisms and systems help these users complete this daunting task. To that end, we propose the creation of usable retrospective data management mechanisms, outlining our vision for a possible architecture to address this challenge. 
  2. Furnell, Steven (Ed.)
    A huge amount of personal and sensitive data is shared on Facebook, which makes it a prime target for attackers. Adversaries can exploit third-party applications connected to a user’s Facebook profile (i.e., Facebook apps) to gain access to this personal information. Users’ lack of knowledge and the varying privacy policies of these apps make them further vulnerable to information leakage. However, little has been done to identify mismatches between users’ perceptions and the privacy policies of Facebook apps. We address this challenge in our work. We conducted a lab study with 31 participants, where we collected data on how they share information on Facebook, their Facebook-related security and privacy practices, and their perceptions of the privacy aspects of 65 frequently used Facebook apps in terms of data collection, sharing, and deletion. We then compared participants’ perceptions with the privacy policy of each reported app. Participants also reported their expectations about the types of information that should not be collected or shared by any Facebook app. Our analysis reveals significant mismatches between users’ privacy perceptions and reality (i.e., the privacy policies of Facebook apps): we identified over-optimism not only in users’ perceptions of information collection, but also in their self-efficacy in protecting their information on Facebook, despite their having experienced negative incidents in the past. To the best of our knowledge, this is the first study of the gap between users’ privacy perceptions around Facebook apps and the reality. The findings from this study offer directions for future research to address that gap by designing usable, effective, and personalized privacy notices that help users make informed decisions about using Facebook apps.
  3. Mobile Augmented Reality (MAR) is a portable and powerful technology that integrates digital content, e.g., 3D virtual objects, into the physical world. It has been adopted for purposes such as shopping, entertainment, and gaming, and is expected to grow at a tremendous rate in the upcoming years. Unfortunately, the applications that implement MAR, hereby referred to as MAR-Apps, bear security issues that have manifested in worldwide incidents such as robberies, leading authorities to ban MAR-Apps at specific locations. Existing problems with MAR-Apps can be classified into three categories: first, Space Invasion, the intrusive modification through MAR of sensitive spaces, e.g., hospitals, memorials, etc.; second, Space Affectation, the degradation of users' experience via interaction with undesirable MAR or malicious entities; finally, Privacy Leaks, which arise when MAR-Apps mishandle sensitive data. To alleviate these concerns, we present an approach for Policy-Governed MAR-Apps, which allows end-users to fully control under what circumstances, e.g., their presence inside a given sensitive space, digital content may be displayed by MAR-Apps. Through SpaceMediator, a proof-of-concept MAR-App that imitates the well-known and successful MAR-App Pokemon GO, we evaluated our approach in a user study with 40 participants, who recognized and prevented the issues just described with success rates as high as 92.50%. Furthermore, there is strong interest in Policy-Governed MAR-Apps: 87.50% of participants agreed with the approach, and 82.50% would use it to implement content-based restrictions in MAR-Apps. These promising results encourage the adoption of our solution in future MAR-Apps.
  4. How social media platforms could fairly conduct content moderation is gaining attention from society at large. Researchers from HCI and CSCW have investigated whether certain factors could affect how users perceive moderation decisions as fair or unfair. However, little attention has been paid to unpacking or elaborating on the formation processes of users' perceived (un)fairness from their moderation experiences, especially users who monetize their content. By interviewing 21 for-profit YouTubers (i.e., video content creators), we found three primary ways through which participants assess moderation fairness, including equality across their peers, consistency across moderation decisions and policies, and their voice in algorithmic visibility decision-making processes. Building upon the findings, we discuss how our participants' fairness perceptions demonstrate a multi-dimensional notion of moderation fairness and how YouTube implements an algorithmic assemblage to moderate YouTubers. We derive translatable design considerations for a fairer moderation system on platforms affording creator monetization. 
  5. People who are blind share their images and videos with companies that provide visual assistance technologies (VATs) to gain access to information about their surroundings. A challenge is that people who are blind cannot independently validate the content of the images and videos before they share them, and their visual data commonly contains private content. We examine privacy concerns for blind people who share personal visual data with VAT companies that provide descriptions authored by humans or artificial intelligence (AI). We first interviewed 18 people who are blind about their perceptions of privacy when using both types of VATs. Then we asked the participants to rate 21 types of image content according to their level of privacy concern if the information was shared knowingly versus unknowingly with human- or AI-powered VATs. Finally, we analyzed what information VAT companies communicate to users about their collection and processing of users’ personal visual data through their privacy policies. Our findings have implications for the development of VATs that safeguard blind users’ visual privacy, and our methods may be useful for other camera-based technology companies and their users.