Title: How Language Formality in Security and Privacy Interfaces Impacts Intended Compliance
Strong end-user security practices benefit both the user and hosting platform, but it is not well understood how companies communicate with their users to encourage these practices. This paper explores whether web companies and their platforms use different levels of language formality in these communications and tests the hypothesis that higher language formality leads to users’ increased intention to comply. We contribute a dataset and systematic analysis of 1,817 English language strings in web security and privacy interfaces across 13 web platforms, showing strong variations in language. An online study with 512 participants further demonstrated that people perceive differences in the language formality across platforms and that a higher language formality is associated with higher self-reported intention to comply. Our findings suggest that formality can be an important factor in designing effective security and privacy prompts. We discuss implications of these results, including how to balance formality with platform language style. Beyond being the first work to analyze language formality in user security, this study provides valuable insights into how platforms can best communicate with users about account security.
Award ID(s):
2006104
PAR ID:
10423708
Date Published:
Journal Name:
Human factors in computing systems
ISSN:
1062-9432
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Privacy scholarship has shown how norms of appropriate information flow and information regulatory processes vary according to environment, which change as the environment changes, including through the introduction of new technologies. This paper describes findings from a qualitative research study that examines practices and perceptions of privacy in Cambodia as the population rapidly moves into an online environment (specifically Facebook, the most popular Internet tool in Cambodia today). We empirically demonstrate how the concept of privacy differs across cultures and show how the Facebook platform, as it becomes popular worldwide, catalyzes change in norms of information regulation. We discuss how the localization of transnational technology platforms provides a key site in which to investigate changing cultural ideas about privacy, and to discover misalignments between different expectations for information flow. Finally, we explore ways that insufficient localization effort by transnational technology companies puts some of the most marginalized users at disproportionate information disclosure risk when using new Internet tools, and offer some pragmatic suggestions for how such companies could improve privacy tools for users who are far, geographically or culturally, from where the tools are designed.
  2. User reporting is an essential component of content moderation on many online platforms--in particular, on end-to-end encrypted (E2EE) messaging platforms where platform operators cannot proactively inspect message contents. However, users' privacy concerns when considering reporting may impede the effectiveness of this strategy in regulating online harassment. In this paper, we conduct interviews with 16 users of E2EE platforms to understand users' mental models of how reporting works and their resultant privacy concerns and considerations surrounding reporting. We find that users expect platforms to store rich longitudinal reporting datasets, recognizing both their promise for better abuse mitigation and the privacy risk that platforms may exploit or fail to protect them. We also find that users have preconceptions about the respective capabilities and risks of moderators at the platform versus community level--for instance, users trust platform moderators more to not abuse their power but think community moderators have more time to attend to reports. These considerations, along with perceived effectiveness of reporting and how to provide sufficient evidence while maintaining privacy, shape how users decide whether, to whom, and how much to report. We conclude with design implications for a more privacy-preserving reporting system on E2EE messaging platforms. 
  3. Mobile and web apps are increasingly relying on the data generated or provided by users, such as from their uploaded documents and images. Unfortunately, those apps may raise significant user privacy concerns. Specifically, to train or adapt their models for accurately processing huge amounts of data continuously collected from millions of app users, app or service providers have widely adopted the approach of crowdsourcing for recruiting crowd workers to manually annotate or transcribe the sampled ever-changing user data. However, when users' data are uploaded through apps and then become widely accessible to hundreds of thousands of anonymous crowd workers, many human-in-the-loop related privacy questions arise concerning both the app user community and the crowd worker community. In this paper, we propose to investigate the privacy risks brought by this significant trend of large-scale crowd-powered processing of app users' data generated in their daily activities. We consider the representative case of receipt scanning apps that have millions of users, and focus on the corresponding receipt transcription tasks that commonly appear on crowdsourcing platforms. We design and conduct an app user survey study (n=108) to explore how app users perceive privacy in the context of using receipt scanning apps. We also design and conduct a crowd worker survey study (n=102) to explore crowd workers' experiences on receipt and other types of transcription tasks as well as their attitudes towards such tasks. Overall, we found that most app users and crowd workers expressed strong concerns about the potential privacy risks to receipt owners, and strongly agreed on the need to protect receipt owners' privacy. Our work provides insights on app users' potential privacy risks in crowdsourcing, and highlights the need and challenges for protecting third-party users' privacy on crowdsourcing platforms.
We have responsibly disclosed our findings to the related crowdsourcing platform and app providers.
  4. Social media companies wield power over their users through design, policy, and through their participation in public discourse. We set out to understand how companies leverage public relations to influence expectations of privacy and privacy-related norms. To interrogate the discourse productions of companies in relation to privacy, we examine the blogs associated with three major social media platforms: Facebook, Instagram (both owned by Facebook Inc.), and Snapchat. We analyze privacy-related posts using critical discourse analysis to demonstrate how these powerful entities construct narratives about users and their privacy expectations. We find that each of these platforms often makes use of discourse about "vulnerable" identities to invoke relations of power, while at the same time advancing interpretations and values that favor data capitalism. Finally, we discuss how these public narratives might influence the construction of users' own interpretations of appropriate privacy norms and conceptions of self. We contend that expectations of privacy and social norms are not simply artifacts of users' own needs and desires, but co-constructions that reflect the influence of social media companies themselves.
  5.
    Content moderation is a critical service performed by a variety of people on social media, protecting users from offensive or harmful content by reviewing and removing either the content or the perpetrator. These moderators fall into one of two categories: employees or volunteers. Prior research has suggested that there are differences in the effectiveness of these two types of moderators, with the more transparent user-based moderation being useful for educating users. However, direct comparisons between commercially-moderated and user-moderated platforms are rare, and apart from the difference in transparency, we still know little about what other disparities in user experience these two moderator types may create. To explore this, we conducted cross-platform surveys of over 900 users of commercially-moderated (Facebook, Instagram, Twitter, and YouTube) and user-moderated (Reddit and Twitch) social media platforms. Our results indicated that although user-moderated platforms did seem to be more transparent than commercially-moderated ones, this did not lead to user-moderated platforms being perceived as less toxic. In addition, commercially-moderated platform users want companies to take more responsibility for content moderation than they currently do, while user-moderated platform users want designated moderators and those who post on the site to take more responsibility. Across platforms, users seem to feel powerless when it comes to content moderation and want to be taken care of rather than engaging in moderation themselves.