

Creators/Authors contains: "Wisniewski, Pamela J."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be freely available during the publisher's embargo period.


  1. Viral social media challenges have erupted across multiple social media platforms. While social media users participate in prosocial challenges designed to support good causes, like the Ice Bucket Challenge, some challenges (e.g., the Cinnamon Challenge) can also be potentially dangerous. To understand the influential factors, experiences, and reflections of young adults who had participated in a viral social media challenge, we conducted interviews with 30 college students (ages 18-27). We applied behavioral contagion theory as a qualitative lens to understand whether it could help explain the factors that contributed to their participation. We found that behavioral contagion theory was useful but could not fully explain how and why young social media users engaged in viral challenges. Thematic analyses uncovered that overt social influence and intrinsic factors (i.e., social pressure, entertainment value, and attention-seeking) also played a key role in challenge participation. Additionally, we identified divergent patterns between prosocial and potentially risky social media challenges. Those who participated in prosocial challenges appeared to be more socially motivated: they saw more similarities between themselves and the individuals they observed performing the challenges and were more likely to be directly encouraged by their friends to participate. In contrast, those who performed potentially risky challenges often did not see similarities with other challenge participants, nor did they receive direct encouragement from peers; yet, half of these participants said they would not have engaged in the challenge had they been more aware of the potential for physical harm. We consider the benefits and risks that viral social media challenges present for young adults, with the intent of optimizing these interactions by mitigating risks rather than discouraging them altogether.
    Free, publicly-accessible full text available April 30, 2023
  2. We conducted a user study with 380 Android users, profiling them according to two key privacy behaviors: the number of apps installed, and the Dangerous permissions granted to those apps. We identified four unique privacy profiles: 1) Privacy Balancers (49.74% of participants), 2) Permission Limiters (28.68%), 3) App Limiters (14.74%), and 4) the Privacy Unconcerned (6.84%). App and Permission Limiters were significantly more concerned about perceived surveillance than Privacy Balancers and the Privacy Unconcerned. App Limiters had the lowest number of apps installed on their devices with the lowest intention of using apps and sharing information with them, compared to Permission Limiters who had the highest number of apps installed and reported higher intention to share information with apps. The four profiles reflect the differing privacy management strategies, perceptions, and intentions of Android users that go beyond the binary decision to share or withhold information via mobile apps.
    Free, publicly-accessible full text available April 29, 2023
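The two-signal profiling idea in the study above can be sketched as a quadrant split over the number of apps installed and the number of Dangerous permissions granted. This is only an illustrative sketch, not the study's actual profiling method: the threshold values and the mapping from quadrants to profile names are hypothetical.

```python
# Illustrative sketch only: coarse privacy-profile assignment from two
# behavioral signals. The thresholds and the quadrant-to-profile mapping
# are hypothetical, not values or methods from the study.

def assign_profile(num_apps, num_dangerous, app_cutoff=35, perm_cutoff=12):
    """Place a user in one of four quadrants, loosely echoing the
    App Limiter / Permission Limiter / Balancer / Unconcerned split."""
    few_apps = num_apps < app_cutoff
    few_perms = num_dangerous < perm_cutoff
    if few_apps and few_perms:
        return "App Limiter"          # installs little, grants little
    if not few_apps and few_perms:
        return "Permission Limiter"   # many apps, but restricts permissions
    if few_apps and not few_perms:
        return "Privacy Balancer"     # hypothetical mapping for this quadrant
    return "Privacy Unconcerned"      # many apps, many permissions granted
```

In practice such profiles came from clustering survey and device data, not fixed cutoffs; the sketch only shows how the two signals partition users.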
  3. Our research aims to highlight and alleviate the complex tensions around online safety, privacy, and smartphone usage in families so that parents and teens can work together to better manage mobile privacy and security-related risks. We developed a mobile application ("app") for Community Oversight of Privacy and Security ("CO-oPS") and had parents and teens assess whether it would be applicable for use with their families. CO-oPS is an Android app that allows a group of users to co-monitor the apps installed on one another's devices and the privacy permissions granted to those apps. We conducted a study with 19 parent-teen (ages 13-17) pairs to understand how they currently managed mobile safety and app privacy within their family and then had them install, use, and evaluate the CO-oPS app. We found that both parents and teens gave little consideration to online safety and privacy before installing new apps or granting privacy permissions. When using CO-oPS, participants liked how the app increased transparency into one another's devices in a way that facilitated communication, but they were less inclined to use features for in-app messaging or for hiding apps from one another. Key themes related to power imbalances between parents and teens surfaced that made co-management challenging. Parents were more open to collaborative oversight than teens, who felt that it was not their place to monitor their parents, even though both often believed parents lacked the technological expertise to monitor themselves. Our study sheds light on why collaborative practices for managing online safety and privacy within families may be beneficial but also quite difficult to implement in practice. We provide recommendations for overcoming these challenges based on the insights gained from our study.
    Free, publicly-accessible full text available March 30, 2023
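The co-monitoring idea behind CO-oPS can be sketched as a diff over the permissions each group member has granted. This is not the app's actual implementation; the function name and data shapes are assumptions for illustration.

```python
# Illustrative sketch (not CO-oPS's implementation): surface the dangerous
# permissions a peer has granted that I have not, per app, so a family
# group can discuss the differences. Data shapes are hypothetical.

def permission_diff(my_grants, peer_grants):
    """Both args: dict mapping app name -> set of granted dangerous
    permissions. Returns, per app, the peer's grants I don't share."""
    report = {}
    for app, perms in peer_grants.items():
        extra = perms - my_grants.get(app, set())
        if extra:
            report[app] = sorted(extra)
    return report

# Example: peer granted READ_CONTACTS to a maps app that I did not.
diff = permission_diff(
    {"maps.app": {"ACCESS_FINE_LOCATION"}},
    {"maps.app": {"ACCESS_FINE_LOCATION", "READ_CONTACTS"}},
)
# -> {"maps.app": ["READ_CONTACTS"]}
```

A real implementation would pull these grants from Android's PackageManager on each device and sync them within the group.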
  4. In this work, we present a case study on an Instagram Data Donation (IGDD) project, which is a user study and web-based platform for youth (ages 13-21) to donate and annotate their Instagram data with the goal of improving adolescent online safety. We employed human-centered design principles to create an ecologically valid dataset that will be utilized to provide insights from teens’ private social media interactions and train machine learning models to detect online risks. Our work provides practical insights and implications for Human-Computer Interaction (HCI) researchers that collect and study social media data to address sensitive problems relating to societal good.
    Free, publicly-accessible full text available April 27, 2023
  5. We collected Instagram Direct Messages (DMs) from 100 adolescents and young adults (ages 13-21) who then flagged their own conversations as safe or unsafe. We performed a mixed-method analysis of the media files shared privately in these conversations to gain human-centered insights into the risky interactions experienced by youth. Unsafe conversations ranged from unwanted sexual solicitations to mental health related concerns, and images shared in unsafe conversations tended to be of people and convey negative emotions, while those shared in regular conversations more often conveyed positive emotions and contained objects. Further, unsafe conversations were significantly shorter, suggesting that youth disengaged when they felt unsafe. Our work uncovers salient characteristics of safe and unsafe media shared in private conversations and provides the foundation to develop automated systems for online risk detection and mitigation.
    Free, publicly-accessible full text available April 27, 2023
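The conversation-level signals this abstract reports (unsafe conversations were shorter; their media skewed toward negative emotions) can be expressed as a simple feature vector for a downstream risk classifier. This is a hypothetical sketch, not the paper's pipeline; the field names are invented and an upstream media-emotion labeler is assumed.

```python
# Hypothetical feature-extraction sketch, not the study's actual method.
# Assumes an upstream labeler has tagged each shared media file as
# "positive" or "negative".

def conversation_features(messages, media_emotions):
    """messages: list of message strings in one conversation.
    media_emotions: list of 'positive'/'negative' labels for shared media."""
    negative = sum(1 for e in media_emotions if e == "negative")
    return {
        # unsafe conversations tended to be shorter in the study
        "num_messages": len(messages),
        # unsafe conversations' media skewed toward negative emotions
        "frac_negative_media": (
            negative / len(media_emotions) if media_emotions else 0.0
        ),
    }
```

Features like these would feed a classifier trained on the youth-flagged safe/unsafe labels described above.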
  6. Free, publicly-accessible full text available March 1, 2023
  7. Cyberbullying is a growing problem across social media platforms, inflicting short- and long-lasting effects on victims. To mitigate this problem, research has looked into building automated systems, powered by machine learning, to detect cyberbullying incidents or the involved actors, such as victims and perpetrators. In the past, systematic reviews have examined the approaches within this growing body of work, but with a focus on the computational aspects of the technical innovation, feature engineering, or performance optimization, without centering on the roles, beliefs, desires, or expectations of humans. In this paper, we present a human-centered systematic literature review of the past 10 years of research on automated cyberbullying detection. We analyzed 56 papers based on a three-pronged human-centeredness algorithm design framework spanning theoretical, participatory, and speculative design. We found that the past literature fell short of incorporating human-centeredness across multiple aspects, ranging from defining cyberbullying and establishing ground truth in data annotation to evaluating the performance of detection models and speculating about the usage and users of those models, including potential harms and negative consequences. Given the sensitivities of the cyberbullying experience and the deep ramifications cyberbullying incidents bear on the involved actors, we discuss takeaways on how incorporating human-centeredness in future research can aid in developing detection systems that are more practical, useful, and tuned to the diverse needs and contexts of the stakeholders.
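A deliberately naive detector makes the review's point concrete: keyword matching alone cannot capture context, intent, or the victim's perspective, which is why defining cyberbullying and establishing ground truth matter. The lexicon below is hypothetical and this is a strawman baseline, not any surveyed system.

```python
# Strawman lexicon baseline (not any surveyed system): flags a message if
# it contains enough terms from a fixed abusive-word list. The lexicon is
# hypothetical; the point is what such context-free matching misses
# (sarcasm, reclaimed slurs, targeted harassment without slurs, etc.).
ABUSIVE_TERMS = {"loser", "stupid", "ugly"}

def naive_flag(text, threshold=1):
    """Return True if the text contains >= threshold lexicon hits."""
    tokens = text.lower().split()
    hits = sum(1 for t in tokens if t.strip(".,!?") in ABUSIVE_TERMS)
    return hits >= threshold
```

Such a baseline flags "you are such a loser!" but is blind to harassment phrased without lexicon words, and it flags friendly banter that happens to use them; closing that gap is where human-centered definitions and annotation come in.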
  8. Managing digital privacy and security is often a collaborative process, where groups of individuals work together to share information and give one another advice. Yet, this collaborative process is not always reciprocal or equally shared. In many cases, individuals with more expertise help others without receiving help in return. Therefore, we studied the phenomenon of "Tech Caregiving" by surveying 20 groups (112 individuals) comprised of friends, family members, and/or co-workers who identified at least one member of their group as someone who provides informal technical support to the people they know. We found that tech caregivers reported significantly higher levels of power use and self-efficacy for digital privacy and security, compared to tech caregivees. However, caregivers and caregivees did not differ based on their self-reported community collective efficacy for collaboratively managing privacy and security together as a group. This finding demonstrates the importance of tech caregiving and community belonging in building community collective efficacy for digital privacy and security. We also found that caregivers and caregivees most often communicated via text message or phone when coordinating support, which was most frequently needed when troubleshooting or setting up new devices. Meanwhile, discussions specific to privacy and security represented only a small fraction of the issues for which participants gave or received tech care. Thus, we conclude that educating tech caregivers on how to provide privacy- and security-focused support, as well as designing technologies that facilitate such support, has the potential to create positive network effects toward the collective management of digital privacy and security.
  9. The goal of this one-day workshop is to build an active community of researchers, practitioners, and policy-makers who are jointly committed to leveraging human-centered artificial intelligence (HCAI) to make the internet a safer place for youth. This community will be founded on the principles of open innovation and human dignity to address some of the most salient safety issues of the modern-day internet, including online harassment, sexual solicitation, and the mental health of vulnerable internet users, particularly adolescents and young adults. We will partner with the Mozilla Research Foundation to launch a new open project named "MOSafely," which will serve as a platform for code library, research, and data contributions that support the mission of internet safety. During the workshop, we will discuss: 1) the types of contributions and technical standards needed to advance the state of the art in online risk detection, 2) the practical, legal, and ethical challenges that we will face, and 3) ways in which we can overcome these challenges through the use of HCAI to create a sustainable community. An end goal of creating the MOSafely community is to offer evidence-based, customizable, robust, and low-cost technologies that are accessible to the public for youth protection.