Search for: All records

Creators/Authors contains: "Wisniewski, Pamela J."

  1. Adolescent online safety research has largely focused on designing interventions for teens, yet few evaluations demonstrate effective online safety solutions. It is challenging to evaluate such solutions without simulating an environment that mimics teens' online risks. To address this gap, we conducted focus groups with 14 teens to co-design realistic online risk scenarios and their associated user personas, which can be implemented for an ecologically valid evaluation of interventions. We found that teens considered the characteristics of the risky user to be important: they designed personas with traits that aligned with the risk type, made the personas more believable and authentic, and attracted teens through materialistic content. Teens also redesigned the risky scenarios to be subtler in information breaching, harsher in cyberbullying, and more convincing in tricking the teen. Overall, this work provides an in-depth understanding of the types of bad actors and risky scenarios teens design for realistic research experimentation.
    Free, publicly-accessible full text available October 14, 2024
  2. As part of a Youth Advisory Board (YAB) of teens, a longitudinal and interactive program for engaging teens in adolescent online safety research, we used an Asynchronous Remote Community (ARC) method with seven teens to explore their social media usage and their perspectives on privacy on social media. Our teen participants' preferred social media platforms spanned a spectrum of privacy levels, and their preferences varied with their goals, such as content viewing and socializing. They recognized the privacy risks they could encounter on social media and therefore actively used the privacy features afforded by platforms to stay safe while meeting their goals. In addition, our teen participants designed solutions that can help users exercise more granular control over what information on their accounts is shared with which groups of users. Our findings highlight the need for researchers and social media developers to work with teens to provide teen-centric solutions for safer experiences on social media.
    Free, publicly-accessible full text available October 14, 2024
  3. Artificial intelligence (AI) underpins virtually every experience we have, from search and social media to generative AI and immersive social virtual reality (SVR). For Generation Z, there is no "before AI." As adults, we must humble ourselves to the notion that AI is shaping youths' world in ways we don't understand, and we need to listen to youth about their lived experiences. We invite researchers from academia and industry to participate in a workshop with youth activists to set the agenda for research into how AI-driven emerging technologies affect youth and how to address these challenges. This reflective workshop will amplify youth voices and empower youth and researchers to set a shared agenda. As part of the workshop, youth activists will participate in a panel and steer the conversation around the agenda for future research. All attendees will participate in group agenda-setting activities to reflect on their experiences with AI technologies and consider ways to tackle these challenges.
    Free, publicly-accessible full text available October 14, 2024
  4. Adolescent online safety researchers have emphasized the importance of moving beyond restrictive and privacy-invasive approaches to online safety, towards resilience-based approaches that empower teens to deal with online risks independently. Unfortunately, many existing online safety interventions focus on parental mediation and are not contextualized to teens' personal experiences online; thus, they do not effectively cater to the unique needs of teens. To better understand how we might design online safety interventions that help teens deal with online risks, as well as when and how to intervene, we must include teens as partners in the design process and equip them with the skills needed to contribute to it equally. As such, we conducted User Experience (UX) bootcamps with 21 teens (ages 13-17) to first teach them important UX design skills using industry-standard tools, so they could create storyboards of unsafe online interactions commonly experienced by teens and high-fidelity, interactive prototypes for dealing with these situations. In their storyboards, teens most often depicted information breaches and sexual risks from strangers, as well as cyberbullying from acquaintances or friends. While teens often blocked or reported strangers, they struggled to respond to risks from friends or acquaintances, seeking advice from others on the best action to take. Importantly, teens did not find any of the existing ways of responding to these risks effective in keeping them safe. When asked to create their own design-based interventions, teens frequently envisioned nudges that occurred in real time. Interestingly, teens more often designed for risk prevention (rather than risk coping) by nudging the risk perpetrator (rather than the victim) to rethink their actions, blocking harmful actions from occurring, or penalizing perpetrators for inappropriate behavior to prevent it from happening again. Teens also designed personalized sensitivity filters that give teens the ability to manage the content they see online. Some teens also designed personalized nudges, so that teens could receive intelligent, guided advice from the platform to help them handle online risks themselves without intervention from their parents. Our findings highlight how teens want to address online risks at the root by putting the onus of risk prevention on those who perpetrate them rather than on the victim. Our work is the first to leverage co-design with teens to develop novel online safety interventions that advocate for a paradigm shift from youth risk protection to promoting good digital citizenship.
  5. We conducted 26 co-design interviews with 50 smarthome device owners to understand the perceived benefits, drawbacks, and design considerations for developing a smarthome system that facilitates co-monitoring with emergency contacts who live outside one's home. Participants felt that such a system would help ensure their personal safety, safeguard them from material loss, and give them peace of mind by ensuring quick response and verifying potential threats. However, they also expressed concerns regarding privacy, overburdening others, and other potential threats, such as unauthorized access and security breaches. To alleviate these concerns, participants designed flexible and granular access controls and fail-safe backup features. Our study reveals why peer-based co-monitoring of smarthomes for emergencies may be beneficial but also difficult to implement. Based on the insights gained from our study, we provide recommendations for designing technologies that facilitate such co-monitoring while mitigating its risks.
  6. We conducted a user study with 19 parent-teen dyads to understand the perceived benefits and drawbacks of using a mobile app that allows them to co-manage mobile privacy, safety, and security within their families. While the primary goal of the study was to understand the use case as it pertained to parents and teens, an emerging finding from our study was that participants found value in extending app use to other family members (siblings, cousins, and grandparents). Participants felt that it would help bring the necessary expertise into their immediate family network and help protect the older adults and children of the family from privacy and security risks. However, participants expressed that co-monitoring by extended family members might cause tensions in their families, creating interpersonal conflicts. To alleviate these concerns, participants suggested more control over the privacy features to facilitate sharing their installed apps with only trusted family members. 
  7. Although youth increasingly communicate with peers online, we know little about the role private online channels play in providing a supportive environment for youth. To fill this gap, we asked youth to donate their Instagram Direct Messages and filtered them by the phrase "help me." From this query, we analyzed 82 conversations comprising 336,760 messages donated by 42 participants. These threads often began as casual conversations among friends or lovers they had met offline or online. The conversations evolved from sharing negative experiences about everyday stress (e.g., school, dating) into severe mental health disclosures (e.g., suicide). Disclosures were usually reciprocated with relatable experiences and positive peer support. We also discovered unsupport as a theme, where conversation members refused to give support, a unique finding in the online social support literature. We discuss the role of social media-based private channels and their implications for design in supporting youth's mental health. Content Warning: This paper includes sensitive topics, including self-harm and suicide ideation. Reader discretion is advised.
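    To make the filtering step concrete, here is a minimal sketch of querying donated message threads for the phrase. It assumes the donation arrives as JSON; the thread_id/messages/text field names are hypothetical, not the study's actual donation schema.

        import json

        QUERY = "help me"

        def flag_threads(donation_path):
            """Return the conversation threads that contain the query phrase."""
            with open(donation_path, encoding="utf-8") as f:
                threads = json.load(f)  # assumed: list of {"thread_id", "messages"}
            return [
                t for t in threads
                if any(QUERY in m.get("text", "").lower() for m in t["messages"])
            ]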
  8. We collected Instagram data from 150 adolescents (ages 13-21), which included 15,547 private message conversations, of which 326 conversations were flagged as sexually risky by participants. Based on these data, we leveraged a human-centered machine learning approach to create sexual risk detection classifiers for youth social media conversations. Our Convolutional Neural Network (CNN) and Random Forest models outperformed the other approaches in identifying sexual risks at the conversation level (AUC=0.88), and the CNN performed best at the message level (AUC=0.85). We also trained classifiers to detect the severity of risk (i.e., safe, low, medium-high) of a given message, with the CNN outperforming the other models (AUC=0.88). A feature analysis yielded deeper insights into patterns found within sexually safe versus unsafe conversations. We found that contextual features (e.g., age, gender, and relationship type) and Linguistic Inquiry and Word Count (LIWC) features contributed the most to accurately detecting sexual conversations that made youth feel uncomfortable or unsafe. Our analysis provides insights into the important factors and contextual features that enhance automated detection of sexual risks within youths' private conversations. As such, we make valuable contributions to the computational risk detection and adolescent online safety literature through our human-centered approach of collecting and ground-truth coding private social media conversations of youth for the purpose of risk classification.
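    As a rough illustration of the conversation-level classification described above, here is a minimal scikit-learn sketch. The file name, column names, and feature choices are assumptions for illustration only; the study's actual contextual and LIWC feature sets are far richer, and the CNN variant is not shown.

        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Hypothetical feature table: one row per conversation, with
        # contextual features and LIWC-style linguistic counts.
        df = pd.read_csv("conversation_features.csv")
        X = df[["age", "relationship_stranger", "liwc_sexual", "liwc_negemo"]]
        y = df["sexually_risky"]  # 1 = flagged as sexually risky by the participant

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, stratify=y, random_state=42)
        clf = RandomForestClassifier(n_estimators=300, random_state=42)
        clf.fit(X_train, y_train)

        # Evaluate with the same metric the abstract reports (AUC).
        print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))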
  9. Instagram, one of the most popular social media platforms among youth, has recently come under scrutiny for potentially being harmful to the safety and well-being of our younger generations. Automated approaches to risk detection may be one way to help mitigate some of these risks, provided such algorithms are both accurate and contextual to the types of online harms youth face on social media platforms. However, Instagram's imminent switch to end-to-end encryption for private conversations will limit the type of data available to the platform for detecting and mitigating such risks. In this paper, we investigate which indicators are most helpful in automatically detecting risk in Instagram private conversations, with an eye on high-level metadata, which will still be available under end-to-end encryption. Toward this end, we collected Instagram data from 172 youth (ages 13-21) and asked them to identify private message conversations that made them feel uncomfortable or unsafe. Our participants risk-flagged 28,725 conversations containing 4,181,970 direct messages, including textual posts and images. Based on this rich and multimodal dataset, we tested multiple feature sets (metadata, linguistic cues, and image features) and trained classifiers to detect risky conversations. Overall, we found that metadata features (e.g., conversation length, a proxy for participant engagement) were the best predictors of risky conversations. However, for distinguishing between risk types, the different linguistic and media cues were the best predictors. Based on our findings, we provide design implications for AI risk detection systems in the presence of end-to-end encryption. More broadly, our work contributes to the literature on adolescent online safety by moving toward more robust solutions for risk detection that directly take into account the lived risk experiences of youth.
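    To illustrate what metadata-only detection might look like under end-to-end encryption, here is a hedged sketch. The column names and the logistic-regression choice are assumptions for illustration, not the paper's actual pipeline.

        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Hypothetical per-conversation metadata: the kind of signal a platform
        # would retain once message content is end-to-end encrypted.
        df = pd.read_csv("conversation_metadata.csv")
        X = df[["n_messages", "n_participants", "duration_days", "media_ratio"]]
        y = df["risk_flagged"]

        # Conversation length (n_messages) proxies participant engagement,
        # which the study found to be among the strongest predictors.
        scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                                 scoring="roc_auc", cv=5)
        print("mean AUC:", scores.mean())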
  10. Social service providers play a vital role in the developmental outcomes of underprivileged youth as they transition into adulthood. Educators, mental health professionals, juvenile justice officers, and child welfare caseworkers often have first-hand knowledge of the trials uniquely faced by these vulnerable youth and are charged with mitigating harmful risks such as mental health challenges, child abuse, drug use, and sex trafficking. Yet, less is known about whether and how social service providers assess and mitigate the online risk experiences of youth under their care. Therefore, as part of the National Science Foundation (NSF) I-Corps program, we conducted interviews with 37 social service providers (SSPs) who work with underprivileged youth to determine what (if any) online risks are most concerning to them given their role in youth protection, how they assess or become aware of these online risk experiences, and whether they see value in using artificial intelligence (AI) as a potential solution for online risk detection. Overall, online sexual risks (e.g., sexual grooming and abuse) and cyberbullying were the most salient concerns across all social service domains, especially when these experiences crossed the boundary between the digital and physical worlds. Yet, SSPs had to rely heavily on youth self-reports to know whether and when online risks occurred, which required building a trusting relationship with youth; otherwise, SSPs became aware only after a formal investigation had been launched. Therefore, most SSPs found value in the potential of AI as an early detection system for monitoring youth, but they were concerned that such a solution would not be feasible, owing to a lack of resources to adequately respond to online incidents, limited access to the necessary digital trace data (e.g., social media) and its context, and concerns about violating the trust relationships they had built with youth. Thus, such automated risk detection systems should be designed and deployed with caution, as their implementation could cause youth to mistrust adults, thereby limiting the receipt of necessary guidance and support. We add to the bodies of research on adolescent online safety and on the benefits and challenges of leveraging algorithmic systems in the public sector.