Online sexual risks pose a serious and frequent threat to adolescents’ online safety. While significant work has been done within the HCI community to understand teens’ sexual experiences through public posts, we extend this research by qualitatively analyzing 156 private Instagram conversations flagged by 58 adolescents to understand the characteristics of the sexual risks they faced with strangers, acquaintances, and friends. We found that youth were often victimized by strangers through sexual solicitation/harassment as well as sexual spamming via text and visual media, which they frequently ignored. In contrast, adolescents played mixed roles with acquaintances: they were often victims of sexual harassment, but sometimes engaged in sexting with them or rejected their sexual requests. Lastly, adolescents were never recipients of sexual risks from friends, as they mostly participated mutually in sexting or sexual spamming. Based on these results, we provide our insights and recommendations for future researchers. Trigger Warning: This paper contains explicit language and anonymized private sexual messages. Reader discretion advised.
Deploying Human-Centered Machine Learning to Improve Adolescent Online Sexual Risk Detection Algorithms
As adolescents' engagement online increases, it becomes more essential to provide a safe environment for them. Although some apps and systems are available for keeping teens safer online, these approaches do not adequately consider the needs of parents and teens. We would like to improve adolescent online sexual risk detection algorithms. To do so, I will conduct three research studies for my dissertation: 1) a qualitative analysis of teens' posts about online sexual risks on an online peer support platform, to gain a deep understanding of these risks; 2) training a machine learning approach to detect sexual risks based on teens' conversations with sex offenders; and 3) developing a machine learning algorithm for detecting online sexual risks that is specialized for adolescents.
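Studies 2 and 3 center on training text classifiers over conversation data. As a purely illustrative sketch of that kind of pipeline (not the dissertation's actual approach or dataset), the snippet below trains a baseline risk-message classifier with scikit-learn; the `messages` and `labels` placeholders stand in for a hypothetical annotated conversation corpus.

```python
# Minimal, illustrative sketch of a text-based risk classifier.
# NOTE: not the dissertation's pipeline; the data below is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline

# Hypothetical annotated messages: 1 = sexual risk, 0 = safe.
messages = [
    "please send me a photo of yourself",
    "did you finish the homework",
    "don't tell your parents we talked",
    "see you at soccer practice",
]
labels = [1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    messages, labels, test_size=0.25, random_state=42)

# TF-IDF features feeding a logistic-regression classifier.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test), zero_division=0))
```

In practice, a real study would use a much larger annotated corpus and likely stronger models; this sketch only shows the general shape of a supervised risk-detection pipeline.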
- Award ID(s): 1827700
- PAR ID: 10184749
- Date Published:
- Journal Name: Deploying Human-Centered Machine Learning to Improve Adolescent Online Sexual Risk Detection Algorithms
- Page Range / eLocation ID: 157 to 161
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Parental control applications are designed to help parents monitor their teens and protect them from online risks. Generally, parents are considered the primary stakeholders for these apps; therefore, the apps often emphasize increased parental control through restriction and monitoring. By taking a developmental perspective and a Value Sensitive Design approach, we explore the possibility of designing more youth-centric online safety features. We asked 39 undergraduate students in the United States to create design charrettes of parental control apps that would better represent teens as stakeholders. As emerging adults, students discussed the value tensions between teens and parents and designed features to reduce and balance these tensions. While they emphasized safety, the students also designed features to improve parent-teen communication, teen autonomy and privacy, and parental support. Our research contributes to the adolescent online safety literature by presenting design ideas from emerging adults that depart from the traditional paradigm of parental control. We also make a pedagogical contribution by leveraging design charrettes as a classroom tool for engaging college students in the design of youth-centered apps. We discuss why features that support parent-teen cooperation, teen privacy, and autonomy may be more developmentally appropriate for adolescents than existing parental control app designs.
- Our limited knowledge about what teens do online and our inability to protect them from harm evoke a sense of fear that makes us prone to “risk discourse.” However, this mindset overshadows the potential benefits of youth engaging online and constrains our ability to design online safety mechanisms that are developmentally appropriate for empowering adolescents to become resilient to risks. Our goal in attending this workshop is to find actionable ways to incorporate new asset-based practices that prioritize teens’ strengths and capacities to improve adolescent online safety.
- Future online safety technologies should consider the privacy needs of adolescents (ages 13-17) and support their ability to self-regulate their online behaviors and navigate online risks. To do this, adolescent online safety researchers and practitioners must shift towards solutions that are more teen-centric by designing privacy-preserving online safety solutions for teens. In this paper, we discuss privacy challenges we have encountered in conducting adolescent online safety research. We discuss teens' privacy concerns regarding sharing their private social media data with researchers and potentially taking part in a user study where they share some of this information with their parents. Our research emphasizes a need for more privacy-preserving interventions for teens.
- Adolescent online safety researchers have emphasized the importance of moving beyond restrictive and privacy-invasive approaches to online safety, towards resilience-based approaches for empowering teens to deal with online risks independently. Unfortunately, many of the existing online safety interventions are focused on parental mediation and are not contextualized to teens' personal experiences online; thus, they do not effectively cater to the unique needs of teens. To better understand how we might design online safety interventions that help teens deal with online risks, as well as when and how to intervene, we must include teens as partners in the design process and equip them with the skills needed to contribute equally to it. As such, we conducted User Experience (UX) bootcamps with 21 teens (ages 13-17) to first teach them important UX design skills using industry-standard tools, so they could create storyboards for unsafe online interactions commonly experienced by teens and high-fidelity, interactive prototypes for dealing with these situations. Based on their storyboards, teens often encountered information breaches and sexual risks with strangers, as well as cyberbullying from acquaintances or friends. While teens often blocked or reported strangers, they struggled with responding to risks from friends or acquaintances, seeking advice from others on the best action to take. Importantly, teens did not find any of the existing ways of responding to these risks effective in keeping them safe. When asked to create their own design-based interventions, teens frequently envisioned nudges that occurred in real time. Interestingly, teens more often designed for risk prevention (rather than risk coping) by focusing on nudging the risk perpetrator (rather than the victim) to rethink their actions, blocking harmful actions from occurring, or penalizing perpetrators for inappropriate behavior to deter it in the future. Teens also designed personalized sensitivity filters that give teens the ability to manage the content they see online. Some teens also designed personalized nudges, so that teens could receive intelligent, guided advice from the platform that would help them handle online risks themselves without intervention from their parents. Our findings highlight how teens want to address online risks at the root by putting the onus of risk prevention on those who perpetrate them, rather than on the victim. Our work is the first to leverage co-design with teens to develop novel online safety interventions that advocate for a paradigm shift from youth risk protection to promoting good digital citizenship.
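One of the design ideas in the last abstract above, the personalized sensitivity filter, can be made concrete with a small sketch. The class, keyword weights, and threshold below are illustrative assumptions only, not a design produced by the teen participants or described in any of the cited studies.

```python
# Hypothetical sketch of a personalized sensitivity filter for incoming messages.
# The phrase list, scoring rule, and thresholds are illustrative assumptions only.
from dataclasses import dataclass

RISKY_PHRASES = {"send me a pic": 0.9, "don't tell": 0.7, "meet up alone": 0.8}

@dataclass
class SensitivityFilter:
    # threshold in [0, 1]; lower values hide more content (a stricter filter).
    threshold: float = 0.5
    hidden_count: int = 0

    def score(self, message: str) -> float:
        """Return the highest risk weight of any flagged phrase in the message."""
        text = message.lower()
        return max((w for phrase, w in RISKY_PHRASES.items() if phrase in text), default=0.0)

    def allow(self, message: str) -> bool:
        """Hide the message (and count it) if its risk score exceeds the user's threshold."""
        if self.score(message) > self.threshold:
            self.hidden_count += 1
            return False
        return True

# Example: a user who chooses a fairly strict personal setting.
user_filter = SensitivityFilter(threshold=0.6)
print(user_filter.allow("hey, want to meet up alone later?"))  # False (hidden)
print(user_filter.allow("see you at practice tomorrow"))       # True (shown)
```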