

Title: Deploying Human-Centered Machine Learning to Improve Adolescent Online Sexual Risk Detection Algorithms
As adolescents increasingly engage online, it becomes more essential to provide a safe environment for them. Although some apps and systems are available for keeping teens safer online, these approaches do not consider the needs of both parents and teens. We aim to improve adolescent online sexual risk detection algorithms. To do so, I will conduct three research studies for my dissertation: 1) a qualitative analysis of teens' posts about online sexual risks on an online peer support platform, in order to gain a deep understanding of these risks; 2) training a machine learning approach to detect sexual risks based on teens' conversations with sex offenders; and 3) developing a machine learning algorithm for detecting online sexual risks that is specialized for adolescents.
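To make the detection task in study 2 concrete, a minimal rule-based baseline might flag messages and conversations containing risk-indicative phrases. This is only an illustrative sketch: the phrase list and function names below are hypothetical assumptions, not the dissertation's actual method, which trains machine learning models on real conversation data.

```python
# Hypothetical baseline sketch for the sexual risk detection task.
# The phrase list and function names are illustrative assumptions,
# not the dissertation's actual approach.

RISK_PHRASES = {"send pics", "our secret", "don't tell"}

def flag_message(text: str) -> bool:
    """Return True if a single message contains any risk-indicative phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in RISK_PHRASES)

def flag_conversation(messages: list[str]) -> bool:
    """A conversation is flagged if any one of its messages is flagged."""
    return any(flag_message(m) for m in messages)
```

A learned classifier would replace the fixed phrase list with features induced from labeled conversations, but the interface (message-level and conversation-level decisions) stays the same.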
Award ID(s): 1827700
NSF-PAR ID: 10184749
Author(s) / Creator(s):
Date Published:
Journal Name: Deploying Human-Centered Machine Learning to Improve Adolescent Online Sexual Risk Detection Algorithms
Page Range / eLocation ID: 157 to 161
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Online sexual risks pose a serious and frequent threat to adolescents' online safety. While significant work has been done within the HCI community to understand teens' sexual experiences through public posts, we extend that research by qualitatively analyzing 156 private Instagram conversations flagged by 58 adolescents to understand the characteristics of the sexual risks they faced with strangers, acquaintances, and friends. We found that youth were often victimized by strangers through sexual solicitation/harassment, as well as through sexual spamming via text and visual media, which teens often ignored. In contrast, adolescents played mixed roles with acquaintances: they were often victims of sexual harassment, but sometimes engaged in sexting or rejected sexual requests from acquaintances. Lastly, adolescents were never solely recipients of sexual risks from friends, as they mostly participated mutually in sexting or sexual spamming. Based on these results, we provide our insights and recommendations for future researchers. Trigger Warning: This paper contains explicit language and anonymized private sexual messages. Reader discretion advised.
  2. Parental control applications are designed to help parents monitor their teens and protect them from online risks. Generally, parents are considered the primary stakeholders for these apps; therefore, the apps often emphasize increased parental control through restriction and monitoring. By taking a developmental perspective and a Value Sensitive Design approach, we explore the possibility of designing more youth-centric online safety features. We asked 39 undergraduate students in the United States to create design charrettes of parental control apps that would better represent teens as stakeholders. As emerging adults, students discussed the value tensions between teens and parents and designed features to reduce and balance these tensions. While they emphasized safety, the students also designed to improve parent-teen communication, teen autonomy and privacy, and parental support. Our research contributes to the adolescent online safety literature by presenting design ideas from emerging adults that depart from the traditional paradigm of parental control. We also make a pedagogical contribution by leveraging design charrettes as a classroom tool for engaging college students in the design of youth-centered apps. We discuss why features that support parent-teen cooperation, teen privacy, and autonomy may be more developmentally appropriate for adolescents than existing parental control app designs. 
  3. Adolescent online safety researchers have emphasized the importance of moving beyond restrictive and privacy invasive approaches to online safety, towards resilience-based approaches for empowering teens to deal with online risks independently. Unfortunately, many of the existing online safety interventions are focused on parental mediation and not contextualized to teens' personal experiences online; thus, they do not effectively cater to the unique needs of teens. To better understand how we might design online safety interventions that help teens deal with online risks, as well as when and how to intervene, we must include teens as partners in the design process and equip them with the skills needed to contribute equally to the design process. As such, we conducted User Experience (UX) bootcamps with 21 teens (ages 13-17) to first teach them important UX design skills using industry standard tools, so they could create storyboards for unsafe online interactions commonly experienced by teens and high-fidelity, interactive prototypes for dealing with these situations. Based on their storyboards, teens often encountered information breaches and sexual risks with strangers, as well as cyberbullying from acquaintances or friends. While teens often blocked or reported strangers, they struggled with responding to risks from friends or acquaintances, seeking advice from others on the best action to take. Importantly, teens did not find any of the existing ways for responding to these risks to be effective in keeping them safe. When asked to create their own design-based interventions, teens frequently envisioned nudges that occurred in real-time. Interestingly, teens more often designed for risk prevention (rather than risk coping) by focusing on nudging the risk perpetrator (rather than the victim) to rethink their actions, block harmful actions from occurring, or penalizing perpetrators for inappropriate behavior to prevent it from happening again in the future. 
Teens also designed personalized sensitivity filters that gave them the ability to manage the content they saw online. Some teens also designed personalized nudges, so that teens could receive intelligent, guided advice from the platform to help them handle online risks themselves, without intervention from their parents. Our findings highlight how teens want to address online risks at the root by putting the onus of risk prevention on those who perpetrate them, rather than on the victim. Our work is the first to leverage co-design with teens to develop novel online safety interventions that advocate for a paradigm shift from youth risk protection to promoting good digital citizenship.

  4.
    Traditional parental control applications designed to protect children and teens from online risks do so through parental restrictions and privacy-invasive monitoring. We propose a new approach to adolescent online safety that aims to strike a balance between a teen’s privacy and their online safety through active communication and fostering trust between parents and children. We designed and developed an Android “app” called Circle of Trust and conducted a mixed methods user study of 17 parent-child pairs to understand their perceptions about the app. Using a within-subjects experimental design, we found that parents and children significantly preferred our new app design over existing parental control apps in terms of perceived usefulness, ease of use, and behavioral intent to use. By applying a lens of Value Sensitive Design to our interview data, we uncovered that parents and children who valued privacy, trust, freedom, and balance of power preferred our app over traditional apps. However, those who valued transparency and control preferred the status quo. Overall, we found that our app was better suited for teens than for younger children. 
  5. We collected Instagram data from 150 adolescents (ages 13-21), including 15,547 private message conversations, of which 326 were flagged as sexually risky by participants. Based on this data, we leveraged a human-centered machine learning approach to create sexual risk detection classifiers for youth social media conversations. Our Convolutional Neural Network (CNN) and Random Forest models performed best in identifying sexual risks at the conversation level (AUC=0.88), and the CNN performed best at the message level (AUC=0.85). We also trained classifiers to detect the severity risk level (i.e., safe, low, medium-high) of a given message, with the CNN outperforming the other models (AUC=0.88). A feature analysis yielded deeper insights into the patterns found within sexually safe versus unsafe conversations. We found that contextual features (e.g., age, gender, and relationship type) and Linguistic Inquiry and Word Count (LIWC) features contributed the most to accurately detecting sexual conversations that made youth feel uncomfortable or unsafe. Our analysis provides insights into the important factors and contextual features that enhance automated detection of sexual risks within youths' private conversations. As such, we make valuable contributions to the computational risk detection and adolescent online safety literature through our human-centered approach of collecting and ground-truth coding the private social media conversations of youth for the purpose of risk classification.
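The feature engineering described in item 5 — combining contextual features such as age, gender, and relationship type with word-category counts in the style of LIWC — can be sketched as follows. This is a minimal illustration under assumed feature names, encodings, and a tiny made-up lexicon; the study itself used the actual LIWC categories and trained CNN and Random Forest classifiers on the resulting representations.

```python
# Minimal sketch of combining contextual and lexicon-count features for
# conversation-level risk classification. Feature names, encodings, and
# the tiny lexicon are illustrative assumptions; the study used LIWC
# categories with CNN and Random Forest models.

RELATIONSHIP_TYPES = ["stranger", "acquaintance", "friend"]
LEXICON = {
    "sexual": {"sexy", "nudes"},          # stand-in for a LIWC-like category
    "negemo": {"stop", "uncomfortable"},  # stand-in for negative emotion words
}

def featurize(messages, age, gender_is_female, relationship):
    """Build a flat feature vector: contextual features + category counts."""
    rel_onehot = [1.0 if relationship == r else 0.0 for r in RELATIONSHIP_TYPES]
    tokens = [t for m in messages for t in m.lower().split()]
    counts = [float(sum(t in words for t in tokens))
              for words in LEXICON.values()]
    return [float(age), float(gender_is_female)] + rel_onehot + counts
```

A vector like this could then be fed to any off-the-shelf classifier; the study's finding was that the contextual entries (the first few positions here) and the LIWC-style counts were the most predictive of conversations that made youth feel unsafe.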