Current youth online safety and risk detection solutions are mostly geared toward parental control. As HCI researchers, we recognize the importance of taking a youth-centered approach when building Artificial Intelligence (AI) tools for adolescents' online safety. Therefore, we built MOSafely, "Is that 'Sus' (youth slang for suspicious)?", a web-based risk detection assessment dashboard that lets youth (ages 13-21) assess the risks AI has identified within their online interactions (Instagram and Twitter private conversations). This demonstration showcases our novel system, which embeds risk detection algorithms for youth to evaluate and adopts a human-in-the-loop approach, using those youth evaluations to improve the quality of the underlying machine learning models.
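The human-in-the-loop cycle described above, in which youth evaluations feed back into model training, could be sketched roughly as follows. This is a minimal illustration only: the function names, keyword list, and sample messages are hypothetical stand-ins, not the actual MOSafely pipeline.

```python
# Hypothetical sketch: youth review the model's risk flags, and
# disagreements become labeled examples for retraining. All names and
# data here are invented for illustration.

def model_flags(message, risky_terms):
    """Naive stand-in for a risk detection model: flag on a keyword hit."""
    return any(term in message.lower() for term in risky_terms)

def collect_youth_corrections(messages, youth_labels, risky_terms):
    """Keep the (message, label) pairs where a youth's judgment
    disagrees with the model, for folding into retraining data."""
    corrections = []
    for msg, label in zip(messages, youth_labels):
        if model_flags(msg, risky_terms) != label:
            corrections.append((msg, label))
    return corrections

messages = ["hey, want to hang out?", "u look kinda sus ngl"]
youth_labels = [False, True]  # the youth's own risk judgments
print(collect_youth_corrections(messages, youth_labels, {"password"}))
# → [('u look kinda sus ngl', True)]
```

The cases where the youth disagrees with the model are the most informative ones to add back into training, which is the core idea of using human evaluations to improve model quality.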
A Stakeholders' Analysis of the Sociotechnical Approaches for Protecting Youth Online
Feasible and developmentally appropriate sociotechnical approaches for protecting youth from online risks have become a paramount concern among human-computer interaction research communities. Therefore, we conducted 38 interviews with entrepreneurs, IT professionals, clinicians, educators, and researchers who currently work in the space of youth online safety to understand the different sociotechnical approaches they proposed to keep youth safe online, as well as the key challenges associated with these approaches. We identified three approaches taken by these stakeholders: 1) leveraging artificial intelligence (AI)/machine learning to detect risks, 2) building security/safety tools, and 3) developing new forms of parental control software. The trade-offs between privacy and protection, along with other tensions among stakeholders (e.g., tensions toward big-tech companies), arose as major challenges, followed by the subjective nature of risk, the lack of necessary but proprietary data, and the costs of developing these technical solutions. To overcome these challenges, stakeholders suggested solutions such as building centralized and multi-disciplinary collaborations, creating sustainable business plans, prioritizing human-centered approaches, and leveraging state-of-the-art AI. Our contribution to the body of literature is evidence-based implications for the design of sociotechnical solutions to keep youth safe online.
- Award ID(s): 2329976
- PAR ID: 10564917
- Publisher / Repository: Future Advances in Information and Communication
- Date Published:
- ISSN: 978-3-031-54053-0_40
- ISBN: 978-3-031-54053-0
- Format(s): Medium: X
- Location: Online
- Sponsoring Org: National Science Foundation
More Like this
- 
Parental control applications are designed to help parents monitor their teens and protect them from online risks. Generally, parents are considered the primary stakeholders for these apps; therefore, the apps often emphasize increased parental control through restriction and monitoring. By taking a developmental perspective and a Value Sensitive Design approach, we explore the possibility of designing more youth-centric online safety features. We asked 39 undergraduate students in the United States to create design charrettes of parental control apps that would better represent teens as stakeholders. As emerging adults, students discussed the value tensions between teens and parents and designed features to reduce and balance these tensions. While they emphasized safety, the students also designed to improve parent-teen communication, teen autonomy and privacy, and parental support. Our research contributes to the adolescent online safety literature by presenting design ideas from emerging adults that depart from the traditional paradigm of parental control. We also make a pedagogical contribution by leveraging design charrettes as a classroom tool for engaging college students in the design of youth-centered apps. We discuss why features that support parent-teen cooperation, teen privacy, and autonomy may be more developmentally appropriate for adolescents than existing parental control app designs.
- 
Social service providers play a vital role in the developmental outcomes of underprivileged youth as they transition into adulthood. Educators, mental health professionals, juvenile justice officers, and child welfare caseworkers often have first-hand knowledge of the trials uniquely faced by these vulnerable youth and are charged with mitigating harmful risks, such as mental health challenges, child abuse, drug use, and sex trafficking. Yet, less is known about whether or how social service providers assess and mitigate the online risk experiences of youth under their care. Therefore, as part of the National Science Foundation (NSF) I-Corps program, we conducted interviews with 37 social service providers (SSPs) who work with underprivileged youth to determine what (if any) online risks are most concerning to them given their role in youth protection, how they assess or become aware of these online risk experiences, and whether they see value in the possibility of using artificial intelligence (AI) as a potential solution for online risk detection. Overall, online sexual risks (e.g., sexual grooming and abuse) and cyberbullying were the most salient concerns across all social service domains, especially when these experiences crossed the boundary between the digital and the physical worlds. Yet, SSPs had to rely heavily on youth self-reports to know whether and when online risks occurred, which required building a trusting relationship with youth; otherwise, SSPs became aware only after a formal investigation had been launched. Therefore, most SSPs found value in the potential of using AI as an early detection system and to monitor youth, but they were concerned that such a solution would not be feasible due to a lack of resources to adequately respond to online incidents, limited access to the necessary digital trace data (e.g., social media) and context, and concerns about violating the trust relationships they had built with youth. Thus, such automated risk detection systems should be designed and deployed with caution, as their implementation could cause youth to mistrust adults, thereby limiting the receipt of necessary guidance and support. We add to the bodies of research on adolescent online safety and on the benefits and challenges of leveraging algorithmic systems in the public sector.
- 
Instagram, one of the most popular social media platforms among youth, has recently come under scrutiny for potentially being harmful to the safety and well-being of our younger generations. Automated approaches for risk detection may be one way to help mitigate some of these risks, if such algorithms are both accurate and contextual to the types of online harms youth face on social media platforms. However, the imminent switch by Instagram to end-to-end encryption for private conversations will limit the type of data that will be available to the platform to detect and mitigate such risks. In this paper, we investigate which indicators are most helpful in automatically detecting risk in Instagram private conversations, with an eye on high-level metadata, which will still be available in the scenario of end-to-end encryption. Toward this end, we collected Instagram data from 172 youth (ages 13-21) and asked them to identify private message conversations that made them feel uncomfortable or unsafe. Our participants risk-flagged 28,725 conversations that contained 4,181,970 direct messages, including textual posts and images. Based on this rich and multimodal dataset, we tested multiple feature sets (metadata, linguistic cues, and image features) and trained classifiers to detect risky conversations. Overall, we found that the metadata features (e.g., conversation length, a proxy for participant engagement) were the best predictors of risky conversations. However, for distinguishing between risk types, the different linguistic and media cues were the best predictors. Based on our findings, we provide design implications for AI risk detection systems in the presence of end-to-end encryption. More broadly, our work contributes to the literature on adolescent online safety by moving toward more robust solutions for risk detection that directly take into account the lived risk experiences of youth.
- 
With the prevalence of risks encountered by youth online, strength-based approaches such as nudges have been recommended as potential solutions to guide teens toward safer decisions. However, most nudging interventions to date have not been designed to cater to teens' unique needs and online safety concerns. To address this gap, this study provides a comprehensive view of adolescents' feedback on online safety nudges to inform the design of more effective online safety interventions. We conducted 12 semi-structured interviews and 3 focus group sessions with 21 teens (13 - 17 years old) via Zoom to get their feedback on three types of nudge designs from two opposing perspectives (i.e., risk victim and perpetrator) and for two different online risks (i.e., Information Breaches and Cyberbullying). The teens expressed a desire for nudges to move beyond solely warning the user and instead provide a clear and effective action to take in response to the risk. They also identified key factors that affect whether a nudge is perceived as effectively addressing an online risk, including age, risk medium, risk awareness, and perceived risk severity. Finally, the teens identified several challenges with nudges, such as being easy to ignore, disruptive, untimely, and possibly escalating the risk. To address these, teens recommended clearer and contextualized warnings, risk prevention, and nudge personalization as solutions to ensure effective nudging. Overall, we recommend that online safety nudges be designed to guide victims while giving them autonomy to control their experiences, and to hold risk perpetrators accountable and prevent them from causing harm.
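The metadata-only detection finding from the Instagram study above (conversation-level features that survive end-to-end encryption, such as conversation length as a proxy for engagement) can be illustrated with a toy sketch. The feature names and the threshold rule here are invented for illustration and are not the classifiers trained in that study:

```python
# Hypothetical sketch of encryption-safe metadata features: a platform
# cannot read message content under end-to-end encryption, but it can
# still observe conversation-level statistics like these. The threshold
# below is an invented placeholder, not a value from the study.

def metadata_features(conversation):
    """Derive content-free metadata features from a list of messages."""
    lengths = [len(m) for m in conversation]
    return {
        "n_messages": len(conversation),
        "mean_msg_len": sum(lengths) / len(lengths) if lengths else 0.0,
    }

def flag_risky(conversation, min_messages=50):
    """Toy threshold rule: unusually long conversations get a closer look."""
    return metadata_features(conversation)["n_messages"] >= min_messages

short_convo = ["hi", "hello"]
long_convo = ["msg"] * 120
print(flag_risky(short_convo), flag_risky(long_convo))  # → False True
```

A real system would feed such features into a trained classifier rather than a fixed threshold; the point of the sketch is only that these signals require no access to message content.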