Over the past two decades, innovations powered by artificial intelligence (AI) have extended into nearly all facets of human experience. Our ethnographic research suggests that while young people sense they can't “trust” AI, many are not sure how it works or how much control they have over its growing role in their lives. In this study, we attempt to answer the following questions: 1) What can we learn about young people's understandings of AI when they produce media with and about it? 2) What are the design features of an ethics-centered pedagogy that promotes STEM engagement via AI? To answer these questions, we co-developed and documented three projects at YR Media, a national network of youth journalists and artists who create multimedia for public distribution. Participants are predominantly youth of color and those contending with economic and other barriers to full participation in STEM fields. Findings showed that a learning ecology centering the cultures and experiences of its learners, while leveraging familiar tools for critical analysis, deepened youths' understanding of AI. Our study also showed that providing opportunities for youth to produce ethics-centered interactive stories interrogating invisibilized AI functionalities, and to release those stories to the public, empowered them to creatively express their understandings of and apprehensions about AI.
Youths' Perceptions of Data Collection in Online Advertising and Social Media
This project illuminates what data youth believe online advertisers and social media companies collect about them. We situate these findings within the context of current advertising regulations and compare youth beliefs with the data social media companies report collecting in their privacy policies. Through interviews with 21 youth ages 10-17 in the United States, we learn that participants are largely aware of how their interactions on a website or app are used to inform personalized content. However, certain types of information, such as geolocation or how long data is retained, are less clear to them. We also learn what school and family factors influence youth to adopt apps and websites. This work has implications for design and policy related to companies' personal data collection and targeted advertising, especially for youth.
- PAR ID: 10417762
- Date Published:
- Journal Name: Proceedings of the ACM on Human-Computer Interaction
- Volume: 6
- Issue: CSCW2
- ISSN: 2573-0142
- Page Range / eLocation ID: 1 to 27
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Algorithmic systems help manage the governance of digital platforms featuring user-generated content, including how money from advertising on that content is distributed to its creators. However, creators producing content about disadvantaged populations have reported that these systems are biased: their content has been associated with prohibited or unsafe content, leading to what creators believed were error-prone decisions to demonetize their videos. Motivated by these reports, we present the results of 20 interviews with YouTube creators, together with a content analysis of videos, tweets, and news coverage of demonetization cases, to understand YouTubers' perceptions of demonetization affecting videos featuring disadvantaged or vulnerable populations, how creators responded to demonetization, and what kinds of tools and infrastructure support they desired. We found creators were concerned that YouTube's algorithmic system stereotypes content featuring vulnerable demographics in harmful ways, for example by labeling it "unsafe" for children or families; creators believed these demonetization errors led to a range of economic, social, and personal harms. To provide more context to these findings, we analyzed and report on the technique a few creators used to audit YouTube's algorithms to learn what could cause the demonetization of videos featuring LGBTQ people, culture, and/or social issues. In response to the varying beliefs about the causes and harms of demonetization errors, our interviewees wanted more reliable information and statistics about demonetization cases and errors, more control over their content and advertising, and better economic security.
Social service providers play a vital role in the developmental outcomes of underprivileged youth as they transition into adulthood. Educators, mental health professionals, juvenile justice officers, and child welfare caseworkers often have first-hand knowledge of the trials uniquely faced by these vulnerable youth and are charged with mitigating harmful risks such as mental health challenges, child abuse, drug use, and sex trafficking. Yet, less is known about whether or how social service providers assess and mitigate the online risk experiences of youth under their care. Therefore, as part of the National Science Foundation (NSF) I-Corps program, we conducted interviews with 37 social service providers (SSPs) who work with underprivileged youth to determine what (if any) online risks most concern them given their role in youth protection, how they assess or become aware of these online risk experiences, and whether they see value in using artificial intelligence (AI) as a potential solution for online risk detection. Overall, online sexual risks (e.g., sexual grooming and abuse) and cyberbullying were the most salient concerns across all social service domains, especially when these experiences crossed the boundary between the digital and physical worlds. Yet, SSPs had to rely heavily on youth self-reports to know whether and when online risks occurred, which required building a trusting relationship with youth; otherwise, SSPs became aware only after a formal investigation had been launched. Most SSPs therefore saw value in using AI as an early detection system for monitoring youth, but they were concerned that such a solution would not be feasible given their lack of resources to adequately respond to online incidents, their limited access to the necessary digital trace data (e.g., social media) and its context, and the risk of violating the trust relationships they had built with youth. Thus, automated risk detection systems should be designed and deployed with caution, as their implementation could cause youth to mistrust adults, thereby limiting the receipt of necessary guidance and support. We add to the bodies of research on adolescent online safety and on the benefits and challenges of leveraging algorithmic systems in the public sector.
Social media companies wield power over their users through design, policy, and their participation in public discourse. We set out to understand how companies leverage public relations to influence expectations of privacy and privacy-related norms. To interrogate the discourse produced by companies in relation to privacy, we examine the blogs associated with three major social media platforms: Facebook, Instagram (both owned by Facebook Inc.), and Snapchat. We analyze privacy-related posts using critical discourse analysis to demonstrate how these powerful entities construct narratives about users and their privacy expectations. We find that each of these platforms often makes use of discourse about "vulnerable" identities to invoke relations of power, while at the same time advancing interpretations and values that favor data capitalism. Finally, we discuss how these public narratives might influence the construction of users' own interpretations of appropriate privacy norms and conceptions of self. We contend that expectations of privacy and social norms are not simply artifacts of users' own needs and desires, but co-constructions that reflect the influence of social media companies themselves.
Although youth increasingly communicate with peers online, we know little about how private online channels play a role in providing a supportive environment for youth. To fill this gap, we asked youth to donate their Instagram Direct Messages and filtered the threads by the phrase “help me.” From this query, we analyzed 82 conversations comprising 336,760 messages donated by 42 participants. These threads often began as casual conversations among friends or lovers the participants had met offline or online. The conversations then evolved from sharing negative experiences about everyday stress (e.g., school, dating) to severe mental health disclosures (e.g., suicide). Disclosures were usually reciprocated with relatable experiences and positive peer support. We also discovered “unsupport” as a theme, where conversation members declined to give support, a unique finding in the online social support literature. We discuss the role of private, social media-based channels and their implications for design in supporting youth's mental health. Content Warning: This paper includes sensitive topics, including self-harm and suicidal ideation. Reader discretion is advised.
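As a concrete illustration of the filtering step this abstract describes, the minimal Python sketch below shows one way donated message threads could be screened for the seed phrase “help me.” The thread layout, names, and tooling here are invented for illustration; the abstract does not specify the study's actual export format or analysis pipeline.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    text: str

# Hypothetical donated-thread structure; illustrative only.
threads = {
    "thread_1": [
        Message("A", "ugh, finals are killing me"),
        Message("B", "same... can you help me study tonight?"),
    ],
    "thread_2": [
        Message("C", "lol that meme"),
        Message("D", "so good"),
    ],
}

SEED_PHRASE = "help me"

def contains_phrase(messages, phrase=SEED_PHRASE):
    """True if any message in the thread contains the seed phrase (case-insensitive)."""
    return any(phrase in m.text.lower() for m in messages)

# Keep only threads with at least one matching message as candidates
# for subsequent qualitative analysis.
candidates = {tid: msgs for tid, msgs in threads.items() if contains_phrase(msgs)}
print(sorted(candidates))  # -> ['thread_1']
```

A simple substring match like this over-selects (e.g., “help me study”), which is consistent with the study's design: the phrase query yields candidate conversations that human analysts then read in full.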