
Search for: All records

Creators/Authors contains: "Razi, Afsaneh"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Online sexual risks pose a serious and frequent threat to adolescents’ online safety. While significant work has been done within the HCI community to understand teens’ sexual experiences through public posts, we extend this research by qualitatively analyzing 156 private Instagram conversations flagged by 58 adolescents to understand the characteristics of the sexual risks they faced with strangers, acquaintances, and friends. We found that youth are often victimized by strangers through sexual solicitation/harassment as well as sexual spamming via text and visual media, which youth often ignore. In contrast, adolescents played mixed roles with acquaintances: they were often victims of sexual harassment, but sometimes engaged in sexting or rejected sexual requests from acquaintances. Lastly, adolescents were never recipients of sexual risks from their friends, with whom they mostly mutually participated in sexting or sexual spamming. Based on these results, we provide our insights and recommendations for future researchers. Trigger Warning: This paper contains explicit language and anonymized private sexual messages. Reader discretion is advised.
    Free, publicly-accessible full text available November 8, 2023
  2. In this work, we present a case study on an Instagram Data Donation (IGDD) project, which is a user study and web-based platform for youth (ages 13-21) to donate and annotate their Instagram data with the goal of improving adolescent online safety. We employed human-centered design principles to create an ecologically valid dataset that will be utilized to provide insights from teens’ private social media interactions and train machine learning models to detect online risks. Our work provides practical insights and implications for Human-Computer Interaction (HCI) researchers that collect and study social media data to address sensitive problems relating to societal good.
    Free, publicly-accessible full text available April 27, 2023
  3. We collected Instagram Direct Messages (DMs) from 100 adolescents and young adults (ages 13-21) who then flagged their own conversations as safe or unsafe. We performed a mixed-method analysis of the media files shared privately in these conversations to gain human-centered insights into the risky interactions experienced by youth. Unsafe conversations ranged from unwanted sexual solicitations to mental health related concerns, and images shared in unsafe conversations tended to be of people and convey negative emotions, while those shared in regular conversations more often conveyed positive emotions and contained objects. Further, unsafe conversations were significantly shorter, suggesting that youth disengaged when they felt unsafe. Our work uncovers salient characteristics of safe and unsafe media shared in private conversations and provides the foundation to develop automated systems for online risk detection and mitigation.
    Free, publicly-accessible full text available April 27, 2023
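The study above motivates automated systems for detecting unsafe private conversations, noting that unsafe threads tended to be shorter than safe ones. As a minimal illustrative sketch (not the authors' actual models), a conversation-level risk score could combine simple lexical cues with a brevity signal; the keyword list and threshold here are purely hypothetical assumptions:

```python
# Hypothetical sketch of conversation-level risk scoring. The RISK_TERMS
# list and the length threshold are illustrative assumptions, not the
# trained machine learning models the paper describes.

RISK_TERMS = {"send pics", "don't tell", "meet up"}  # illustrative only

def risk_score(messages):
    """Naive risk score: keyword hits plus a brevity signal.

    The paper observed that unsafe conversations were significantly
    shorter, so very short threads contribute a small extra signal.
    """
    text = " ".join(m.lower() for m in messages)
    hits = sum(term in text for term in RISK_TERMS)
    brevity = 1 if len(messages) < 5 else 0
    return hits + brevity

safe = ["hey, did you finish the homework?"] * 8
unsafe = ["hey", "send pics", "don't tell anyone"]
print(risk_score(safe), risk_score(unsafe))  # → 0 3
```

In practice such hand-built features would only be a baseline; the papers above describe training models on human-annotated conversations instead.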
  4. Cyberbullying is a growing problem across social media platforms, inflicting short- and long-lasting effects on victims. To mitigate this problem, research has looked into building automated systems, powered by machine learning, to detect cyberbullying incidents or the involved actors, such as victims and perpetrators. In the past, systematic reviews have examined the approaches within this growing body of work, but with a focus on the computational aspects of technical innovation, feature engineering, or performance optimization, without centering on the roles, beliefs, desires, or expectations of humans. In this paper, we present a human-centered systematic literature review of the past 10 years of research on automated cyberbullying detection. We analyzed 56 papers based on a three-pronged human-centered algorithm design framework spanning theoretical, participatory, and speculative design. We found that the past literature fell short of incorporating human-centeredness across multiple aspects, ranging from defining cyberbullying and establishing ground truth in data annotation to evaluating the performance of detection models and speculating about the usage and users of the models, including potential harms and negative consequences. Given the sensitivities of the cyberbullying experience and the deep ramifications cyberbullying incidents bear on the involved actors, we discuss takeaways on how incorporating human-centeredness in future research can aid in developing detection systems that are more practical, useful, and tuned to the diverse needs and contexts of the stakeholders.
  5. The goal of this one-day workshop is to build an active community of researchers, practitioners, and policy-makers who are jointly committed to leveraging human-centered artificial intelligence (HCAI) to make the internet a safer place for youth. This community will be founded on the principles of open innovation and human dignity to address some of the most salient safety issues of the modern-day internet, including online harassment, sexual solicitation, and the mental health of vulnerable internet users, particularly adolescents and young adults. We will partner with Mozilla Research Foundation to launch a new open project named “MOSafely,” which will serve as a platform for code library, research, and data contributions that support the mission of internet safety. During the workshop, we will discuss: 1) the types of contributions and technical standards needed to advance the state of the art in online risk detection, 2) the practical, legal, and ethical challenges that we will face, and 3) ways in which we can overcome these challenges through the use of HCAI to create a sustainable community. An end goal of creating the MOSafely community is to offer evidence-based, customizable, robust, and low-cost technologies that are accessible to the public for youth protection.
  6. We licensed a dataset from a mental health peer support platform catering mainly to teens and young adults. We anonymized the name of this platform to protect the individuals in our dataset. On this platform, users can post content and comment on others’ posts. Interactions are semi-anonymous: users share a photo and screen name with others, and they have the option to post with their username visible or anonymously. The platform is moderated, but the ratio of moderators to posters is low (0.00007). The original dataset included over 5 million posts and 15 million comments from 2011–2017. We scaled it to a feasible size for qualitative analysis by running a query to identify posts by a) adolescents aged 13-17 who were seeking support for b) online sexual experiences (not offline) with people they knew (not strangers).
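The scoping query described above filters a large corpus down to in-scope posts on two criteria (poster age and experience type). A minimal sketch of that filtering step is below; the field names and label values are assumptions for illustration, since the paper anonymizes the platform and does not publish its schema:

```python
# Hypothetical sketch of the dataset-scoping query: keep only posts by
# adolescents (ages 13-17) seeking support for online sexual experiences
# with people they know. Field names and label values are assumed.

posts = [
    {"id": 1, "age": 15, "topic": "online_sexual", "contact": "known"},
    {"id": 2, "age": 19, "topic": "online_sexual", "contact": "known"},
    {"id": 3, "age": 14, "topic": "offline_sexual", "contact": "stranger"},
    {"id": 4, "age": 16, "topic": "online_sexual", "contact": "known"},
]

def in_scope(post):
    """Apply the two scoping criteria described in the abstract."""
    return (13 <= post["age"] <= 17
            and post["topic"] == "online_sexual"
            and post["contact"] == "known")

scoped = [p["id"] for p in posts if in_scope(p)]
print(scoped)  # → [1, 4]
```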
  7. As adolescents' online engagement increases, it becomes more essential to provide a safe environment for them. Although some apps and systems are available for keeping teens safer online, these approaches do not consider the needs of both parents and teens. We aim to improve adolescent online sexual risk detection algorithms. To do so, I will conduct three research studies for my dissertation: 1) a qualitative analysis of teens' posts about online sexual risks on an online peer support platform, to gain a deep understanding of those risks; 2) training a machine learning approach to detect sexual risks based on teens' conversations with sex offenders; and 3) developing a machine learning algorithm for detecting online sexual risks that is specialized for adolescents.
  8. We conducted an exploratory interview study with 10 undergraduate college students (ages 18-21) to get their feedback on how to best design a research study that asks teens (ages 13-17) to share portions of their Instagram data with their parents and discuss their online risk experiences. These young adults felt that teens should have as much control as possible when sharing their data, including the way that it was used in discussions with their parents. Our findings highlight the need to ensure researchers preserve the privacy and confidentiality of teens’ social media data.