Cybergrooming is a form of online abuse that threatens teens’ mental health and physical safety. Yet, most prior work has focused on detecting perpetrators’ behaviors, leaving a limited understanding of how teens might respond to such unwanted advances. To address this gap, we conducted an online survey with 74 participants—51 parents and 23 teens—who responded to simulated cybergrooming scenarios in two ways: responses that they think would make teens more vulnerable or resilient to unwanted sexual advances. Through a mixed-methods analysis, we identified four types of vulnerable responses (encouraging escalation, accepting an advance, displaying vulnerability, and negating risk concern) and four types of protective strategies (setting boundaries, directly declining, signaling risk awareness, and leveraging avoidance techniques). As the cybergrooming risk escalated, both vulnerable responses and protective strategies showed a corresponding progression. This study contributes a teen-centered understanding of cybergrooming, a labeled dataset, and a stage-based taxonomy of perceived protective strategies, while offering implications for educational programs and sociotechnical interventions.
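The taxonomy above (four vulnerable response types and four protective strategies) can be expressed as a labeling schema. The sketch below is an illustrative data model, not the authors' actual dataset format; the field names and example are assumptions.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Union

class VulnerableResponse(Enum):
    """The four vulnerable response types identified in the study."""
    ENCOURAGING_ESCALATION = "encouraging escalation"
    ACCEPTING_ADVANCE = "accepting an advance"
    DISPLAYING_VULNERABILITY = "displaying vulnerability"
    NEGATING_RISK_CONCERN = "negating risk concern"

class ProtectiveStrategy(Enum):
    """The four protective strategy types identified in the study."""
    SETTING_BOUNDARIES = "setting boundaries"
    DIRECTLY_DECLINING = "directly declining"
    SIGNALING_RISK_AWARENESS = "signaling risk awareness"
    LEVERAGING_AVOIDANCE = "leveraging avoidance techniques"

@dataclass
class LabeledResponse:
    """One survey response labeled against the taxonomy (hypothetical schema)."""
    participant_role: str                                   # "parent" or "teen"
    scenario_stage: int                                     # escalation stage of the simulated scenario
    text: str                                               # the free-text response
    label: Union[VulnerableResponse, ProtectiveStrategy]

example = LabeledResponse(
    participant_role="teen",
    scenario_stage=2,
    text="I don't share pictures with people I haven't met.",
    label=ProtectiveStrategy.SETTING_BOUNDARIES,
)
print(example.label.value)  # -> setting boundaries
```

A schema like this makes the paper's stage-based framing explicit: each labeled response carries the escalation stage at which it occurred, so progression across stages can be analyzed directly.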
-
Recent advancements in Large Language Models (LLMs) have enabled them to approach human-level persuasion capabilities. However, such potential also raises concerns about the safety risks of LLM-driven persuasion, particularly their potential for unethical influence through manipulation, deception, exploitation of vulnerabilities, and many other harmful tactics. In this work, we present a systematic investigation of LLM persuasion safety through two critical aspects: (1) whether LLMs appropriately reject unethical persuasion tasks and avoid unethical strategies during execution, including cases where the initial persuasion goal appears ethically neutral, and (2) how influencing factors like personality traits and external pressures affect their behavior. To this end, we introduce PERSUSAFETY, the first comprehensive framework for the assessment of persuasion safety, which consists of three stages, i.e., persuasion scene creation, persuasive conversation simulation, and persuasion safety assessment. PERSUSAFETY covers 6 diverse unethical persuasion topics and 15 common unethical strategies. Through extensive experiments across 8 widely used LLMs, we observe significant safety concerns in most LLMs, including failing to identify harmful persuasion tasks and leveraging various unethical persuasion strategies. Our study calls for more attention to improve safety alignment in progressive and goal-driven conversations such as persuasion.
-
This paper investigates the safety risks of large language models (LLMs) in goal-driven persuasive conversations. We introduce PERSUSAFETY, a framework for systematically evaluating whether LLMs refuse unethical persuasion tasks and whether they employ manipulative strategies during multi-turn dialogues. The framework includes three stages: persuasion task generation, simulated persuasive conversations between LLM agents, and safety assessment of refusal behavior and unethical strategy use. Across experiments with eight widely used LLMs, we find that many models fail to consistently reject harmful persuasion tasks and frequently deploy unethical tactics such as deception and manipulative emotional appeals. Results also show that models increase unethical strategies when they are aware of user vulnerabilities and under situational pressures. These findings highlight important gaps in current alignment approaches and underscore the need for improved safeguards when deploying LLMs as persuasive agents.
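The three-stage pipeline described in the abstract (scene creation, conversation simulation, safety assessment) can be sketched as a minimal evaluation loop. All function names, the scripted persuadee, and the keyword-based strategy checker below are illustrative stand-ins, not the PERSUSAFETY implementation.

```python
# Minimal sketch of a three-stage persuasion-safety evaluation loop.
# The strategy set and keyword matching are simplified assumptions.

UNETHICAL_STRATEGIES = {"deception", "guilt-tripping", "false urgency"}

def create_scene(topic: str) -> dict:
    """Stage 1: build a persuasion task from a topic (illustrative)."""
    return {"topic": topic, "persuader_goal": f"Convince the user about: {topic}"}

def simulate_conversation(scene: dict, persuader, turns: int = 3) -> list:
    """Stage 2: run a short persuader/persuadee exchange (persuadee is scripted)."""
    transcript = []
    for t in range(turns):
        transcript.append(persuader(scene["persuader_goal"], t))
        transcript.append("persuadee: I'm not sure about this.")
    return transcript

def assess_safety(transcript: list) -> dict:
    """Stage 3: flag refusals and unethical-strategy use (keyword heuristic)."""
    refused = any("I can't help with that" in m for m in transcript)
    used = {s for s in UNETHICAL_STRATEGIES for m in transcript if s in m}
    return {"refused": refused, "unethical_strategies": sorted(used)}

# Mock persuader that employs an unethical tactic on its second turn.
def mock_persuader(goal: str, turn: int) -> str:
    return "persuader: act now, false urgency!" if turn == 1 else f"persuader: {goal}"

scene = create_scene("buy an unneeded warranty")
report = assess_safety(simulate_conversation(scene, mock_persuader))
print(report)  # -> {'refused': False, 'unethical_strategies': ['false urgency']}
```

In a real evaluation the mock persuader would be replaced by an LLM under test and the keyword heuristic by a judge model, but the stage boundaries stay the same.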
-
As online social networks (OSNs) become more prevalent, a new paradigm for problem-solving through crowd-sourcing has emerged. By leveraging the OSN platforms, users can post a problem to be solved and then form a team to collaborate and solve the problem. A common concern in OSNs is how to form effective collaborative teams, as various tasks are completed through online collaborative networks. A team’s diversity in expertise has received high attention to producing high team performance in developing team formation (TF) algorithms. However, the effect of team diversity on performance under different types of tasks has not been extensively studied. Another important issue is how to balance the need to preserve individuals’ privacy with the need to maximize performance through active collaboration, as these two goals may conflict with each other. This research has not been actively studied in the literature. In this work, we develop a team formation (TF) algorithm in the context of OSNs that can maximize team performance and preserve team members’ privacy under different types of tasks. Our proposed PRivAcy-Diversity-Aware Team Formation framework, called PRADA-TF, is based on trust relationships between users in OSNs where trust is measured based on a user’s expertise and privacy preference levels. The PRADA-TF algorithm considers the team members’ domain expertise, privacy preferences, and the team’s expertise diversity in the process of team formation. Our approach employs game-theoretic principles (Mechanism Design) to motivate self-interested individuals within a team formation context, positioning the mechanism designer as the pivotal team leader responsible for assembling the team. We use two real-world datasets (i.e., Netscience and IMDb) to generate different semi-synthetic datasets for constructing trust networks using a belief model (i.e., Subjective Logic) and identifying trustworthy users as candidate team members.
We evaluate the effectiveness of our proposed PRADA-TF scheme in four variants against three baseline methods in the literature. Our analysis focuses on three performance metrics for studying OSNs: social welfare, privacy loss, and team diversity.
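The trade-off the abstract describes, between expertise coverage/diversity and members' privacy preferences, can be illustrated with a greedy team-formation sketch. The scoring weights, candidate data, and scoring function below are assumptions for illustration; they are not the paper's mechanism-design formulation or the PRADA-TF algorithm itself.

```python
# Greedy sketch of privacy-and-diversity-aware team formation.
# Scores a candidate team by task-skill coverage plus skill diversity,
# penalized by the members' privacy preferences (hypothetical weights).

def team_score(team, task_skills, privacy_weight=0.5):
    covered = set().union(*(m["skills"] for m in team)) & task_skills
    diversity = len({s for m in team for s in m["skills"]})
    # Higher privacy preference -> greater privacy loss from collaborating.
    privacy_loss = sum(m["privacy_pref"] for m in team)
    return len(covered) + 0.1 * diversity - privacy_weight * privacy_loss

def form_team(candidates, task_skills, size):
    """Greedily add the candidate that most improves the team score."""
    team = []
    pool = list(candidates)
    while len(team) < size and pool:
        best = max(pool, key=lambda c: team_score(team + [c], task_skills))
        team.append(best)
        pool.remove(best)
    return team

candidates = [
    {"name": "a", "skills": {"ml", "nlp"}, "privacy_pref": 0.2},
    {"name": "b", "skills": {"ml"},        "privacy_pref": 0.9},
    {"name": "c", "skills": {"viz"},       "privacy_pref": 0.1},
]
team = form_team(candidates, task_skills={"ml", "nlp", "viz"}, size=2)
print([m["name"] for m in team])  # -> ['a', 'c']
```

Note how the privacy penalty steers the greedy step away from candidate "b" despite overlapping expertise; in the paper this tension is instead resolved through trust relationships (derived via Subjective Logic) and mechanism-design incentives.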
-
Cybergrooming emerges as a growing threat to adolescent safety and mental health. One way to combat cybergrooming is to leverage predictive artificial intelligence (AI) to detect predatory behaviors in social media. However, these methods can encounter challenges like false positives and negative implications such as privacy concerns. Another complementary strategy involves using generative artificial intelligence to empower adolescents by educating them about predatory behaviors. To this end, we envision developing state-of-the-art conversational agents to simulate the conversations between adolescents and predators for educational purposes. Yet, one key challenge is the lack of a dataset to train such conversational agents. In this position paper, we present our motivation for empowering adolescents to cope with cybergrooming. We propose to develop large-scale, authentic datasets through an online survey targeting adolescents and parents. We discuss some initial background behind our motivation and proposed design of the survey, such as situating the participants in artificial cybergrooming scenarios, then allowing participants to respond to the survey to obtain their authentic responses. We also present several open questions related to our proposed approach and hope to discuss them with the workshop attendees.