As online social networks (OSNs) become more prevalent, a new paradigm for problem-solving through crowdsourcing has emerged. By leveraging OSN platforms, users can post a problem and then form a team to collaborate on solving it. Because various tasks are completed through online collaborative networks, a common concern in OSNs is how to form effective collaborative teams. In developing team formation (TF) algorithms, a team's diversity in expertise has received considerable attention as a driver of high team performance. However, the effect of team diversity on performance under different types of tasks has not been extensively studied. Another important issue is how to balance the need to preserve individuals' privacy with the need to maximize performance through active collaboration, as these two goals may conflict; this trade-off has received little attention in the literature. In this work, we develop a TF algorithm in the context of OSNs that can maximize team performance and preserve team members' privacy under different types of tasks. Our proposed
PRivacy-Diversity-Aware Team Formation framework, called PRADA-TF, is based on trust relationships between users in OSNs, where trust is measured based on a user's expertise and privacy preference levels. The PRADA-TF algorithm considers the team members' domain expertise, privacy preferences, and the team's expertise diversity in the process of team formation. Our approach employs game-theoretic principles (mechanism design) to motivate self-interested individuals within a team formation context, positioning the mechanism designer as the team leader responsible for assembling the team. We use two real-world datasets (i.e., Netscience and IMDb) to generate semi-synthetic datasets for constructing trust networks using a belief model (i.e., Subjective Logic) and identifying trustworthy users as candidate team members. We evaluate the effectiveness of our proposed PRADA-TF scheme in four variants against three baseline methods in the literature. Our analysis focuses on three performance metrics for studying OSNs: social welfare, privacy loss, and team diversity.
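The belief model named above, Subjective Logic, represents trust as an opinion with belief, disbelief, and uncertainty components derived from observed evidence. The following is a minimal sketch of that standard formulation only; the function and variable names are illustrative and are not taken from the PRADA-TF paper, which may map evidence to opinions differently.

```python
# Minimal sketch of a binomial Subjective Logic opinion, as commonly used
# to quantify trust between users in a trust network. Names here are
# illustrative assumptions, not the paper's actual implementation.
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float        # b: evidence supporting trustworthiness
    disbelief: float     # d: evidence against trustworthiness
    uncertainty: float   # u: lack of evidence (b + d + u = 1)
    base_rate: float = 0.5  # a: prior probability of trustworthiness

    def expected_trust(self) -> float:
        # Projected probability E = b + a * u
        return self.belief + self.base_rate * self.uncertainty

def opinion_from_evidence(r: int, s: int, W: float = 2.0) -> Opinion:
    """Form an opinion from r positive and s negative observations.
    W is the non-informative prior weight (commonly 2)."""
    total = r + s + W
    return Opinion(belief=r / total,
                   disbelief=s / total,
                   uncertainty=W / total)

# Example: 8 positive and 2 negative past interactions with a user
op = opinion_from_evidence(8, 2)
print(op.expected_trust())  # 0.75
```

With few observations, uncertainty dominates and the expected trust stays near the base rate; as evidence accumulates, uncertainty shrinks and the opinion converges toward the observed ratio, which is what makes such opinions useful for ranking candidate team members by trustworthiness.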
Cybergrooming is a growing threat to adolescent safety and mental health. One way to combat cybergrooming is to leverage predictive artificial intelligence (AI) to detect predatory behaviors in social media. However, these methods can suffer from false positives and carry negative implications such as privacy concerns. A complementary strategy involves using generative AI to empower adolescents by educating them about predatory behaviors. To this end, we envision developing state-of-the-art conversational agents that simulate conversations between adolescents and predators for educational purposes. Yet one key challenge is the lack of a dataset to train such conversational agents. In this position paper, we present our motivation for empowering adolescents to cope with cybergrooming. We propose to develop large-scale, authentic datasets through an online survey targeting adolescents and parents. We discuss the initial background behind our motivation and the proposed design of the survey, which situates participants in artificial cybergrooming scenarios and then collects their authentic responses. We also present several open questions related to our proposed approach and hope to discuss them with the workshop attendees.