Title: BullyAlert: A Mobile Application for Adaptive Cyberbullying Detection
Due to the prevalence and severe consequences of cyberbullying, numerous research works have focused on mining and analyzing social network data to understand cyberbullying behavior and then using the gathered insights to develop accurate classifiers to detect cyberbullying. Some recent works leverage these detection classifiers in a centralized cyberbullying detection system that sends notifications to the concerned authority whenever a person is perceived to be victimized. However, two concerns limit the effectiveness of a centralized cyberbullying detection system. First, a centralized detection system issues alerts at a uniform severity level for everyone, even though individual guardians may have different tolerance levels for what constitutes cyberbullying. Second, the volume of data generated by old and new social media makes a centralized cyberbullying detection system computationally prohibitive as a viable solution. In this work, we propose BullyAlert, an Android mobile application for guardians that delegates the computation to hand-held devices. In addition, we incorporate an adaptive classification mechanism to accommodate the dynamic tolerance levels of guardians when receiving cyberbullying alerts. Finally, we include a preliminary analysis of guardians and monitored users based on data collected from BullyAlert usage.
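The adaptive classification mechanism described above can be illustrated with a minimal sketch: a guardian's "confirm" or "dismiss" feedback on past alerts nudges the severity cutoff up or down. The class name, the threshold update rule, and all constants below are illustrative assumptions, not BullyAlert's actual mechanism.

```python
class AdaptiveAlertFilter:
    """Hypothetical per-guardian alert filter that adapts its
    severity threshold to the guardian's feedback (a sketch, not
    the app's real algorithm)."""

    def __init__(self, threshold=0.5, step=0.05):
        self.threshold = threshold  # classifier score needed to raise an alert
        self.step = step            # adaptation rate (assumed constant)

    def should_alert(self, bullying_score):
        return bullying_score >= self.threshold

    def record_feedback(self, confirmed):
        # Guardian confirmed the alert -> lower the bar (more sensitive);
        # guardian dismissed it -> raise the bar (more tolerant).
        # Bounds keep the threshold in a sane range.
        if confirmed:
            self.threshold = max(0.1, self.threshold - self.step)
        else:
            self.threshold = min(0.9, self.threshold + self.step)

f = AdaptiveAlertFilter()
f.record_feedback(confirmed=False)   # guardian dismissed an alert
alert = f.should_alert(0.52)         # now below the raised threshold
```

Because each guardian holds their own filter state on their device, this also matches the paper's point about delegating computation to hand-held devices.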
Award ID(s):
1816379
NSF-PAR ID:
10226740
Author(s) / Creator(s):
Date Published:
Journal Name:
International Conference on Mobile Computing Applications and Services
ISSN:
2411-7080
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Social media cyberbullying has a detrimental effect on human life. As online social networking grows daily, the amount of hate speech also increases. Such content can cause depression and suicide-related behavior. This paper proposes a trustable LSTM-Autoencoder network for cyberbullying detection on social media using synthetic data. Several languages, such as Hindi and Bangla, still lack adequate investigation due to a shortage of datasets; we demonstrate a cutting-edge method to address this data-availability difficulty by producing machine-translated data. We carried out experimental identification of aggressive comments on Hindi, Bangla, and English datasets using the proposed model and traditional models, including Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (BiLSTM), LSTM-Autoencoder, Word2vec, Bidirectional Encoder Representations from Transformers (BERT), and Generative Pre-trained Transformer 2 (GPT-2). We employed evaluation metrics such as f1-score, accuracy, precision, and recall to assess the models' performance. Our proposed model outperformed all the others on all datasets, achieving the highest accuracy of 95%, and achieves state-of-the-art results among prior works on the dataset used in this paper.
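The evaluation metrics named in the abstract (accuracy, precision, recall, f1-score) can be computed from binary predictions as follows. The labels below are made-up toy data for illustration only, not results from the paper.

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and f1 for binary labels
    (1 = bullying comment, 0 = benign)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# toy labels, not the paper's data
scores = binary_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1])
```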
  2. Social service providers play a vital role in the developmental outcomes of underprivileged youth as they transition into adulthood. Educators, mental health professionals, juvenile justice officers, and child welfare caseworkers often have first-hand knowledge of the trials uniquely faced by these vulnerable youth and are charged with mitigating harmful risks, such as mental health challenges, child abuse, drug use, and sex trafficking. Yet, less is known about whether or how social service providers assess and mitigate the online risk experiences of youth under their care. Therefore, as part of the National Science Foundation (NSF) I-Corps program, we conducted interviews with 37 social service providers (SSPs) who work with underprivileged youth to determine what (if any) online risks are most concerning to them given their role in youth protection, how they assess or become aware of these online risk experiences, and whether they see value in the possibility of using artificial intelligence (AI) as a potential solution for online risk detection. Overall, online sexual risks (e.g., sexual grooming and abuse) and cyberbullying were the most salient concerns across all social service domains, especially when these experiences crossed the boundary between the digital and the physical worlds. Yet, SSPs had to rely heavily on youth self-reports to know whether and when online risks occurred, which required building a trusting relationship with youth; otherwise, SSPs became aware only after a formal investigation had been launched. Therefore, most SSPs found value in the potential for using AI as an early detection system and to monitor youth, but they were concerned that such a solution would not be feasible due to a lack of resources to adequately respond to online incidents, limited access to the necessary digital trace data (e.g., social media) and context, and concerns about violating the trust relationships they had built with youth.
Thus, such automated risk detection systems should be designed and deployed with caution, as their implementation could cause youth to mistrust adults, thereby limiting the receipt of necessary guidance and support. We add to the bodies of research on adolescent online safety and the benefits and challenges of leveraging algorithmic systems in the public sector. 
  3. Cyberbullying has become one of the most pressing online risks for adolescents and has raised serious concerns in society. Traditional efforts are primarily devoted to building a single generic classification model for all users to differentiate bullying behaviors from normal content [6, 3, 1, 2, 4]. Despite their empirical success, these models treat users equally and inevitably ignore the idiosyncrasies of users. Recent studies from psychology and sociology suggest that the occurrence of cyberbullying has a strong connection with the personality of victims and bullies embedded in the user-generated content, and with the peer influence from like-minded users. In this paper, we propose a personalized cyberbullying detection framework, PI-Bully, with peer influence in a collaborative environment to tailor the prediction for each individual. In particular, the personalized classifier of each individual consists of three components: a global model that captures the commonality shared by all users, a personalized model that expresses the idiosyncratic personality of each specific user, and a third component that encodes the peer influence received from like-minded users. Most of the existing methods adopt a two-stage approach: they first apply feature engineering to capture the cyberbullying patterns and then employ machine learning classifiers to detect cyberbullying behaviors. However, building a personalized cyberbullying detection framework that is customized to each individual remains a challenging task, in large part because: (1) social media data is often sparse, noisy, and high-dimensional; (2) it is important to capture the commonality shared by all users as well as the idiosyncratic aspects of the personality of each individual for automatic cyberbullying detection; and (3) in reality, a potential victim of cyberbullying is often influenced by peers, and the influences from different users can be quite diverse.
Hence, it is imperative to develop a way to encode the diversity of peer influence for cyberbullying detection. To summarize, we study a novel problem of personalized cyberbullying detection with peer influence in a collaborative environment, which is able to jointly model users' common features, unique personalities and peer influence to identify cyberbullying cases. 
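The three-component scoring idea described above can be sketched as a sum of a global term, a personal term, and a peer term. The vectors, weights, and the linear combination below are toy assumptions for illustration, not PI-Bully's actual learned parameters or objective.

```python
def personalized_score(x, w_global, w_user, peer_vectors, peer_weights):
    """Toy personalized score: global commonality + individual
    idiosyncrasy + weighted peer influence (a sketch of the idea,
    not the paper's formulation)."""
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    # Peer influence: similarity-weighted combination of peers'
    # personal model vectors, encoding diverse influence strengths.
    peer = [sum(w * v[i] for w, v in zip(peer_weights, peer_vectors))
            for i in range(len(x))]
    return dot(x, w_global) + dot(x, w_user) + dot(x, peer)

x = [1.0, 0.5]                       # features of a post (made up)
score = personalized_score(
    x,
    w_global=[0.2, 0.1],             # commonality shared by all users
    w_user=[0.1, 0.0],               # this user's idiosyncrasy
    peer_vectors=[[0.4, 0.2], [0.0, 0.2]],
    peer_weights=[0.5, 0.5],         # how like-minded each peer is
)
```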
  4. Cyberbullying is rapidly becoming one of the most serious online risks for adolescents. This has motivated work on machine learning methods to automate the process of cyberbullying detection, which have so far mostly viewed cyberbullying as one-off incidents that occur at a single point in time. Comparatively less is known about how cyberbullying behavior occurs and evolves over time. This oversight highlights a crucial open challenge for cyberbullying-related research, given that cyberbullying is typically defined as intentional acts of aggression via electronic communication that occur repeatedly and persistently. In this article, we center our discussion on the challenge of modeling temporal patterns of cyberbullying behavior. Specifically, we investigate how temporal information within a social media session, which has an inherently hierarchical structure (e.g., words form a comment and comments form a session), can be leveraged to facilitate cyberbullying detection. Recent findings from interdisciplinary research suggest that the temporal characteristics of bullying sessions differ from those of non-bullying sessions and that the temporal information from users’ comments can improve cyberbullying detection. The proposed framework consists of three distinctive features: (1) a hierarchical structure that reflects how a social media session is formed in a bottom-up manner; (2) attention mechanisms applied at the word- and comment-level to differentiate the contributions of words and comments to the representation of a social media session; and (3) the incorporation of temporal features in modeling cyberbullying behavior at the comment-level. Quantitative and qualitative evaluations are conducted on a real-world dataset collected from Instagram, the social networking site with the highest percentage of users reporting cyberbullying experiences.
Results from empirical evaluations show the significance of the proposed methods, which are tailored to capture temporal patterns of cyberbullying behavior.
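The comment-level attention with temporal features described above can be sketched as a softmax over comment scores, where each score combines relevance to a query vector with a time-gap term. All vectors, the query, and the time-gap penalty are illustrative assumptions, not the paper's trained parameters.

```python
import math

def attend(comment_vecs, time_gaps, query, time_weight=-0.1):
    """Toy comment-level attention: score = relevance to a query
    plus a penalty for long gaps between comments (an assumed form
    of the temporal feature), softmax-normalized into weights that
    pool the comments into one session representation."""
    scores = [sum(q * c for q, c in zip(query, vec)) + time_weight * gap
              for vec, gap in zip(comment_vecs, time_gaps)]
    m = max(scores)                       # for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    session = [sum(w * vec[i] for w, vec in zip(weights, comment_vecs))
               for i in range(len(comment_vecs[0]))]
    return weights, session

weights, session = attend(
    comment_vecs=[[1.0, 0.0], [0.0, 1.0]],  # made-up comment embeddings
    time_gaps=[0.0, 5.0],                   # minutes since previous comment
    query=[1.0, 1.0],
)
```

The same mechanism could be stacked with a word-level attention layer to mirror the hierarchical (words-to-comment, comments-to-session) structure the abstract describes.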
  5. Over the last decade, research has revealed the high prevalence of cyberbullying among youth and raised serious concerns in society. Information on the social media platforms where cyberbullying is most prevalent (e.g., Instagram, Facebook, Twitter) is inherently multi-modal, yet most existing work on cyberbullying identification has focused solely on building generic classification models that rely exclusively on text analysis of online social media sessions (e.g., posts). Despite their empirical success, these efforts ignore the multi-modal information manifested in social media data (e.g., image, video, user profile, time, and location) and thus fail to offer a comprehensive understanding of cyberbullying. Conventionally, when information from different modalities is presented together, it often reveals complementary insights about the application domain and facilitates better learning performance. In this paper, we study the novel problem of cyberbullying detection within a multi-modal context by exploiting social media data in a collaborative way. This task, however, is challenging due to the complex combination of cross-modal correlations among modalities, structural dependencies between different social media sessions, and the diverse attribute information of each modality. To address these challenges, we propose XBully, a novel cyberbullying detection framework that first reformulates multi-modal social media data as a heterogeneous network and then aims to learn node embedding representations upon it. Extensive experimental evaluations on real-world multi-modal social media datasets show that the XBully framework is superior to state-of-the-art cyberbullying detection models.
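The "reformulate multi-modal social media data as a heterogeneous network" step can be sketched as a graph whose nodes carry a type (session, user, image, and so on) and whose edges link nodes across modalities. The structure and all node names below are made up for illustration; XBully's actual construction and embedding learning are more involved.

```python
from collections import defaultdict

class HeteroGraph:
    """Minimal typed-node graph: a sketch of the heterogeneous
    network that XBully-style methods build before learning node
    embeddings (this sketch stops at graph construction)."""

    def __init__(self):
        self.node_type = {}            # node name -> type label
        self.adj = defaultdict(set)    # undirected adjacency

    def add_node(self, name, ntype):
        self.node_type[name] = ntype

    def add_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)

    def neighbors_of_type(self, node, ntype):
        return {n for n in self.adj[node] if self.node_type[n] == ntype}

g = HeteroGraph()
g.add_node("session1", "session")      # a social media session
g.add_node("alice", "user")            # hypothetical user
g.add_node("img42", "image")           # image attached to the session
g.add_edge("session1", "alice")        # user participated in session
g.add_edge("session1", "img42")        # session contains image
users = g.neighbors_of_type("session1", "user")
```

Typed neighborhoods like `neighbors_of_type` are what let an embedding method treat cross-modal edges (session-image) differently from social edges (session-user).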