Title: Targeted Poisoning Attacks on Social Recommender Systems
With the popularity of online social networks, social recommendation, which relies on one's social connections to make personalized recommendations, has become possible. This also introduces a vulnerability: an adversarial party can compromise the recommendations made to users by exploiting their social connections. In this paper, we propose a targeted poisoning attack on factorization-based social recommender systems, in which the attacker aims to promote an item to a group of target users by injecting fake ratings and social connections. We formulate the optimal poisoning attack as a bi-level program and develop an efficient algorithm to find the optimal attacking strategy. We then evaluate the proposed attacking strategy on a real-world dataset and demonstrate that the social recommender system is sensitive to the targeted poisoning attack. We find that users in the social recommender system can be attacked even if they do not have direct social connections with the attacker.
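As a rough illustration of the bi-level structure the abstract describes, the attack can be written in the following generic form (a sketch only; the notation below is my assumption, not the paper's exact formulation):

```latex
% Generic bi-level poisoning sketch (notation assumed, not taken from the paper):
% the attacker chooses fake ratings \tilde{R} and fake social edges \tilde{S}
% to maximize the predicted score \hat{r}_{ut} of the target item t for the
% target users U_t, while in the lower level the recommender retrains its
% parameters \theta on the poisoned data.
\max_{\tilde{R},\,\tilde{S}} \; \sum_{u \in U_t} \hat{r}_{ut}(\theta^\star)
\qquad \text{s.t.} \qquad
\theta^\star \in \arg\min_{\theta} \;
  \mathcal{L}\!\left(\theta;\; R \cup \tilde{R},\; S \cup \tilde{S}\right)
```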
Award ID(s):
1850523
PAR ID:
10183032
Author(s) / Creator(s):
Date Published:
Journal Name:
2019 IEEE Global Communications Conference (GLOBECOM)
Page Range / eLocation ID:
1 to 6
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Widely used in data-driven decision-making, recommender systems have been recognized for their capability to provide personalized services in many user-oriented online platforms, such as e-commerce (e.g., Amazon, Taobao) and social media sites (e.g., Facebook and Twitter). Recent works have shown that deep neural network-based recommender systems are highly vulnerable to adversarial attacks, where adversaries inject carefully crafted fake user profiles (i.e., sets of items that fake users have interacted with) into a target recommender system to promote or demote a set of target items. Instead of generating fake user profiles from scratch, in this paper we introduce a novel strategy that obtains "fake" user profiles by copying cross-domain user profiles: a reinforcement learning-based black-box attacking framework (CopyAttack+) is developed to effectively and efficiently select user profiles from the source domain to attack the target system. Moreover, we propose to train a local surrogate system that mimics adversarial black-box attacks in the source domain, so as to provide transferable signals that enhance the attacking strategy against the target black-box recommender system. Comprehensive experiments on three real-world datasets demonstrate the effectiveness of the proposed attacking framework.
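A minimal sketch of the profile-selection loop such a framework implies (the function names and the bandit-style update are my assumptions, not CopyAttack+'s actual algorithm):

```python
import random

def attack_step(source_profiles, inject, target_item_rank, policy_scores):
    """One step of a hypothetical RL-style selection loop: pick a cross-domain
    profile according to the current policy, inject it into the target system,
    and reward the policy by how much the target item's rank improved."""
    weights = [policy_scores[p] for p in source_profiles]
    chosen = random.choices(source_profiles, weights=weights, k=1)[0]

    rank_before = target_item_rank()   # black-box observation of the target system
    inject(chosen)                     # copy the selected cross-domain profile
    rank_after = target_item_rank()

    reward = rank_before - rank_after  # smaller rank = stronger promotion
    # Crude bandit-style update; floored so the sampling weights stay positive.
    policy_scores[chosen] = max(0.01, policy_scores[chosen] + 0.1 * reward)
    return reward
```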
  2. This paper introduces an innovative approach to recommender systems through the development of an explainable architecture that leverages large language models (LLMs) and prompt engineering to provide natural language explanations. Traditional recommender systems often fall short in offering personalized, transparent explanations, particularly for users with varying levels of digital literacy. Focusing on the Advisor Recommender System, our proposed system integrates the conversational capabilities of modern AI to deliver clear, context-aware explanations for its recommendations. This research addresses key questions regarding the incorporation of LLMs into social recommender systems, the impact of natural language explanations on user perception, and the specific informational needs users prioritize in such interactions. A pilot study with 11 participants reveals insights into the system’s usability and the effectiveness of explanation clarity. Our study contributes to the broader human-AI interaction literature by outlining a novel system architecture, identifying user interaction patterns, and suggesting directions for future enhancements to improve decision-making processes in AI-driven recommendations. 
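A small sketch of the prompt-engineering idea (the template and field names are illustrative assumptions, not the paper's actual prompts): the recommendation context is serialized into a natural-language prompt that any chat-completion LLM can answer with a user-facing explanation.

```python
# Hypothetical explanation-prompt template; the fields below are assumptions.
EXPLANATION_TEMPLATE = """You are an advisor recommender system.
The user asked for an advisor in: {topic}
Recommended advisor: {advisor}
Evidence: {evidence}

In 2-3 plain-language sentences, explain why this advisor was recommended,
avoiding jargon so that users with any level of digital literacy can follow."""

def build_explanation_prompt(topic: str, advisor: str, evidence: str) -> str:
    """Fill the template with the recommendation context for one user query."""
    return EXPLANATION_TEMPLATE.format(topic=topic, advisor=advisor, evidence=evidence)

if __name__ == "__main__":
    prompt = build_explanation_prompt(
        topic="machine learning",
        advisor="Dr. A. Smith",
        evidence="supervised 12 theses on deep learning; high student ratings",
    )
    print(prompt)  # send this to an LLM to obtain the natural-language explanation
```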
  3. Modern machine learning underpins a wide variety of commercial software products, including many cybersecurity solutions. Widely different models, from large transformers trained for auto-regressive natural language modeling to gradient boosting forests designed to recognize malicious software, all share a common element: they are trained on an ever-increasing quantity of data to achieve impressive performance in their tasks. Consequently, the training phase of modern machine learning systems holds dual significance: it is pivotal in achieving the expected high performance of these models, and, concurrently, it presents a prime attack surface for adversaries striving to manipulate the behavior of the final trained system. This dissertation explores the complexities and hidden dangers of training supervised machine learning models in an adversarial setting, with a particular focus on models designed for cybersecurity tasks. Guided by the belief that an accurate understanding of the adversary's offensive capabilities is the cornerstone on which to found any successful defensive strategy, the bulk of this thesis consists of the introduction of novel training-time attacks. We start by proposing training-time attack strategies that operate in a clean-label regime, requiring minimal adversarial control over the training process and allowing the attacker to subvert the victim model's predictions through simple poisoned-data dissemination. Leveraging the characteristics of the data domain and model-explanation techniques, we craft training-data perturbations that stealthily subvert malicious-software classifiers. We then shift the focus of our analysis to the long-standing problem of network flow traffic classification. In this context we develop new poisoning strategies that work around the constraints of the data domain through different means, including generative modeling. Finally, we examine unusual attack vectors, where the adversary is capable of tampering with different elements of the training process, such as the network connections during a federated learning protocol. We show that such an attacker can induce targeted performance degradation through strategic network interference while keeping the victim model's performance on other data instances stable. We conclude by investigating mitigation techniques designed to target these insidious clean-label backdoor attacks in the cybersecurity domain.
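For concreteness, here is a generic clean-label perturbation step of the kind such attacks build on (an illustration under my own assumptions, not the dissertation's method):

```python
import numpy as np

def clean_label_perturb(x, grad_wrt_x, epsilon=0.05):
    """One FGSM-style feature perturbation: move a training point against the
    attacker's loss gradient within an L-infinity budget, while its label is
    left untouched -- which is what makes the poisoning 'clean-label'."""
    x_adv = x - epsilon * np.sign(grad_wrt_x)
    # The clip is a no-op for a single step but enforces the budget if iterated.
    return np.clip(x_adv, x - epsilon, x + epsilon)
```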
  4. We present a progressive approximation algorithm for the exact solution of several classes of interdiction games in which two noncooperative players (namely, an attacker and a follower) interact sequentially. The follower must solve an optimization problem that has been previously perturbed by a series of attacking actions led by the attacker. These attacking actions aim at augmenting the cost of the decision variables of the follower's optimization problem. The objective, from the attacker's viewpoint, is to choose an attacking strategy that reduces as much as possible the quality of the optimal solution attainable by the follower. The progressive approximation mechanism consists of the iterative solution of an interdiction problem in which the attacker's actions are restricted to a subset of the whole solution space, together with a pricing subproblem invoked to prove the optimality of the attacking strategy. This scheme is especially useful when the optimal solutions to the follower's subproblem intersect the decision space of the attacker in only a small number of decision variables. In such cases, the progressive approximation method can solve interdiction games that are otherwise intractable for classical methods. We illustrate the efficiency of our approach on the shortest path, 0-1 knapsack, and facility location interdiction games. Summary of Contribution: In this article, we present a progressive approximation algorithm for the exact solution of several classes of interdiction games in which two noncooperative players (namely, an attacker and a follower) interact sequentially. We exploit the discrete nature of this interdiction game to design an effective algorithmic framework that improves the performance of general-purpose solvers. Our algorithm combines elements from mathematical programming and computer science, including a metaheuristic algorithm, a binary search procedure, a cutting-planes algorithm, and supervalid inequalities. Although we illustrate our results on three specific problems (shortest path, 0-1 knapsack, and facility location), our algorithmic framework can be extended to a broader class of interdiction problems.
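The iterative scheme can be summarized as follows (a schematic sketch; `solve_restricted` and `price_out` stand in for the problem-specific routines and are not the authors' API):

```python
def progressive_approximation(solve_restricted, price_out, initial_actions):
    """Schematic of the progressive-approximation loop: solve the interdiction
    problem over a restricted set of attacker actions, then let a pricing
    subproblem search for excluded actions that could still improve the
    attacker's objective; stop when none exist, which proves optimality."""
    actions = set(initial_actions)            # restricted attacker decision space
    while True:                               # terminates: the full action set is finite
        attack = solve_restricted(actions)    # restricted interdiction problem
        new_actions = price_out(attack)       # improving actions outside the set
        if not new_actions:
            return attack                     # no improving action: attack is optimal
        actions |= new_actions                # enlarge the space and re-solve
```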
  5. Model-serving systems have become increasingly popular, especially in real-time web applications. In such systems, users send queries to the server and specify the desired performance metrics (e.g., desired accuracy, latency). The server maintains a set of models (a model zoo) in the back end and serves the queries based on the specified metrics. This paper examines the security of such systems, specifically their robustness against model extraction attacks. Existing black-box attacks assume that a single model can be repeatedly selected to serve inference requests. Modern inference-serving systems break this assumption; such attacks therefore cannot be directly applied to extract a victim model, as models are hidden behind a layer of abstraction exposed by the serving system, and an attacker can no longer identify which model she is interacting with. To this end, we first propose a query-efficient fingerprinting algorithm that enables the attacker to trigger any desired model consistently. We show that, using our fingerprinting algorithm, model extraction achieves fidelity and accuracy scores within 1% of the scores obtained when attacking a single, explicitly specified model, as well as up to a 14.6% gain in accuracy and up to a 7.7% gain in fidelity compared to the naive attack. Second, we counter the proposed attack with a noise-based defense mechanism that thwarts fingerprinting by adding noise to the specified performance metrics. The proposed defense reduces the attack's accuracy and fidelity by up to 9.8% and 4.8%, respectively (on medium-sized model extraction). Third, we show that the proposed defense induces a fundamental trade-off between the level of protection and system goodput, achieving configurable and significant protection against victim model extraction while maintaining acceptable goodput (>80%). We implement the proposed defense in a real system, with plans to open-source it.
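As a rough illustration of the noise-based defense (function and parameter names, as well as the noise scales, are my assumptions, not the paper's implementation):

```python
import random

def perturb_requested_metrics(accuracy, latency_ms, sigma_acc=0.02, sigma_lat=5.0):
    """Add Gaussian noise to the client-specified performance metrics before the
    model zoo selects a model, so that repeated identical queries may be routed
    to different back-end models and fingerprinting becomes unreliable."""
    noisy_acc = min(1.0, max(0.0, random.gauss(accuracy, sigma_acc)))  # keep in [0, 1]
    noisy_lat = max(0.0, random.gauss(latency_ms, sigma_lat))          # keep nonnegative
    return noisy_acc, noisy_lat
```

Tuning the noise scales would trade protection against goodput, which is the configurable trade-off the abstract describes.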