Serendipitous recommendations have emerged as a compelling approach to delivering unexpected yet valuable information to users, contributing to heightened user satisfaction and engagement. This survey investigates the most recent research on serendipity recommenders, with a specific emphasis on deep learning recommendation models. We categorize these models into three types, distinguished by the stage at which they integrate the serendipity objective: pre-processing, in-processing, and post-processing. Additionally, we review and summarize the definitions of serendipity, the available ground-truth datasets, and the evaluation experiments employed in the field. We propose three promising avenues for future exploration: (1) leveraging user reviews to identify and explore serendipity, (2) employing reinforcement learning to build a model that discerns the appropriate timing for serendipitous recommendations, and (3) utilizing cross-domain learning to enhance serendipitous recommendations. With this review, we aim to cultivate a deeper understanding of serendipity in recommender systems and inspire further advancements in this domain.
-
Today's recommender systems are criticized for recommending items that are too obvious to arouse users' interest. Therefore, the research community has advocated "beyond accuracy" evaluation metrics such as novelty, diversity, and serendipity, with the hope of promoting information discovery and sustaining users' interest over a long period of time. While bringing in new perspectives, most of these evaluation metrics have not considered individual users' differences in their capacity to experience those "beyond accuracy" items: open-minded users may embrace a wider range of recommendations than conservative users. In this paper, we propose to use curiosity traits to capture such individual differences. We develop a model to approximate an individual's curiosity distribution over different stimulus levels. We use an item's surprise level to estimate the stimulus level and whether that level falls within the range of the user's appetite for stimulus, called the Comfort Zone. We then propose a recommender system framework that considers both user preference and the user's Comfort Zone, where curiosity is maximally aroused. Our framework differs from a typical recommender system in that it leverages the human Comfort Zone for stimuli to promote engagement with the system. A series of evaluation experiments shows that our framework ranks higher the items with not only high ratings but also high curiosity stimulation. The recommendation list generated by our algorithm has higher potential to inspire user curiosity compared to state-of-the-art deep learning approaches. The personalization factor for assessing surprise stimulus levels further helps the recommender model achieve smaller (better) inter-user similarity.
-
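As a rough illustration of the Comfort Zone idea in the abstract above, the sketch below scores candidate items by blending a predicted rating with a curiosity-arousal term that peaks at the user's preferred stimulus level. The tag-overlap surprise measure, the Gaussian arousal shape, and all parameter values are illustrative assumptions, not the paper's actual model.

```python
import math

def surprise(item_tags, user_history_tags):
    """Surprise stimulus: fraction of an item's tags the user has never seen
    (an illustrative stand-in for the paper's surprise estimate)."""
    if not item_tags:
        return 0.0
    unseen = [t for t in item_tags if t not in user_history_tags]
    return len(unseen) / len(item_tags)

def curiosity_arousal(stimulus, zone_center=0.4, zone_width=0.15):
    """Gaussian approximation of a curiosity distribution over stimulus levels:
    arousal peaks inside the Comfort Zone and decays outside it."""
    return math.exp(-((stimulus - zone_center) ** 2) / (2 * zone_width ** 2))

def rank_items(candidates, user_history_tags, pred_rating, alpha=0.5):
    """Score = alpha * predicted preference + (1 - alpha) * curiosity arousal."""
    scored = []
    for item_id, tags in candidates.items():
        s = surprise(tags, user_history_tags)
        scored.append((item_id, alpha * pred_rating[item_id]
                       + (1 - alpha) * curiosity_arousal(s)))
    return sorted(scored, key=lambda x: x[1], reverse=True)
```

Under these assumptions, an item that is moderately novel (inside the zone) can outrank a higher-rated but overly familiar item, which is the ranking behavior the abstract describes.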
Serendipity is a notion that means an unexpected but valuable discovery. Due to its elusive and subjective nature, serendipity is difficult to study even with today's advances in machine learning and deep learning techniques. Both ground-truth data collection and model development are open research questions. This paper addresses both the data and the model challenges for identifying serendipity in recommender systems. For ground-truth data collection, it proposes a new and scalable approach that uses both user-generated reviews and crowdsourcing. The result is a large-scale ground-truth dataset on serendipity. For model development, it designs a self-enhanced module that learns the fine-grained facets of serendipity in order to mitigate the data sparsity inherent in any serendipity ground-truth dataset. The self-enhanced module is general enough to be applied with many base deep learning models for serendipity. A series of experiments shows that a base deep learning model trained on our collected ground-truth data, with the help of the self-enhanced module, outperforms state-of-the-art baseline models in predicting serendipity.
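A minimal sketch of the review-mining step described above: filter reviews for serendipity cue phrases, then (per the paper's pipeline) send the matches to crowd workers for verification. The cue lexicon here is hand-made for illustration; the abstract does not specify the actual lexicon or matching method.

```python
import re

# Hypothetical cue phrases suggesting an unexpected-but-valued experience.
SERENDIPITY_CUES = [
    r"pleasant(ly)? surpris\w+",
    r"didn'?t expect",
    r"unexpected(ly)?",
    r"hidden gem",
    r"stumbled (up)?on",
]
CUE_RE = re.compile("|".join(SERENDIPITY_CUES), re.IGNORECASE)

def serendipity_candidates(reviews):
    """Return (item_id, review_text) pairs whose text contains a serendipity
    cue; these candidates would then go to crowd workers for labeling."""
    return [(item, text) for item, text in reviews if CUE_RE.search(text)]
```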
-
As the popularity of online travel platforms increases, users tend to make ad-hoc decisions on places to visit rather than preparing detailed tour plans in advance. Given the timeliness and uncertainty of users' demand, integrating real-time context into dynamic and personalized recommendations has become a key issue for travel recommender systems. In this paper, by integrating users' historical preferences and real-time context, we propose a location-aware recommender system called TRACE (Travel Reinforcement Recommendations Based on Location-Aware Context Extraction). It captures users' features with a location-aware context learning model and makes dynamic recommendations based on reinforcement learning. Specifically, this research: (1) designs a travel reinforcement recommender system based on an Actor-Critic framework, which can dynamically track user preference shifts and optimize recommender performance; (2) proposes a location-aware context learning model, which extracts user context from real-time location and then calculates the impact of nearby attractions on users' preferences; and (3) conducts both offline and online experiments. Our proposed model achieves the best performance in both experiments, demonstrating that tracking users' preference shifts based on real-time location is valuable for improving recommendation results.
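The location-aware context idea in point (2) above can be sketched as a distance-decayed preference score: a user's historical preference for each attraction is discounted by its distance from the user's real-time position. The haversine distance and exponential decay are illustrative assumptions; TRACE's actual context-extraction model is learned, not hand-crafted.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def context_score(user_pos, attractions, base_pref, decay_km=2.0):
    """Blend historical preference for each attraction with an exponential
    distance decay from the user's current location."""
    scores = {}
    for name, (lat, lon) in attractions.items():
        d = haversine_km(user_pos[0], user_pos[1], lat, lon)
        scores[name] = base_pref.get(name, 0.0) * math.exp(-d / decay_km)
    return max(scores, key=scores.get), scores
```

With this decay, a nearby attraction the user likes moderately can outrank a distant one the user likes more, which captures the "real-time location shifts preferences" intuition.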
-
Network embedding has demonstrated effective empirical performance on various network mining tasks such as node classification, link prediction, clustering, and anomaly detection. However, most of these algorithms focus on the single-view network scenario. In the real world, one node can have different connectivity patterns in different networks; for example, one user can have different relationships on Twitter, Facebook, and LinkedIn due to varying behaviors on different platforms. In this case, jointly considering the structural information from multiple platforms (i.e., multiple views) can lead to more comprehensive node representations and eliminate the noise and bias of any single view. In this paper, we propose VANE, a view-adversarial framework that generates comprehensive and robust multi-view network representations based on two adversarial games. The first adversarial game enhances the comprehensiveness of the node representation by discriminating the view information obtained from the subgraph induced by the node's neighbors. The second adversarial game improves the robustness of the node representation by challenging it with fake node representations from a generative adversarial net. Extensive experiments on downstream tasks with real-world multi-view networks show that our proposed VANE framework significantly outperforms other baseline methods.
-
A key to collaborative decision making is to aggregate individual evaluations into a group decision. One of its fundamental challenges lies in identifying and dealing with irregular or unfair ratings and reducing their impact on group decisions. Little research has attempted to identify irregular ratings in a collaborative assessment task, let alone to develop effective approaches to reduce their negative impact on the final group judgment. In this article, based on synergy theory, we propose a novel consensus-based collaborative evaluation (CE) method called Collaborative Evaluation based on rating DIFFerence (CE-DIFF) for identifying irregular ratings and mitigating their impact on collaborative decisions. Through continuous iterations, CE-DIFF automatically determines and assigns different weights to individual evaluators or ratings based on the consistency of their ratings with the group assessment outcome. We conducted two empirical experiments to evaluate the proposed method. The results show that CE-DIFF deals with irregular ratings more accurately than existing CE methods such as the arithmetic mean and the trimmed mean. In addition, the effectiveness of CE-DIFF is independent of group size. This study provides a new and more effective method for collaborative assessment, as well as novel theoretical insights and practical implications for improving collaborative assessment.
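The iterative reweighting described above can be sketched as follows: repeatedly compute a weighted group consensus, measure each evaluator's deviation from it, and downweight evaluators with large deviations. The exponential weight update and the `temperature` parameter are illustrative assumptions; the abstract does not give CE-DIFF's exact update rule.

```python
import math

def ce_diff_weights(ratings, iters=20, temperature=0.5):
    """Iteratively reweight evaluators by how far each one's ratings sit from
    the weighted group consensus (an illustrative reading of CE-DIFF).

    ratings: list of per-evaluator rating lists, all of the same length.
    Returns (weights, consensus): one weight per evaluator, one consensus
    rating per item.
    """
    n = len(ratings)                     # number of evaluators
    m = len(ratings[0])                  # number of items
    w = [1.0 / n] * n                    # start from equal weights
    for _ in range(iters):
        # Weighted group rating for each item under the current weights.
        consensus = [sum(w[i] * ratings[i][j] for i in range(n))
                     for j in range(m)]
        # Mean absolute deviation of each evaluator from the consensus.
        diff = [sum(abs(ratings[i][j] - consensus[j]) for j in range(m)) / m
                for i in range(n)]
        # Larger deviation -> smaller weight; renormalize to sum to 1.
        expw = [math.exp(-d / temperature) for d in diff]
        total = sum(expw)
        w = [e / total for e in expw]
    consensus = [sum(w[i] * ratings[i][j] for i in range(n)) for j in range(m)]
    return w, consensus
```

Under this sketch, an evaluator whose ratings diverge sharply from the rest of the group ends up with a near-zero weight, so the final consensus tracks the consistent majority rather than the simple arithmetic mean.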