Title: Measuring the benefit of increased transparency and control in news recommendation
Abstract: Personalized news experiences powered by recommender systems permeate our lives and have the potential to influence not only our opinions but also our decisions. At the same time, the content and viewpoints contained within news recommendations are driven by multiple factors, including both personalization and editorial selection. Explanations could help users gain a better understanding of the factors contributing to the news items selected for them to read. Indeed, recent works show that explanations are essential for users of news recommenders to understand their consumption preferences and set intentions in line with their goals, such as goals for knowledge development and increased diversity of content or viewpoints. We give examples of such works on explanation and interactive interface interventions that have been effective in influencing readers' consumption intentions and behaviors in news recommendation. However, the state of the art in news recommender systems currently falls short in terms of evaluating such interventions in live systems, limiting our ability to measure their true impact on user behavior and opinions. To help understand the true benefit of these interfaces, we therefore call for improving the realism of news recommendation studies.
Award ID(s):
2045153 2232552
PAR ID:
10515673
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name:
AI Magazine
Volume:
45
Issue:
2
ISSN:
0738-4602
Format(s):
Medium: X
Size(s):
p. 212-226
Sponsoring Org:
National Science Foundation
More Like this
  1. Generative AI, particularly Large Language Models (LLMs), has revolutionized human-computer interaction by enabling the generation of nuanced, human-like text. This presents new opportunities, especially in enhancing explainability for AI systems like recommender systems, a crucial factor for fostering user trust and engagement. LLM-powered AI-Chatbots can be leveraged to provide personalized explanations for recommendations. Although users often find these chatbot explanations helpful, they may not fully comprehend the content. Our research focuses on assessing how well users comprehend these explanations and identifying gaps in understanding. We also explore the key behavioral differences between users who effectively understand AI-generated explanations and those who do not. We designed a three-phase user study with 17 participants to explore these dynamics. The findings indicate that the clarity and usefulness of the explanations are contingent on the user asking relevant follow-up questions and having a motivation to learn. Comprehension also varies significantly based on users’ educational backgrounds. 
  2. The growing amount of online information today has increased the opportunity to discover interesting and useful information, and various recommender systems have been designed to help people discover it. Yet no matter how accurately the recommender algorithms perform, users' engagement with recommended results has often been reported to be less than ideal. In this study, we focus on two human-centered objectives for recommender systems: user satisfaction and curiosity, both of which are believed to play a role in maintaining user engagement and sustaining it in the long run. Specifically, we leveraged the concept of surprise and used an existing computational model of surprise to identify relevant yet surprising health articles, with the aim of improving user satisfaction and inspiring curiosity. We designed a user study to first test the validity of the surprise model in a health news recommender system, called LuckyFind, and then evaluated user satisfaction and curiosity. We find that the computational surprise model helped identify surprising recommendations at little cost to user satisfaction. Users gave higher ratings for interestingness than for usefulness on those surprising recommendations. Curiosity was inspired more in individuals with a larger capacity to experience curiosity. Over half of the users changed their preferences after using LuckyFind, either discovering new areas, reinforcing existing interests, or no longer following topics they did not want. These insights should prompt researchers and practitioners to rethink the objectives of today's recommender systems as being more human-centered, beyond algorithmic accuracy.
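    The surprise-based approach described in the entry above can be illustrated with a minimal sketch. This is not the LuckyFind implementation; it simply assumes articles are represented as TF-IDF vectors and treats a candidate as surprising when it is dissimilar to everything in the user's reading history.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def surprise_scores(user_history_texts, candidate_texts):
        """Score each candidate in [0, 1]; higher means more surprising for this user."""
        vectorizer = TfidfVectorizer(stop_words="english")
        vectors = vectorizer.fit_transform(user_history_texts + candidate_texts)
        history = vectors[: len(user_history_texts)]
        candidates = vectors[len(user_history_texts):]
        # Each candidate's similarity to its closest article in the reading history;
        # surprise is the complement of that maximum similarity.
        max_sim = cosine_similarity(candidates, history).max(axis=1)
        return 1.0 - max_sim

    # Hypothetical usage: blend relevance and surprise when ranking, e.g.
    # final_score = alpha * relevance + (1 - alpha) * surprise
    history = ["flu vaccine guidance for seniors", "new covid booster timing"]
    candidates = ["gut microbiome linked to mood", "updated flu shot schedule"]
    print(surprise_scores(history, candidates))
    ```

    In practice such a score would be combined with a relevance model, since the study's goal was articles that are surprising yet still useful, not arbitrary novelty.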
  3. While substantial advances have been made in recommender systems -- both in general and for news -- using datasets, offline analyses, and one-shot experiments, longitudinal studies with real users remain the gold standard and the only way to effectively measure the impact of recommender system designs (algorithmic and otherwise) on long-term user experience and behavior. While such infrastructure exists for studies within some individual organizations, the extensive cost and effort required to build the systems, content streams, and user base make it prohibitive for most researchers to conduct such studies. We propose to develop shared research infrastructure for the research community, and have received funding to gather community input on requirements, resources, and research goals for such an infrastructure. If the full infrastructure proposal is funded, it would result in recruiting a community of thousands of users who agree to use a news delivery application within which various researchers could install and conduct experiments. In this short paper we outline what we have heard and learned so far and present a set of questions for INRA attendees, whose feedback we will gather at the workshop.
  4. Though recommender systems are defined by personalization, recent work has shown the importance of additional, beyond-accuracy objectives, such as fairness. Because users often expect their recommendations to be purely personalized, these new algorithmic objectives must be communicated transparently in a fairness-aware recommender system. While explanation has a long history in recommender systems research, there has been little work that attempts to explain systems that use a fairness objective. Although previous work in other branches of AI has explored the use of explanations as a tool to increase fairness, that work has not focused on recommendation. Here, we consider user perspectives on fairness-aware recommender systems and techniques for enhancing their transparency. We describe the results of an exploratory interview study that investigates user perceptions of fairness, recommender systems, and fairness-aware objectives. We propose three features, informed by the needs of our participants, that could improve user understanding of and trust in fairness-aware recommender systems.
  5. Providing user-understandable explanations to justify recommendations can help users better understand the recommended items, increase the system's ease of use, and gain users' trust. A typical approach to realizing this is natural language generation. However, previous works mostly adopt recurrent neural networks for this purpose, leaving the potentially more effective pre-trained Transformer models under-explored. In fact, user and item IDs, as important identifiers in recommender systems, are inherently in a different semantic space from the words that pre-trained models were trained on. Thus, how to effectively fuse IDs into such models becomes a critical issue. Inspired by recent advances in prompt learning, we propose two solutions: finding alternative words to represent IDs (called discrete prompt learning) and directly inputting ID vectors to a pre-trained model (termed continuous prompt learning). In the latter case, the ID vectors are randomly initialized while the model has already been trained on large corpora, so the two are at different learning stages. To bridge the gap, we further propose two training strategies: sequential tuning and recommendation as regularization. Extensive experiments show that our continuous prompt learning approach, equipped with the training strategies, consistently outperforms strong baselines on three explainable-recommendation datasets.
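    As an illustration of the continuous prompt learning idea in the entry above, here is a minimal sketch assuming a PyTorch setup with a Hugging Face GPT-2 model: randomly initialized user and item ID embeddings are prepended to the input token embeddings, so the pre-trained language model conditions on the IDs when generating an explanation. This is a sketch of the general technique, not the authors' exact architecture; the class and variable names are hypothetical.

    ```python
    import torch
    import torch.nn as nn
    from transformers import GPT2LMHeadModel

    class PromptedExplainer(nn.Module):
        """Hypothetical wrapper: ID embeddings act as continuous prompts."""

        def __init__(self, n_users, n_items, model_name="gpt2"):
            super().__init__()
            self.lm = GPT2LMHeadModel.from_pretrained(model_name)
            d = self.lm.config.n_embd
            # Randomly initialized user/item embeddings in the LM's hidden size.
            self.user_emb = nn.Embedding(n_users, d)
            self.item_emb = nn.Embedding(n_items, d)

        def forward(self, user_ids, item_ids, input_ids, labels=None):
            # Token embeddings from the pre-trained model: (batch, seq_len, d)
            tok_emb = self.lm.transformer.wte(input_ids)
            # Continuous prompt: one user vector and one item vector per example.
            prompt = torch.stack(
                [self.user_emb(user_ids), self.item_emb(item_ids)], dim=1
            )  # (batch, 2, d)
            inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
            if labels is not None:
                # Prompt positions carry no target tokens; -100 is ignored by the loss.
                pad = torch.full(
                    (labels.size(0), 2), -100, dtype=labels.dtype, device=labels.device
                )
                labels = torch.cat([pad, labels], dim=1)
            return self.lm(inputs_embeds=inputs_embeds, labels=labels)

    # A rough analogue of "sequential tuning": first train only the ID embeddings
    # with the language model frozen, then unfreeze and fine-tune everything.
    model = PromptedExplainer(n_users=1000, n_items=5000)
    for p in model.lm.parameters():
        p.requires_grad = False
    ```

    Discrete prompt learning would instead map each ID to existing vocabulary tokens, which avoids the learning-stage mismatch the abstract mentions but limits how much user- and item-specific signal the prompt can carry.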