

Title: Recommendations as Treatments
In recent years, a new line of research has taken an interventional view of recommender systems, where recommendations are viewed as actions that the system takes to have a desired effect. This interventional view has led to the development of counterfactual inference techniques for evaluating and optimizing recommendation policies. This article explains how these techniques enable unbiased offline evaluation and learning despite biased data, and how they can inform considerations of fairness and equity in recommender systems.
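The counterfactual techniques the abstract refers to are commonly built on inverse propensity scoring (IPS). A minimal, self-contained sketch (the three-item catalog, click rates, and both policies below are invented for illustration): interactions are logged under a biased logging policy, and reweighting each logged reward by pi_target(a) / pi_log(a) yields an unbiased offline estimate of the target policy's value.

```python
import random

random.seed(0)

# Hypothetical setup: 3 items with unknown true click probabilities.
TRUE_CTR = [0.1, 0.5, 0.3]

# Logging policy (the biased data collector) and a target policy to evaluate.
P_LOG = [0.7, 0.2, 0.1]     # heavily favors item 0
P_TARGET = [0.1, 0.1, 0.8]  # favors item 2

def sample(dist):
    """Draw an index from a discrete distribution."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(dist):
        acc += p
        if r < acc:
            return i
    return len(dist) - 1

# Collect logged interactions under the logging policy.
logs = []
for _ in range(200_000):
    a = sample(P_LOG)
    r = 1.0 if random.random() < TRUE_CTR[a] else 0.0
    logs.append((a, r))

# IPS: reweight each logged reward by pi_target(a) / pi_log(a)
# to correct for the bias of the logging policy.
ips = sum(P_TARGET[a] / P_LOG[a] * r for a, r in logs) / len(logs)

# Ground truth, available here only because the simulator knows TRUE_CTR.
true_value = sum(p, for_ := None) if False else sum(
    p * q for p, q in zip(P_TARGET, TRUE_CTR))
print(f"IPS estimate: {ips:.3f}  true value: {true_value:.3f}")
```

The naive average of logged rewards would instead estimate the logging policy's value; the propensity weights are what make the estimate unbiased for the target policy, at the cost of higher variance when the two policies disagree strongly.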
Award ID(s): 1901168, 2008139
NSF-PAR ID: 10379588
Author(s) / Creator(s): ; ; ; ;
Date Published:
Journal Name: AI Magazine
Volume: 42
Issue: 3
ISSN: 0738-4602
Page Range / eLocation ID: 19 to 30
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. The study of complex networks is a significant development in modern science, and has enriched the social sciences, biology, physics, and computer science. Models and algorithms for such networks are pervasive in our society, and impact human behavior via social networks, search engines, and recommender systems, to name a few. A widely used algorithmic technique for modeling such complex networks is to construct a low-dimensional Euclidean embedding of the vertices of the network, where proximity of vertices is interpreted as the likelihood of an edge. Contrary to the common view, we argue that such graph embeddings do not capture salient properties of complex networks. The two properties we focus on are low degree and large clustering coefficients, which have been widely established to be empirically true for real-world networks. We mathematically prove that any embedding (that uses dot products to measure similarity) that can successfully create these two properties must have a rank that is nearly linear in the number of vertices. Among other implications, this establishes that popular embedding techniques such as singular value decomposition and node2vec fail to capture significant structural aspects of real-world complex networks. Furthermore, we empirically study a number of different embedding techniques based on dot product, and show that they all fail to capture the triangle structure.
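One of the two properties the passage focuses on, the clustering coefficient, can be computed directly from an adjacency structure. A minimal sketch (the toy graph is invented for illustration): a vertex's local clustering coefficient is the fraction of its neighbor pairs that are themselves connected, i.e., the fraction that close a triangle.

```python
def clustering(adj, v):
    """Local clustering coefficient of vertex v.

    adj maps each vertex to the set of its neighbors.
    """
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    # Count neighbor pairs that are themselves adjacent (triangles through v).
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# Toy graph: a triangle {0, 1, 2} plus a pendant vertex 3 attached to 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(clustering(adj, 0))  # 1 of 3 neighbor pairs is connected -> 1/3
print(clustering(adj, 1))  # both neighbors of 1 are connected -> 1.0
```

The paper's empirical test amounts to building graphs from dot-product embeddings (e.g., by thresholding pairwise dot products) and checking whether averages of this quantity stay large while degrees stay small, which the authors prove is impossible at low rank.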

  2. Though recommender systems are defined by personalization, recent work has shown the importance of additional, beyond-accuracy objectives, such as fairness. Because users often expect their recommendations to be purely personalized, these new algorithmic objectives must be communicated transparently in a fairness-aware recommender system. While explanation has a long history in recommender systems research, there has been little work that attempts to explain systems that use a fairness objective. Although previous work in other branches of AI has explored the use of explanations as a tool to increase fairness, that work has not focused on recommendation. Here, we consider user perspectives on fairness-aware recommender systems and techniques for enhancing their transparency. We describe the results of an exploratory interview study that investigates user perceptions of fairness, recommender systems, and fairness-aware objectives. We propose three features – informed by the needs of our participants – that could improve user understanding of and trust in fairness-aware recommender systems.
  3. As one of the most pervasive applications of machine learning, recommender systems play an important role in assisting human decision making. The satisfaction of users and the interests of platforms are closely tied to the quality of the generated recommendations. However, as highly data-driven systems, recommender systems can be affected by data or algorithmic bias and thus generate unfair results, which can undermine users' trust in them. It is therefore crucial to address potential unfairness problems in recommendation settings. Recently, fairness considerations in recommender systems have received growing attention, with an expanding literature on approaches to promote fairness in recommendation. However, the studies are rather fragmented and lack a systematic organization, making the domain difficult for new researchers to enter. This motivates us to provide a systematic survey of existing work on fairness in recommendation. The survey focuses on the foundations of the fairness-in-recommendation literature. It first presents a brief introduction to fairness in basic machine learning tasks such as classification and ranking, both to give a general overview of fairness research and to introduce the more complex situations and challenges that must be considered when studying fairness in recommender systems. It then introduces fairness in recommendation, focusing on taxonomies of current fairness definitions, typical techniques for improving fairness, and datasets for fairness studies in recommendation. The survey also discusses challenges and opportunities in fairness research, with the hope of promoting the fair recommendation research area and beyond.
  4. Surveys are often used in educational research to gather information about respondents without considering the effect of the survey questions on the survey-takers themselves. Does the very act of taking a survey influence perspectives, mindsets, and even behaviors? Does a survey itself effectuate attitudinal change? Such effects of surveys, and their implications for survey data interpretation, warrant close attention. There is a long tradition of research on surveys as behavioral interventions within political science and social psychology, but limited attention has been given to the topic in engineering education, and in higher education more broadly. Recently the engineering education community has started to examine the potential effects of assessment techniques (including surveys) as catalysts for reflection. In March 2014, the Consortium to Promote Reflection in Engineering Education (CPREE), representing a two-year collaboration amongst 12 campuses, was established to promote "a broader understanding and use of reflective techniques in engineering education."1 CPREE's formation suggests a growing recognition of reflection as an important and underemphasized aspect of an engineer's education. CPREE defines reflection as "exploring the meaning of experiences and the consequences of the meanings for future action" and emphasizes the importance of taking action as a result of ascribing meaning to experiences.1 Surveys may be one of several tools that create opportunities for reflection; others include "exam wrappers" and "homework wrappers" that encourage students to explore how they feel about an assignment or task as part of making meaning of it2,3 (and stimulating the kind of reflection that can lead to action).
The current study bridges these two frameworks of behavioral interventions and reflection to consider the "extra-ordinate" dimensions of survey-taking, and explores how survey participation may (1) support students' reflection on past experiences, meaning-making of these experiences, and insights that "inform [their] path going forward,"1 and (2) be associated with students' subsequent behaviors. We first review the broader literature on the interventional effects of surveys in political studies and social psychology, after which we present the results obtained from including an optional reflection question at the end of an engineering education survey. We conclude that, when designing their research, educators would benefit from considering the range of potential impacts that responding to questions may have on students' thoughts and actions, rather than treating surveys as neutral data collection devices.
  5. Artificial Intelligence (AI) systems for mental healthcare (MHCare) have grown rapidly as the importance of early interventions for patients with chronic mental health (MH) conditions has been recognized. Social media (SocMedia) has emerged as the go-to platform for supporting patients seeking MHCare. The creation of peer-support groups free of social stigma has led patients to transition from clinical settings to SocMedia-supported interactions for quick help. Researchers have explored SocMedia content in search of cues that show correlation or causation between different MH conditions, in order to design better interventional strategies. User-level, classification-based AI systems were designed to leverage diverse SocMedia data from various MH conditions to predict MH conditions. Subsequently, researchers created classification schemes to measure the severity of each MH condition. Such ad-hoc schemes, engineered features, and models not only require large amounts of data but also fail to support clinically acceptable and explainable reasoning over the outcomes. Improving Neural-AI for MHCare requires infusing the clinical symbolic knowledge that clinicians use in decision making. An impactful use case of Neural-AI systems in MH is conversational systems. These systems require coordination between classification and generation to facilitate humanistic conversation in conversational agents (CA). Current CAs built on deep language models lack factual correctness, medical relevance, and safety in their generations, which are intertwined with unexplainable statistical classification techniques. This lecture-style tutorial will demonstrate our investigations into neuro-symbolic methods of infusing clinical knowledge to improve the outcomes of Neural-AI systems for MHCare interventions: (a) We will discuss the use of diverse clinical knowledge in creating specialized datasets to train Neural-AI systems effectively.
(b) Patients with cardiovascular disease express MH symptoms differently based on gender differences. We will show that knowledge-infused Neural-AI systems can identify gender-specific MH symptoms in such patients. (c) We will describe strategies for infusing clinical process knowledge as heuristics and constraints to improve language models in generating relevant questions and responses.