Evaluating Stochastic Rankings with Expected Exposure
We introduce the concept of \emph{expected exposure} as the average attention ranked items receive from users over repeated samples of the same query. Furthermore, we advocate for the adoption of the principle of equal expected exposure: given a fixed information need, no item should receive more or less expected exposure than any other item of the same relevance grade. We argue that this principle is desirable for many retrieval objectives and scenarios, including topical diversity and fair ranking. Leveraging user models from existing retrieval metrics, we propose a general evaluation methodology based on expected exposure and draw connections to related metrics in information retrieval evaluation. Importantly, this methodology relaxes classic information retrieval assumptions, allowing a system, in response to a query, to produce a \emph{distribution over rankings} instead of a single fixed ranking. We study the behavior of the expected exposure metric and stochastic rankers across a variety of information access conditions, including \emph{ad hoc} retrieval and recommendation. We believe that measuring and optimizing expected exposure metrics using randomization opens a new area for retrieval algorithm development and progress.
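The abstract's core quantity, expected exposure, can be illustrated with a small sketch. Assuming a logarithmic position-based attention model (the DCG discount) and a Plackett-Luce stochastic ranker, both of which are illustrative choices and not necessarily the paper's exact user model, the expected exposure of each item is its average attention over repeated sampled rankings:

```python
import math
import random

def exposure_at_rank(k):
    # Position-based user model: attention decays logarithmically
    # with rank (the DCG discount); rank k is 1-indexed.
    return 1.0 / math.log2(k + 1)

def sample_ranking(items, scores):
    # Plackett-Luce sampling: repeatedly draw the next item without
    # replacement, with probability proportional to its score.
    remaining = list(items)
    weights = dict(zip(items, scores))
    ranking = []
    while remaining:
        choice = random.choices(remaining,
                                weights=[weights[i] for i in remaining])[0]
        ranking.append(choice)
        remaining.remove(choice)
    return ranking

def expected_exposure(items, scores, n_samples=10000, seed=0):
    # Monte Carlo estimate: average per-item exposure over repeated
    # samples of the ranking distribution for the same query.
    random.seed(seed)
    totals = {i: 0.0 for i in items}
    for _ in range(n_samples):
        for rank, item in enumerate(sample_ranking(items, scores), start=1):
            totals[item] += exposure_at_rank(rank)
    return {i: totals[i] / n_samples for i in items}

# Items "a" and "b" have equal scores (equal relevance), so the
# principle of equal expected exposure says they should receive
# approximately equal exposure; "c" is less relevant and receives less.
ee = expected_exposure(["a", "b", "c"], [1.0, 1.0, 0.2])
```

Note how randomization achieves what no single fixed ranking can: a deterministic ranker must place either "a" or "b" first on every impression, while the stochastic ranker equalizes their exposure in expectation.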
- Award ID(s): 1751278
- Publication Date:
- NSF-PAR ID: 10199451
- Journal Name: Proceedings of the 29th ACM International Conference on Information and Knowledge Management
- Page Range or eLocation-ID: 275 to 284
- Sponsoring Org: National Science Foundation
More Like this
-
Table retrieval is the task of extracting the tables most relevant to a user's query. It is an important task because many domains store useful information in structured tabular form. Given a user's query, the goal is to produce a relevance ranking over query-table pairs, such that higher-ranked tables are more relevant to the query. In this paper, we present a context-aware table retrieval method that is based on a novel embedding for attribute tokens. We find that differentiated types of context are useful in building word embeddings. We also find that including …
-
Users often fail to formulate their complex information needs in a single query. As a consequence, they must scan multiple result pages and/or reformulate their queries, which is a frustrating experience. Alternatively, systems can improve user satisfaction by proactively asking users questions to clarify their information needs. Asking clarifying questions is especially important in information-seeking conversational systems, since they can return only a limited number (often only one) of results. In this paper, we formulate the task of asking clarifying questions in open-domain information retrieval. We propose an offline evaluation methodology for the task. In this research, …
-
Ranking evaluation metrics play an important role in information retrieval, providing optimization objectives during development and a means of assessing deployed performance. Recently, fairness of rankings has been recognized as crucial, especially as automated systems are increasingly used for high-impact decisions. While numerous fairness metrics have been proposed, a comparative analysis to understand their interrelationships is lacking. Even for fundamental statistical parity metrics, which measure group advantage, it remains unclear whether different metrics measure the same phenomena, or when one metric may produce different results from another. To address these open questions, we formulate a conceptual framework for analytical comparison …
-
The learning-to-rank problem aims at ranking items to maximize the exposure of those most relevant to a user query. A desirable property of such ranking systems is to guarantee some notion of fairness among specified item groups. While fairness has recently been considered in the context of learning-to-rank systems, current methods cannot provide guarantees on the fairness of the predicted rankings. This paper addresses this gap and introduces Smart Predict and Optimize for Fair Ranking (SPOFR), an integrated optimization and learning framework for fairness-constrained learning to rank. The end-to-end SPOFR framework includes a constrained optimization sub-model and produces ranking policies that …