We propose a novel combinatorial inference framework for general uncertainty quantification in ranking problems. We consider the widely adopted Bradley-Terry-Luce (BTL) model, in which each item is assigned a positive preference score that determines the Bernoulli distributions of the outcomes of pairwise comparisons. Our proposed method infers general ranking properties of the BTL model, including “local” properties such as whether one item is preferred over another and “global” properties such as whether an item is among the top-K ranked items. We further generalize the inferential framework to multiple testing problems in which we control the false discovery rate (FDR), and we apply the method to infer the top-K ranked items. We also derive an information-theoretic lower bound that establishes the minimax optimality of the proposed method. Extensive numerical studies on both synthetic and real data sets corroborate the theory.
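For concreteness, here is a minimal simulation sketch of the comparison mechanism the BTL model posits: each item i carries a positive preference score theta[i], and item i beats item j in a single comparison with probability theta[i] / (theta[i] + theta[j]). The scores, sample sizes, and function names below are hypothetical choices for illustration only; this is not the paper's inference procedure.

```python
import numpy as np

# Minimal BTL sketch (illustrative only): each item i has a positive preference
# score theta[i], and item i beats item j with probability theta[i] / (theta[i] + theta[j]).
rng = np.random.default_rng(0)
theta = np.array([2.0, 1.0, 0.5, 0.25])  # hypothetical preference scores

def simulate_comparisons(i, j, n_games=50):
    """Number of times item i beats item j in n_games independent Bernoulli trials."""
    p_ij = theta[i] / (theta[i] + theta[j])  # BTL win probability
    return rng.binomial(n_games, p_ij)

# Empirical win fractions approach the BTL probabilities as n_games grows.
for i in range(len(theta)):
    for j in range(i + 1, len(theta)):
        wins = simulate_comparisons(i, j)
        p_true = theta[i] / (theta[i] + theta[j])
        print(f"item {i} vs item {j}: {wins}/50 wins (true p = {p_true:.2f})")
```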
Overt visual attention and value computation in complex risky choice
Traditional models of decision making under uncertainty explain human behavior in simple situations with a minimal set of alternatives and attributes. Some of them, such as prospect theory, have proven successful and robust in such simple situations. Yet less is known about preference formation during decision making in more complex cases. Furthermore, it is generally accepted that attention plays a role in the decision process, but most theories make simplifying assumptions about where attention is deployed. In this study, we replace these assumptions by measuring where humans deploy overt attention, i.e., where they fixate. To assess the influence of task complexity, participants perform two tasks. The simpler of the two requires participants to choose between two alternatives with two attributes each (four items to consider). The more complex one requires a choice between four alternatives with four attributes each (16 items to consider). We then compare a large set of model classes of different levels of complexity by considering the dynamic interactions between uncertainty, attention, and pairwise comparisons between attribute values. The task of all models is to predict the choices humans make, using each participant's sequence of observed eye movements as input to the model.
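As a purely hypothetical illustration of the general model-class idea (not one of the models compared in the paper), the sketch below accumulates fixation-duration-weighted attribute values for each alternative and converts the accumulated values into choice probabilities with a softmax; every name, value, and parameter here is an assumption made for the example.

```python
import numpy as np

# Hypothetical attention-weighted value model (illustration, not a model from the paper).
# Each alternative's value estimate accumulates the attribute values the participant
# fixates, weighted by fixation duration; a softmax maps values to choice probabilities.
rng = np.random.default_rng(1)

n_alternatives, n_attributes = 4, 4  # the "complex" task: 16 items to consider
attribute_values = rng.uniform(0, 1, size=(n_alternatives, n_attributes))

# A made-up fixation sequence: (alternative index, attribute index, duration in ms).
# In the experiment, this would come from the recorded eye movements.
fixations = [(0, 1, 250), (2, 1, 180), (2, 3, 300), (0, 3, 220), (3, 0, 150)]

def predict_choice_probabilities(fixations, attribute_values, temperature=5.0):
    """Accumulate duration-weighted fixated attribute values, then apply a softmax."""
    accumulated = np.zeros(attribute_values.shape[0])
    for alt, attr, duration_ms in fixations:
        accumulated[alt] += (duration_ms / 1000.0) * attribute_values[alt, attr]
    logits = temperature * accumulated
    exp_logits = np.exp(logits - logits.max())  # numerically stable softmax
    return exp_logits / exp_logits.sum()

print(predict_choice_probabilities(fixations, attribute_values))
```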
- Award ID(s): 1835202
- Publication Date:
- NSF-PAR ID: 10394582
- Journal Name: bioRxiv
- DOI: doi.org/10.1101/2020.12.08.416313
- Page Range or eLocation-ID: 1-49
- ISSN: 2692-8205
- Sponsoring Org: National Science Foundation
More Like this
- A prerequisite for social coordination is bidirectional communication between teammates, each playing two roles simultaneously: as receptive listeners and expressive speakers. For robots working with humans in complex situations with multiple goals that differ in importance, failure to fulfill the expectation of either role could undermine group performance due to misalignment of values between humans and robots. Specifically, a robot needs to serve as an effective listener to infer human users’ intents from instructions and feedback and as an expressive speaker to explain its decision processes to users. Here, we investigate how to foster effective bidirectional human-robot communication in the context of value alignment: collaborative robots and users form an aligned understanding of the importance of possible task goals. We propose an explainable artificial intelligence (XAI) system in which a group of robots predicts users’ values by taking in situ feedback into consideration while communicating their decision processes to users through explanations. To learn from human feedback, our XAI system integrates a cooperative communication model for inferring human values associated with multiple desirable goals. To be interpretable to humans, the system simulates human mental dynamics and predicts optimal explanations using graphical models. We conducted psychological experiments to examine the core components …
- Observational learning models seek to understand how distributed agents learn from observing the actions of others. In the basic model, agents choose between two alternatives, where the underlying value of each alternative is the same for every agent. Agents do not know this value; they only observe a noisy signal of it and make their decision based on this signal and on observations of other agents’ actions. Here, we instead consider a scenario in which the choices faced by an agent exhibit a negative externality, so that the value of a choice may decrease depending on the history of other agents selecting that choice. We study the learning behavior of Bayesian agents under such an externality and show that it can lead to very different outcomes compared to models without the externality. (A toy simulation of this setup is sketched after this list.)
- AI systems are often used to make or contribute to important decisions in a growing range of applications, including criminal justice, hiring, and medicine. Since these decisions impact human lives, it is important that AI systems act in ways that align with human values. Techniques for preference modeling and social choice help researchers learn and aggregate people's preferences, which are used to guide AI behavior; it is therefore imperative that these learned preferences are accurate. These techniques often assume that people are willing to express strict preferences over alternatives, which is not true in practice. People are often indecisive, and especially so when their decision has moral implications. The philosophy and psychology literature shows that indecision is a measurable and nuanced behavior, and that there are several different reasons people are indecisive. This complicates the task of both learning and aggregating preferences, since most of the relevant literature makes restrictive assumptions about the meaning of indecision. We begin to close this gap by formalizing several mathematical indecision models based on theories from philosophy, psychology, and economics; these models can be used to describe (indecisive) agent decisions, both when agents are allowed to express indecision and when they are not. …
- There are many competing definitions of what statistical properties make a machine learning model fair. Unfortunately, research has shown that some key properties are mutually exclusive. Realistic models are thus necessarily imperfect, choosing one side of a trade-off or the other. To gauge perceptions of the fairness of such realistic, imperfect models, we conducted a between-subjects experiment with 502 Mechanical Turk workers. Each participant compared two models for deciding whether to grant bail to criminal defendants. The first model equalized one potentially desirable model property, with the other property varying across racial groups; the second model did the opposite. We tested pairwise trade-offs between four properties: accuracy, false positive rate, outcomes, and the consideration of race. We also varied which racial group the model disadvantaged. We observed a preference among participants for equalizing the false positive rate between groups over equalizing accuracy. Nonetheless, no preference was overwhelming, and both sides of each trade-off we tested were strongly preferred by a non-trivial fraction of participants. We observed nuanced distinctions between participants considering a model "unbiased" and considering it "fair." Furthermore, even when a model within a trade-off pair was seen as fair and unbiased by a majority of participants, …
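Returning to the second related item above (observational learning with a negative externality), the toy simulation below illustrates that setup under strong simplifying assumptions: agents use a naive signal-minus-congestion rule rather than a full Bayesian update, and all parameter values are made up for illustration. It is not the cited paper's model or analysis.

```python
import numpy as np

# Toy simulation of observational learning with a negative externality (illustrative only).
# Each agent gets a noisy private signal of a common underlying value, sees how many
# earlier agents chose each alternative, and the payoff of an alternative falls with
# the number of prior adopters (a congestion-style externality). For simplicity the
# agents here use a naive rule instead of a full Bayesian update.
rng = np.random.default_rng(2)

true_values = np.array([1.0, 1.0])  # common underlying values of the two alternatives
congestion_cost = 0.02              # payoff lost per earlier adopter (the externality)
signal_noise = 0.5
n_agents = 100

counts = np.zeros(2)  # how many earlier agents chose each alternative
for _ in range(n_agents):
    signals = true_values + rng.normal(0.0, signal_noise, size=2)  # private noisy signals
    expected_payoff = signals - congestion_cost * counts           # penalize crowded choices
    choice = int(np.argmax(expected_payoff))
    counts[choice] += 1

print("final choice counts:", counts)
```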