

Search for: All records

Award ID contains: 2007955

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Nudging is a behavioral strategy aimed at influencing people’s thoughts and actions. Nudging techniques can be found in many situations in our daily lives, and they can be targeted either at humans’ fast and unconscious thinking, e.g., by using images to generate fear, or at the more careful and effortful slow thinking, e.g., by releasing information that makes us reflect on our choices. In this paper, we propose and discuss a value-based AI-human collaborative framework where AI systems nudge humans by proposing decision recommendations. Three different nudging modalities, based on when recommendations are presented to the human, are intended to stimulate human fast thinking, slow thinking, or meta-cognition. Values that are relevant to a specific decision scenario are used to decide when and how to use each of these nudging modalities. Examples of values are decision quality, speed, human upskilling and learning, human agency, and privacy. Several values can be present at the same time, and their priorities can vary over time. The framework treats values as parameters to be instantiated in a specific decision environment.
    Free, publicly-accessible full text available August 1, 2024
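To make the value-parameterized choice of nudging modality in entry 1 concrete, the sketch below maps a hypothetical value profile to one of the three modalities. The value names, thresholds, and modality labels are illustrative assumptions, not the paper's framework.

```python
# Hypothetical sketch of a value-parameterized choice of nudging modality.
# Value names, thresholds, and modality labels are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ValueProfile:
    decision_quality: float  # each weight in [0, 1]
    speed: float
    human_agency: float
    upskilling: float

def choose_nudging_modality(values: ValueProfile) -> str:
    """Pick when the AI recommendation is shown to the human."""
    if values.speed > max(values.human_agency, values.upskilling):
        # Showing the recommendation up front targets fast, intuitive thinking.
        return "recommend_first"      # nudges fast thinking
    if values.upskilling >= values.speed:
        # Withholding the recommendation until the human commits
        # encourages deliberate, slow thinking.
        return "human_decides_first"  # nudges slow thinking
    # Otherwise let the human request the recommendation on demand,
    # prompting reflection on whether help is needed (meta-cognition).
    return "recommendation_on_demand"

print(choose_nudging_modality(ValueProfile(0.8, 0.9, 0.3, 0.2)))
```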
  2. In representative democracies, regular election cycles are supposed to prevent misbehavior by elected officials, hold them accountable, and subject them to the “will of the people.” Pandering, or dishonest preference reporting by candidates campaigning for election, undermines this democratic idea. Much of the work on Computational Social Choice to date has investigated strategic actions in only a single election. We introduce a novel formal model of pandering and examine the resilience of two voting systems, Representative Democracy (RD) and Flexible Representative Democracy (FRD), to pandering within a single election and across multiple rounds of elections. For both voting systems, our analysis centers on the types of strategies candidates employ and how voters update their views of candidates based on how the candidates have pandered in the past. We provide theoretical results on the complexity of pandering in our setting for a single election, formulate our problem for multiple cycles as a Markov Decision Process, and use reinforcement learning to study the effects of pandering by single candidates and groups of candidates over many rounds.
    Free, publicly-accessible full text available August 1, 2024
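Entry 2 formulates multi-cycle pandering as a Markov Decision Process solved with reinforcement learning. The toy sketch below illustrates that framing for a single candidate: states are discretized voter-trust levels, actions are honest reporting or pandering, and tabular Q-learning learns when pandering pays off. All rewards and transition dynamics are invented for illustration and are not the paper's model.

```python
# Toy MDP + tabular Q-learning illustration of repeated-election pandering.
# States, rewards, and transitions are invented for the sketch.

import random

STATES = range(5)          # voter trust in the candidate: 0 (low) .. 4 (high)
ACTIONS = ["honest", "pander"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(trust, action):
    """Return (reward, next_trust). Pandering wins now but erodes trust."""
    if action == "pander":
        win_prob = 0.5 + 0.1 * trust + 0.2        # short-term boost
        next_trust = max(trust - 1, 0)            # voters update their views
    else:
        win_prob = 0.5 + 0.1 * trust
        next_trust = min(trust + 1, 4)
    reward = 1.0 if random.random() < win_prob else 0.0
    return reward, next_trust

trust = 2
for _ in range(20000):                            # election cycles
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: Q[(trust, x)])
    r, nxt = step(trust, a)
    best_next = max(Q[(nxt, b)] for b in ACTIONS)
    Q[(trust, a)] += alpha * (r + gamma * best_next - Q[(trust, a)])
    trust = nxt

# Learned policy per trust level
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES})
```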
  3. Sometimes agents care not only about the outcomes of collective decisions but also about how decisions are made. Both the outcome and the procedure affect whether agents see a decision as legitimate or acceptable. We focus on incorporating agents’ preferences over decision-making processes into the process itself. Taking whole decisions, including decision rules and outcomes, to be the object of agent preferences rather than only decision outcomes, we (1) identify natural, plausible preference structures and key properties, (2) develop general mechanisms for aggregating these preferences to maximize the acceptability of decisions, and (3) analyze the performance of our acceptance-maximizing mechanisms. We apply our general approach to the setting of dichotomous choice, and compare the worst-case rates of acceptance achievable among populations of agents of different types. We include the special case of rule selection, or amendment, and show that amendment procedures proposed by Abramowitz et al. [2] achieve universal acceptance with certain agent types. 
    Free, publicly-accessible full text available July 1, 2024
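As a rough illustration of the acceptance-maximizing mechanisms described in entry 3, the sketch below scores candidate decision rules in a dichotomous (yes/no) setting by how many agents accept the whole decision, procedure plus outcome. The agent types and acceptance predicates are invented assumptions, not the paper's definitions.

```python
# Minimal sketch: pick the decision rule that maximizes how many agents
# accept the (rule, outcome) pair. Agent types are illustrative assumptions.

def outcome(rule, votes):
    yes = sum(votes)
    if rule == "majority":
        return yes * 2 > len(votes)
    if rule == "supermajority":          # 2/3 threshold
        return yes * 3 >= 2 * len(votes)
    raise ValueError(rule)

def accepts(agent, rule, result):
    kind, vote, preferred_rule = agent
    if kind == "outcome_oriented":       # cares only about the result
        return result == vote
    if kind == "procedure_oriented":     # cares only about how it was decided
        return rule == preferred_rule
    return result == vote or rule == preferred_rule   # accepts either

def best_rule(agents, votes, rules=("majority", "supermajority")):
    scores = {r: sum(accepts(a, r, outcome(r, votes)) for a in agents)
              for r in rules}
    return max(scores, key=scores.get), scores

agents = [("outcome_oriented", True, None),
          ("procedure_oriented", False, "supermajority"),
          ("mixed", True, "majority")]
votes = [a[1] for a in agents]
print(best_rule(agents, votes))
```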
  4. When it comes to collective decisions, we have to deal with the fact that agents have preferences over both decision outcomes and how decisions are made. If we create rules for aggregating preferences over rules, and rules for preferences over rules for preferences over rules, and so on, it would appear that we run into infinite regress with preferences and rules at successively higher “levels.” The starting point of our analysis is the claim that such regress should not be a problem in practice, as any such preferences will necessarily be bounded in complexity and structured coherently in accordance with some (possibly latent) normative principles. Our core contributions are (1) the identification of simple, intuitive preference structures at low levels that can be generalized to form the building blocks of preferences at higher levels, and (2) the development of algorithms for maximizing the number of agents with such low-level preferences who will “accept” a decision. We analyze algorithms for acceptance maximization in two different domains: asymmetric dichotomous choice and constitutional amendment. In both settings we study the worst-case performance of the appropriate algorithms, and reveal circumstances under which universal acceptance is possible. In particular, we show that constitutional amendment procedures proposed recently by Abramowitz et al. [2] can achieve universal acceptance.
    Free, publicly-accessible full text available June 1, 2024
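Entry 4 studies constitutional amendment as one of its domains. The sketch below simulates a simple two-stage process: agents first vote, under the current threshold, on adopting a proposed threshold, then decide the issue under whichever rule survives, and the fraction of agents who accept the result is counted. The procedure and agent model are illustrative assumptions, not the Abramowitz et al. mechanism.

```python
# Illustrative two-stage amendment simulation; procedure and agents are
# assumptions made for the sketch, not the paper's mechanism.

def passes(threshold, yes, n):
    return yes >= threshold * n

def amend_then_decide(agents, current=0.5, proposed=2/3):
    n = len(agents)
    # Stage 1: vote on the amendment under the current threshold.
    amend_yes = sum(a["prefers_rule"] == proposed for a in agents)
    rule = proposed if passes(current, amend_yes, n) else current
    # Stage 2: decide the issue under the (possibly amended) rule.
    issue_yes = sum(a["issue_vote"] for a in agents)
    result = passes(rule, issue_yes, n)
    # An agent accepts if it likes the outcome or the rule that produced it.
    accepted = sum(a["issue_vote"] == result or a["prefers_rule"] == rule
                   for a in agents)
    return rule, result, accepted / n

agents = [{"issue_vote": True,  "prefers_rule": 2/3},
          {"issue_vote": False, "prefers_rule": 2/3},
          {"issue_vote": True,  "prefers_rule": 0.5}]
print(amend_then_decide(agents))   # here every agent accepts the decision
```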
  5. Free, publicly-accessible full text available June 1, 2024
  6. Many real-life scenarios require humans to make difficult trade-offs: do we always follow all the traffic rules, or do we violate the speed limit in an emergency? In general, how should we account for and balance the ethical values, safety recommendations, and societal norms when we are trying to achieve a certain objective? To enable effective AI-human collaboration, we must equip AI agents with a model of how humans make such trade-offs in environments where there is not only a goal to be reached, but there are also ethical constraints to consider and possibly align with. These ethical constraints could be either deontological rules about actions that should not be performed or consequentialist policies that recommend avoiding certain states of the world. Our purpose is to build AI agents that can mimic human behavior in these ethically constrained decision environments, with the long-term research goal of using AI to help humans make better moral judgments and take better actions. To this end, we propose a computational approach where competing objectives and ethical constraints are orchestrated through a method that leverages a cognitive model of human decision making, called multi-alternative decision field theory (MDFT). Using MDFT, we build an orchestrator, called MDFT-Orchestrator (MDFT-O), that is both general and flexible. We also show experimentally that MDFT-O not only generates better decisions than a heuristic that takes a weighted average of competing policies (WA-O), but also performs better at mimicking human decisions collected through Amazon Mechanical Turk (AMT). Our methodology is therefore able to faithfully model human decisions in ethically constrained decision environments.
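Entry 6 builds its orchestrator on multi-alternative decision field theory (MDFT), in which preferences for alternatives accumulate over time as attention switches stochastically among attributes. The sketch below is a simplified MDFT loop over invented actions and objectives; the matrices, parameters, and stopping rule are assumptions, not the MDFT-O configuration.

```python
# Simplified MDFT-style orchestration: preferences accumulate as attention
# switches between competing objectives. All numbers are invented.

import numpy as np

rng = np.random.default_rng(0)

# Rows: candidate actions; columns: objectives [goal_progress, ethical_compliance].
M = np.array([[0.9, 0.2],     # fast but rule-violating action
              [0.5, 0.9],     # slower, compliant action
              [0.3, 0.6]])    # cautious action
w = np.array([0.5, 0.5])      # probability of attending to each objective
S = 0.95 * np.eye(3) - 0.02 * (np.ones((3, 3)) - np.eye(3))  # decay + lateral inhibition
C = np.eye(3) - np.ones((3, 3)) / 3                          # contrast matrix

def mdft_choice(threshold=1.0, max_steps=1000):
    P = np.zeros(3)                            # accumulated preferences
    for _ in range(max_steps):
        attended = rng.choice(2, p=w)          # attend to one objective
        W = np.eye(2)[attended]
        valence = C @ M @ W                    # momentary advantage of each action
        P = S @ P + valence
        if P.max() >= threshold:               # stop once an action dominates
            break
    return int(P.argmax())

print("chosen action:", mdft_choice())
```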
  7. We propose a novel formulation of group fairness with biased feedback in the contextual multi-armed bandit (CMAB) setting. In the CMAB setting, a sequential decision maker must, at each time step, choose an arm to pull from a finite set of arms after observing some context for each of the potential arm pulls. In our model, arms are partitioned into two or more sensitive groups based on some protected feature(s) (e.g., age, race, or socio-economic status). Initial rewards received from pulling an arm may be distorted due to some unknown societal or measurement bias. We assume that in reality these groups are equal despite the biased feedback received by the agent. To alleviate this, we learn a societal bias term which can be used both to find the source of bias and to potentially fix the problem outside of the algorithm. We provide a novel algorithm that can accommodate this notion of fairness for an arbitrary number of groups, and provide a theoretical bound on the regret for our algorithm. We validate our algorithm using synthetic data and two real-world datasets for intervention settings wherein we want to allocate resources fairly across groups.
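Entry 7 learns a societal bias term inside a contextual bandit loop. The minimal sketch below shows the general idea in a non-contextual setting: since the groups are assumed equal in reality, the gap between a group's mean observed reward and the overall mean can serve as an estimate of that group's bias, which is subtracted before arm selection. The algorithm, parameters, and data are illustrative, not the paper's method.

```python
# Bias-corrected epsilon-greedy bandit sketch with an additive group bias.
# Rewards, bias values, and the correction rule are invented for illustration.

import random
from collections import defaultdict

arms = [("A", 0), ("A", 1), ("B", 2), ("B", 3)]    # (group, arm id)
true_mean = {0: 0.6, 1: 0.5, 2: 0.6, 3: 0.5}       # groups equal in reality
bias = {"A": 0.0, "B": -0.3}                        # unknown societal bias

counts, sums = defaultdict(int), defaultdict(float)
group_counts, group_sums = defaultdict(int), defaultdict(float)

for t in range(5000):
    overall = sum(group_sums.values()) / max(sum(group_counts.values()), 1)

    def corrected(arm):
        g, i = arm
        if counts[i] == 0:
            return float("inf")                    # explore unpulled arms first
        bias_hat = group_sums[g] / group_counts[g] - overall
        return sums[i] / counts[i] - bias_hat      # debiased value estimate

    arm = random.choice(arms) if random.random() < 0.1 else max(arms, key=corrected)
    g, i = arm
    r = true_mean[i] + bias[g] + random.gauss(0, 0.1)   # biased feedback
    counts[i] += 1; sums[i] += r
    group_counts[g] += 1; group_sums[g] += r

# Estimated per-group bias (deviation of group mean from overall mean)
print({g: round(group_sums[g] / group_counts[g] - overall, 2) for g in ("A", "B")})
```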
  8. Aggregating signals from a collection of noisy sources is a fundamental problem in many domains including crowd-sourcing, multi-agent planning, sensor networks, signal processing, voting, ensemble learning, and federated learning. The core question is how to aggregate signals from multiple sources (e.g. experts) in order to reveal an underlying ground truth. While a full answer depends on the type of signal, correlation of signals, and desired output, a problem common to all of these applications is that of differentiating sources based on their quality and weighting them accordingly. It is often assumed that this differentiation and aggregation is done by a single, accurate central mechanism or agent (e.g. judge). We complicate this model in two ways. First, we investigate both the setting with a single judge and the setting with multiple judges. Second, given this multi-agent interaction of judges, we investigate various constraints on the judges’ reporting space. We build on known results for the optimal weighting of experts and prove that an ensemble of sub-optimal mechanisms can perform optimally under certain conditions. We then show empirically that the ensemble approximates the performance of the optimal mechanism under a broader range of conditions.
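Entry 8 builds on known results for the optimal weighting of experts. For independent binary experts with known accuracies, the classical result weights each expert by the log-odds of their accuracy; the short sketch below demonstrates that weighted-majority rule on simulated reports. The competence values are made up for illustration.

```python
# Log-odds weighting of independent binary experts (classical optimal weights),
# evaluated on simulated reports. Competence values are invented.

import math
import random

random.seed(1)
competences = [0.9, 0.7, 0.6, 0.55]                  # P(expert reports the truth)
weights = [math.log(p / (1 - p)) for p in competences]

def weighted_majority(reports):
    score = sum(w if r else -w for w, r in zip(weights, reports))
    return score > 0

trials, correct = 10000, 0
for _ in range(trials):
    truth = random.random() < 0.5
    reports = [truth if random.random() < p else (not truth) for p in competences]
    correct += weighted_majority(reports) == truth

print("accuracy of optimally weighted ensemble:", correct / trials)
```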
  9. We analyze the run-time complexity of computing allocations that are both fair and maximize the utilitarian social welfare, defined as the sum of agents’ utilities. We focus on two tractable fairness concepts: envy-freeness up to one item (EF1) and proportionality up to one item (PROP1). We consider two computational problems: (1) Among the utilitarian-maximal allocations, decide whether there exists one that is also fair; (2) among the fair allocations, compute one that maximizes the utilitarian welfare. We show that both problems are strongly NP-hard when the number of agents is variable, and remain NP-hard for a fixed number of agents greater than two. For the special case of two agents, we find that problem (1) is polynomial-time solvable, while problem (2) remains NP-hard. Finally, with a fixed number of agents, we design pseudopolynomial-time algorithms for both problems. We extend our results to the stronger fairness notions envy-freeness up to any item (EFx) and proportionality up to any item (PROPx). 
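For problem (2) in entry 9, finding a fair allocation that maximizes utilitarian welfare, the sketch below brute-forces a tiny two-agent instance: it enumerates allocations, keeps those that are envy-free up to one item (EF1), and returns a welfare-maximizing one. The utilities are invented, and brute force is used only to make the definitions concrete; it is not the paper's (pseudopolynomial-time) algorithm.

```python
# Brute-force sketch: best utilitarian welfare among EF1 allocations,
# for a tiny two-agent instance with invented utilities.

from itertools import product

utils = [[6, 1, 3, 2],      # agent 0's utility for items 0..3
         [4, 5, 1, 4]]      # agent 1's utility for items 0..3

def value(agent, bundle):
    return sum(utils[agent][i] for i in bundle)

def is_ef1(bundles):
    for a in range(len(bundles)):
        for b in range(len(bundles)):
            if a == b or not bundles[b]:
                continue
            # Envy must vanish after removing some single item from b's bundle.
            if all(value(a, bundles[a]) < value(a, bundles[b] - {i})
                   for i in bundles[b]):
                return False
    return True

best = None
for assignment in product(range(2), repeat=4):        # each item to one agent
    bundles = [set(), set()]
    for item, agent in enumerate(assignment):
        bundles[agent].add(item)
    if is_ef1(bundles):
        welfare = sum(value(a, bundles[a]) for a in range(2))
        if best is None or welfare > best[0]:
            best = (welfare, bundles)

print(best)
```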
  10. Current AI systems lack several important human capabilities, such as adaptability, generalizability, self-control, consistency, common sense, and causal reasoning. We believe that existing cognitive theories of human decision making, such as the thinking fast and slow theory, can provide insights on how to advance AI systems towards some of these capabilities. In this paper, we propose a general architecture that is based on fast/slow solvers and a metacognitive component. We then present experimental results on the behavior of an instance of this architecture, for AI systems that make decisions about navigating in a constrained environment. We show how combining the fast and slow decision modalities, which can be implemented by learning and reasoning components respectively, allows the system to evolve over time and gradually pass from slow to fast thinking with enough experience, and that this greatly helps in decision quality, resource consumption, and efficiency.
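Entry 10 describes an architecture combining fast and slow solvers under a metacognitive component. The schematic sketch below assumes one possible reading of that design: a metacognitive governor routes each decision to a cached fast solver when it is confident, otherwise to a slow deliberative solver whose result is cached, so the system shifts from slow to fast thinking with experience. Class names and logic are assumptions, not the paper's implementation.

```python
# Schematic fast/slow architecture with a metacognitive router.
# The solvers and routing rule are stand-ins assumed for this sketch.

import random

class FastSolver:                     # e.g., a learned policy / lookup
    def __init__(self):
        self.memory = {}              # state -> action learned from experience
    def confident(self, state):
        return state in self.memory
    def solve(self, state):
        return self.memory[state]

class SlowSolver:                     # e.g., planning / reasoning
    def solve(self, state):
        # Stand-in for an expensive search: pick the "closest" of four actions.
        return max(range(4), key=lambda a: -abs(state - a))

class MetacognitiveAgent:
    def __init__(self):
        self.fast, self.slow = FastSolver(), SlowSolver()
        self.slow_calls = 0
    def decide(self, state):
        if self.fast.confident(state):            # cheap check first
            return self.fast.solve(state)
        self.slow_calls += 1                      # fall back to costly reasoning
        action = self.slow.solve(state)
        self.fast.memory[state] = action          # cache for future fast use
        return action

agent = MetacognitiveAgent()
for s in (random.randint(0, 9) for _ in range(100)):
    agent.decide(s)
print("slow-solver invocations out of 100 decisions:", agent.slow_calls)
```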