Although machine learning (ML) algorithms are widely used to make decisions about individuals in various domains, concerns have arisen that (1) these algorithms are vulnerable to strategic manipulation and "gaming the algorithm"; and (2) ML decisions may exhibit bias against certain social groups. Existing works have largely examined these as two separate issues, e.g., by focusing on building ML algorithms robust to strategic manipulation, or on training a fair ML algorithm. In this study, we set out to understand the impact they each have on the other, and examine how to characterize fair policies in the presence of strategic behavior. The strategic interaction between a decision maker and individuals (as decision takers) is modeled as a two-stage (Stackelberg) game; when designing an algorithm, the former anticipates the latter may manipulate their features in order to receive more favorable decisions. We analytically characterize the equilibrium strategies of both, and examine how the algorithms and their resulting fairness properties are affected when the decision maker is strategic (anticipates manipulation), as well as the impact of fairness interventions on equilibrium strategies. In particular, we identify conditions under which anticipation of strategic behavior may mitigate/exacerbate unfairness, and conditions under which fairness interventions can serve as (dis)incentives for strategic manipulation.
The Paradox of Algorithms and Blame on Public Decision-makers
Abstract: Public decision-makers incorporate algorithmic decision aids, often developed by private businesses, into the policy process, in part, as a method for justifying difficult decisions. Ethicists have worried that over-trust in algorithmic advice, together with fear of punishment for departing from an algorithm's recommendation, will result in over-reliance and harm democratic accountability. We test these concerns in a set of two pre-registered survey experiments in the judicial context conducted on three representative U.S. samples. The results show no support for the hypothesized blame dynamics, regardless of whether the judge agrees or disagrees with the algorithm. Algorithms, moreover, do not have a significant impact relative to other sources of advice. Respondents who are generally more trusting of elites assign greater blame to the decision-maker when the decision-maker disagrees with the algorithm, and they assign more blame when they think the decision-maker is abdicating their responsibility by agreeing with an algorithm.
- Award ID(s): 2131504
- PAR ID: 10545286
- Publisher / Repository: Cambridge University Press
- Date Published:
- Journal Name: Business and Politics
- Volume: 26
- Issue: 2
- ISSN: 1469-3569
- Page Range / eLocation ID: 200 to 217
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Due to their unique persuasive power, language-capable robots must be able to both act in line with human moral norms and clearly and appropriately communicate those norms. These requirements are complicated by the possibility that humans may ascribe blame differently to humans and robots. In this work, we explore how robots should communicate in moral advising scenarios, in which the norms they are expected to follow (in a moral dilemma scenario) may be different from those their advisees are expected to follow. Our results suggest that, in fact, both humans and robots are judged more positively when they provide the advice that favors the common good over an individual's life. These results raise critical new questions regarding people's moral responses to robots and the design of autonomous moral agents.
-
Abstract: This paper develops the concept of flood problem framing to understand decision-makers' priorities in flood risk management in the Los Angeles Metropolitan Region in California (LA Metro). Problem frames shape an individual's preferences for particular management strategies and their future behaviors. While flooding is a complex, multifaceted problem, with multiple causes and multiple impacts, a decision-maker is most likely to manage only those dimensions of flooding about which they are aware or concerned. To evaluate flood decision-makers' primary concerns related to flood exposure, vulnerability, and management in the LA Metro, we draw on focus groups with flood control districts, city planners, nonprofit organizations, and other flood-related decision-makers. We identify numerous concerns, including concerns about specific types of floods (e.g., fluvial vs pluvial) and impacts to diverse infrastructure and communities. Our analyses demonstrate that flood concerns aggregate into three problem frames: one concerned with large fluvial floods exacerbated by climate change and their housing, economic, and infrastructure impacts; one concerned with pluvial nuisance flooding, pollution, and historic underinvestment in communities; and one concerned with coastal and fluvial flooding's ecosystem impacts. While each individual typically articulated concerns that overlapped with only one problem frame, each problem frame was discussed by numerous organization types, suggesting low barriers to cross-organizational coordination in flood planning and response. This paper also advances our understanding of flood risk perception in a region that does not face frequent large floods.

Significance Statement: This paper investigates the primary concerns that planners, flood managers, and other decision-makers have about flooding in Southern California. This is important because the way that decision-makers understand flooding shapes the way that they will plan for and respond to flood events. We find that some decision-makers are primarily concerned with large floods affecting large swaths of infrastructure and housing; others are concerned with frequent, small floods that mobilize pollution in low-income areas; and others are concerned with protecting coastal ecosystems during sea level rise. Our results also highlight key priorities for research and practice, including the need for flexible and accessible flood data and education about how to evacuate.
-
This paper studies algorithmic decision-making under humans' strategic behavior, where a decision maker uses an algorithm to make decisions about human agents, and the latter, with information about the algorithm, may strategically exert effort to improve and thereby receive favorable decisions. Unlike prior works that assume agents benefit from their efforts immediately, we consider realistic scenarios where the impacts of these efforts are persistent and agents benefit from efforts by making improvements gradually. We first develop a dynamic model to characterize persistent improvements and, based on this, construct a Stackelberg game to model the interplay between agents and the decision maker. We analytically characterize the equilibrium strategies and identify conditions under which agents have incentives to improve. With the dynamics, we then study how the decision maker can design an optimal policy to incentivize the largest improvements inside the agent population. We also extend the model to settings where 1) agents may be dishonest and game the algorithm into making favorable but erroneous decisions; 2) honest efforts are forgettable and not sufficient to guarantee persistent improvements. With the extended models, we further examine conditions under which agents prefer honest efforts over dishonest behavior and the impacts of forgettable efforts.
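The Stackelberg structure described in this abstract (and in the fairness abstract above) can be illustrated with a minimal sketch. All names, the threshold rule, and the linear manipulation cost are illustrative assumptions, not the papers' actual models: the decision maker leads by committing to an acceptance threshold, and each agent follows with a best response, manipulating their feature only when the benefit of acceptance outweighs the cost.

```python
# Minimal Stackelberg sketch (hypothetical cost model, not from the papers):
# the decision maker commits to a threshold first; agents best-respond.

def best_response(x, theta, cost_per_unit=1.0, benefit=2.0):
    """Agent's follower move: raise feature x to threshold theta
    only if the gain from acceptance exceeds the manipulation cost."""
    gap = theta - x
    if gap <= 0:
        return x          # already above threshold: no manipulation needed
    if gap * cost_per_unit <= benefit:
        return theta      # manipulate just enough to get accepted
    return x              # manipulation too costly: stay put

def strategic_threshold(naive_theta, cost_per_unit=1.0, benefit=2.0):
    """Decision maker's leader move: anticipating best_response, raise the
    naive threshold by the largest gap a manipulating agent can close, so
    only agents whose true feature met the naive bar can still cross it."""
    return naive_theta + benefit / cost_per_unit

# Agents just below a naive threshold of 1.0 manipulate up to it;
# agents too far below do not bother.
agents = [-5.0, 0.5, 1.2]
responses = [best_response(x, theta=1.0) for x in agents]
```

Under this toy model, raising the threshold to `strategic_threshold(1.0)` exactly cancels the manipulation budget, which mirrors the abstract's point that anticipating strategic behavior changes which agents receive favorable decisions, and hence the policy's fairness properties.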
-
Abstract: Shifts to hybrid work prompted by the COVID-19 pandemic have the potential to substantially impact social relationships at work. Hybrid employees rely heavily on digital collaboration technologies to communicate and share information. Therefore, employees' perceptions of the technologies are critical in shaping organizational networks. However, the dyadic-level misalignment in these perceptions may lead to relationship dissolution. To explore the social network consequences of hybrid work, we conducted a two-wave survey in a department of an industrial manufacturing firm (N = 169). Our results show that advice seekers were less likely to maintain their advice-seeking ties when they had a mismatch in ease-of-use perceptions of technology with their advisors. The effect was more substantial when advice seekers spent more time working remotely. The study provides empirical insights into how congruence in employees' perceptions of organizational communication technologies affects how they maintain advice networks during hybrid work.