

Title: Fair Algorithms for Learning in Allocation Problems
Settings such as lending and policing can be modeled by a centralized agent allocating a scarce resource (e.g. loans or police officers) amongst several groups, in order to maximize some objective (e.g. loans given that are repaid, or criminals that are apprehended). Often in such problems fairness is also a concern. One natural notion of fairness, based on general principles of equality of opportunity, asks that conditional on an individual being a candidate for the resource in question, the probability of actually receiving it is approximately independent of the individual’s group. For example, in lending this would mean that equally creditworthy individuals in different racial groups have roughly equal chances of receiving a loan. In policing it would mean that two individuals committing the same crime in different districts would have roughly equal chances of being arrested. In this paper, we formalize this general notion of fairness for allocation problems and investigate its algorithmic consequences. Our main technical results include an efficient learning algorithm that converges to an optimal fair allocation even when the allocator does not know the frequency of candidates (i.e. creditworthy individuals or criminals) in each group. This algorithm operates in a censored feedback model in which only the number of candidates who received the resource in a given allocation can be observed, rather than the true number of candidates in each group. This models the fact that we do not learn the creditworthiness of individuals we do not give loans to and do not learn about crimes committed if the police presence in a district is low.
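
To make the censored feedback model concrete, the following is a minimal simulation sketch of the observation process described above; it is not the paper's learning algorithm, and the two-group setup, candidate rates, group sizes, and hypergeometric sampling are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not from the paper): two groups with latent
# candidate rates that the allocator never observes directly.
TRUE_RATES = [0.6, 0.3]   # fraction of each group who are candidates
GROUP_SIZES = [100, 100]  # individuals per group

def censored_feedback(allocation):
    """One allocation round. Returns, per group, only the number of
    candidates among the individuals the resource reached -- never the
    true number of candidates in the group."""
    observed = []
    for alloc, rate, size in zip(allocation, TRUE_RATES, GROUP_SIZES):
        candidates = rng.binomial(size, rate)  # latent; never revealed
        reached = min(alloc, size)             # resource reaches `reached`
        if reached == 0:                       # individuals, chosen uniformly
            observed.append(0)
            continue
        hits = rng.hypergeometric(candidates, size - candidates, reached)
        observed.append(int(hits))
    return observed

# e.g. 50 units to group 0, 30 to group 1: the allocator sees counts such
# as [31, 9] and must learn the latent rates from feedback like this alone.
print(censored_feedback([50, 30]))
```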
Award ID(s):
1763307
NSF-PAR ID:
10100408
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM FAT* 2019
Page Range / eLocation ID:
170 to 179
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Jackson, Jonathan (Ed.)
    Explanations for police misconduct often center on a narrow notion of “problem officers,” the proverbial “bad apples.” Such an individualistic approach not only ignores the larger systemic problems of policing but also takes for granted the group-based nature of police work. Nearly all police work is group-based, and officers’ formal and informal networks can impact behavior, including misconduct. In extreme cases, groups of officers (what we refer to as “crews”) have even been observed to coordinate their abusive and even criminal behaviors. This study adopts a social network and machine learning approach to empirically investigate the presence and impact of officer crews engaging in alleged misconduct in a major U.S. city: Chicago, IL. Using data on Chicago police officers between 1971 and 2018, we identify potential crews and analyze their impact on alleged misconduct and violence. The analysis detected approximately 160 possible crews, comprising less than 4% of all Chicago police officers. Officers in these crews were involved in an outsized amount of alleged and actual misconduct, accounting for approximately 25% of all use-of-force complaints, city payouts for civil and criminal litigation, and police-involved shootings. The detected crews also contributed to racial disparities in arrests and civilian complaints, generating nearly 18% of all complaints filed by Black Chicagoans and 14% of complaints filed by Hispanic Chicagoans.
  2. We consider the allocation of scarce societal resources, where a central authority decides which individuals receive which resources under capacity or budget constraints. Several algorithmic fairness criteria have been proposed to guide these procedures, each quantifying a notion of local justice to ensure the allocation is aligned with the principles of the local institution making the allocation. For example, the efficient allocation maximizes overall social welfare, whereas the leximin assignment seeks to help the “neediest first.” Although the “price of fairness” (PoF) of leximin has been studied in prior work, we expand on these results by exploiting the structure inherent in real-world scenarios to provide tighter bounds. We further propose a novel criterion – which we term LoINC (leximin over individually normalized costs) – that maximizes a different but commonly used notion of local justice: prioritizing those benefiting the most from receiving the resources. We derive analogous PoF bounds for LoINC, showing that the price of LoINC is typically much lower than that of leximin. We provide extensive experimental results using both synthetic data and in a real-world setting considering the efficacy of different homelessness interventions. These results show that the empirical PoF tends to be substantially lower than worst-case bounds would imply and allow us to characterize situations where the price of LoINC fairness can be high. 
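    As a toy illustration of how such criteria can disagree, the sketch below compares an efficient rule, leximin, and a LoINC-style rule on a made-up three-person instance. The benefit table, the capacity of one, and the "normalize by one's own baseline" reading of LoINC are assumptions for illustration, not the paper's exact model.

    ```python
    from itertools import combinations

    # Toy instance (invented numbers): benefit[i][o] is the benefit individual
    # i receives under option o, with "none" the no-intervention baseline.
    benefit = {
        "ann":  {"none": 0.2, "help": 0.9},
        "bob":  {"none": 0.5, "help": 0.7},
        "cara": {"none": 0.1, "help": 0.4},
    }
    CAPACITY = 1  # budget: only one individual can receive "help"

    def outcome(helped):
        return {i: benefit[i]["help" if i in helped else "none"] for i in benefit}

    def pick(score):
        # Enumerate all feasible allocations; keep the best under `score`.
        return max(combinations(benefit, CAPACITY), key=lambda h: score(outcome(h)))

    efficient = pick(lambda out: sum(out.values()))   # max total welfare
    leximin = pick(lambda out: sorted(out.values()))  # lift the worst-off first
    # LoINC-style (assumed reading): leximin over each individual's outcome
    # normalized by their own baseline, prioritizing the largest relative gain.
    loinc = pick(lambda out: sorted(out[i] / benefit[i]["none"] for i in out))

    print(efficient, leximin, loinc)  # here: ('ann',) ('cara',) ('ann',)
    ```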
    We propose a novel formulation of group fairness with biased feedback in the contextual multi-armed bandit (CMAB) setting. In the CMAB setting, a sequential decision maker must, at each time step, choose an arm to pull from a finite set of arms after observing some context for each of the potential arm pulls. In our model, arms are partitioned into two or more sensitive groups based on some protected feature(s) (e.g., age, race, or socio-economic status). Initial rewards received from pulling an arm may be distorted due to some unknown societal or measurement bias. We assume that in reality these groups are equal despite the biased feedback received by the agent. To correct for this, we learn a societal bias term that can be used both to find the source of bias and to potentially fix the problem outside of the algorithm. We provide a novel algorithm that can accommodate this notion of fairness for an arbitrary number of groups, and we provide a theoretical bound on the regret of our algorithm. We validate our algorithm using synthetic data and two real-world datasets for intervention settings in which we want to allocate resources fairly across groups.
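    The idea of learning a bias term can be sketched in a few lines. The following is a simplified UCB-style illustration, not the paper's algorithm: two arms (groups) share the same true mean, one arm's feedback is distorted by an unknown additive bias, and under the equal-groups assumption the gap between empirical group means serves as a bias estimate. All constants are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Invented setup: two groups whose true mean rewards are equal (the
    # model's equal-groups assumption), but group 1's feedback is distorted
    # downward by an unknown additive bias.
    TRUE_MEAN, BIAS = 0.7, -0.3
    counts, sums = np.zeros(2), np.zeros(2)

    for t in range(1, 2001):
        means = sums / np.maximum(counts, 1)
        # Under the equal-groups assumption, each group's deviation from the
        # pooled mean estimates its share of the societal bias.
        bias_hat = means - means.mean()
        # UCB on the debiased means; the correction keeps the distorted
        # group from being starved of pulls.
        ucb = (means - bias_hat) + np.sqrt(2 * np.log(t + 1) / np.maximum(counts, 1))
        arm = int(np.argmax(ucb))
        reward = rng.normal(TRUE_MEAN + (BIAS if arm == 1 else 0.0), 0.1)
        counts[arm] += 1
        sums[arm] += reward

    means = sums / counts
    print("pulls per group:", counts)               # roughly balanced
    print("estimated bias:", means - means.mean())  # ~ [+0.15, -0.15]
    ```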
    Predictive policing, the practice of using algorithmic systems to forecast crime, is heralded by police departments as the new frontier of crime analysis. At the same time, it is opposed by civil rights groups, academics, and media outlets for being ‘biased’ and therefore discriminatory against communities of color. This paper argues that the prevailing focus on racial bias has overshadowed two normative factors that are essential to a full assessment of the moral permissibility of predictive policing: fairness in the social distribution of the benefits and burdens of policing as well as the distinctive role of consent in determining fair distribution. When these normative factors are given their due attention, several requirements emerge for the fair implementation of predictive policing. Among these requirements are that police departments inform and solicit buy-in from affected communities about strategic decision-making and that departments favor non-enforcement-oriented interventions.
    We consider the problem of dividing limited resources to individuals arriving over T rounds. Each round has a random number of individuals arrive, and individuals can be characterized by their type (i.e., preferences over the different resources). A standard notion of fairness in this setting is that an allocation simultaneously satisfy envy-freeness and efficiency. The former is an individual guarantee, requiring that each agent prefers the agent’s own allocation over the allocation of any other; in contrast, efficiency is a global property, requiring that the allocations clear the available resources. For divisible resources, when the number of individuals of each type are known up front, the desiderata are simultaneously achievable for a large class of utility functions. However, in an online setting when the number of individuals of each type are only revealed round by round, no policy can guarantee these desiderata simultaneously, and hence, the best one can do is to try and allocate so as to approximately satisfy the two properties. We show that, in the online setting, the two desired properties (envy-freeness and efficiency) are in direct contention in that any algorithm achieving additive counterfactual envy-freeness up to a factor of L_T necessarily suffers an efficiency loss of at least [Formula: see text]. We complement this uncertainty principle with a simple algorithm, Guarded-Hope, which allocates resources based on an adaptive threshold policy and is able to achieve any fairness–efficiency point on this frontier. Our results provide guarantees for fair online resource allocation with high probability for multiple-resource and multiple-type settings. In simulation results, our algorithm provides allocations close to the optimal fair solution in hindsight, motivating its use in practical applications as the algorithm is able to adapt to any desired fairness–efficiency trade-off. Funding: This work was supported by the National Science Foundation [Grants ECCS-1847393, DMS-1839346, CCF-1948256, and CNS-1955997] and the Army Research Laboratory [Grant W911NF-17-1-0094]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/opre.2022.2397.
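    To give a feel for the adaptive-threshold idea, here is a single-resource toy sketch in the spirit of the Guarded-Hope description above; it is not the paper's algorithm, and the Poisson arrivals, horizon, and budget are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Invented single-resource instance: a fixed budget is divided among
    # individuals arriving over T rounds with unknown (here: Poisson) demand.
    T, BUDGET, MEAN_ARRIVALS = 50, 100.0, 4.0
    remaining = BUDGET

    for t in range(T):
        arrivals = rng.poisson(MEAN_ARRIVALS)
        expected_future = MEAN_ARRIVALS * (T - t - 1)
        # Adaptive threshold: set today's per-person level by the budget we
        # expect to need for future arrivals, so generosity now does not
        # leave later (equally entitled) arrivals empty-handed.
        level = remaining / max(arrivals + expected_future, 1.0)
        spend = min(level * arrivals, remaining)
        remaining -= spend

    print(f"leftover budget: {remaining:.2f}")  # near 0: allocations clear
    ```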