Title: Fair and Efficient Allocation of Scarce Resources Based on Predicted Outcomes: Implications for Homeless Service Delivery
Artificial intelligence, machine learning, and algorithmic techniques in general provide two crucial abilities with the potential to improve decision-making in the allocation of scarce societal resources. First, they can flexibly and accurately model treatment response at the individual level, potentially allowing us to better match available resources to individuals. Second, they can reason simultaneously about the effects of matching sets of scarce resources to populations of individuals. In this work, we leverage these abilities to study the algorithmic allocation of scarce societal resources in the context of homelessness. In communities throughout the United States, there is constant demand for an array of homeless services intended to address different levels of need. Allocations of housing services must match households to appropriate services whose availability continuously fluctuates, and inefficient allocation can “waste” scarce resources: households that remain in need re-enter the homeless system, increasing the overall demand for homeless services. This complex allocation problem introduces novel technical and ethical challenges. Using administrative data from a regional homeless system, we formulate the problem of “optimal” allocation of resources given data on households in need of homeless services. The optimization problem aims to allocate available resources so that the predicted probabilities of household re-entry are minimized. The key element of this work is its use of a counterfactual prediction approach that predicts each household’s probability of re-entry into homeless services if assigned to each service. Through these counterfactual predictions, we find that this approach has the potential to improve the efficiency of the homeless system by reducing re-entry and, therefore, system-wide demand. However, efficiency comes with trade-offs: a significant fraction of households are assigned to services that increase their probability of re-entry. To address this issue, as well as the fairness considerations inherent in any context where resources are insufficient to meet demand, we discuss the efficiency, equity, and fairness issues that arise in our work and consider potential implications for homeless policies.
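As a rough illustration of the optimization described above, the capacity-constrained matching of households to services can be cast as an assignment problem over a matrix of counterfactual re-entry predictions. The sketch below is not the paper's actual formulation: the predictions `p`, the service capacities, and all numbers are placeholder assumptions, and SciPy's assignment solver stands in for whatever solver the authors use.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical inputs: p[i, j] is a model's predicted probability that
# household i re-enters homeless services if assigned service j, and
# capacity[j] is the number of available slots of service j.
rng = np.random.default_rng(0)
n_households, n_services = 6, 3
p = rng.uniform(0.05, 0.6, size=(n_households, n_services))
capacity = np.array([2, 3, 1])  # slots must cover all households

# Expand each service into one column per slot so the capacitated
# problem becomes a standard assignment problem.
cols = np.repeat(np.arange(n_services), capacity)
cost = p[:, cols]  # shape: n_households x total_slots

rows, slots = linear_sum_assignment(cost)
assignment = cols[slots]  # service chosen for each household, in row order
print(assignment, cost[rows, slots].sum())
```

The objective value printed at the end is the sum of predicted re-entry probabilities under the chosen matching, i.e., the expected number of re-entries this toy model would minimize.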
Award ID(s):
2127752 2127754 1939677
PAR ID:
10415667
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Journal of Artificial Intelligence Research
Volume:
76
ISSN:
1076-9757
Page Range / eLocation ID:
1219 to 1245
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract
    Objective: The study tests a community- and data-driven approach to homelessness prevention. Federal policies call for efficient and equitable local responses to homelessness. However, meeting the overwhelming demand for limited homeless assistance is challenging without empirically supported decision-making tools, and scarcity raises questions of whom to serve first.
    Materials and Methods: System-wide administrative records capture the delivery of an array of homeless services (prevention, shelter, short-term housing, supportive housing) and whether households reenter the system within 2 years. Counterfactual machine learning identifies which service most likely prevents reentry for each household. Based on community input, predictions are aggregated for subpopulations of interest (race/ethnicity, gender, families, youth, and health conditions) to generate transparent prioritization rules for whom to serve first. Simulations of households entering the system during the study period evaluate whether reallocating services based on the prioritization rules improves outcomes compared with services-as-usual.
    Results: Homelessness prevention benefited households who could access it, while differential effects exist for homeless households that partially align with community interests. Households with comorbid health conditions avoid homelessness most when provided longer-term supportive housing, and families with children fare best in short-term rentals. No additional differential effects existed for intersectional subgroups. Prioritization rules reduce community-wide homelessness in simulations. Moreover, prioritization mitigated observed reentry disparities for female and unaccompanied youth without excluding Black households and families with children.
    Discussion: Leveraging administrative records with machine learning supplements local decision-making and enables ongoing evaluation of data- and equity-driven homeless services.
    Conclusions: Community- and data-driven prioritization rules more equitably target scarce homeless resources.
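A minimal sketch of the aggregation step this abstract describes: per-household counterfactual predictions are averaged within community-chosen subgroups to yield a transparent ordering of whom a given service helps most. The column names, subgroups, and numbers below are illustrative assumptions, not the study's data.

```python
import pandas as pd

# Illustrative records: each row is a household with counterfactual
# reentry predictions under two hypothetical services.
df = pd.DataFrame({
    "subgroup":     ["family", "family", "youth", "youth", "health"],
    "p_supportive": [0.30, 0.35, 0.25, 0.20, 0.15],
    "p_short_term": [0.20, 0.25, 0.30, 0.35, 0.40],
})

# Per-subgroup average benefit of supportive housing over a short-term
# rental; a transparent prioritization rule could serve first the
# subgroups predicted to benefit most from each service.
df["benefit_supportive"] = df["p_short_term"] - df["p_supportive"]
rule = (df.groupby("subgroup")["benefit_supportive"]
          .mean()
          .sort_values(ascending=False))
print(rule)
```

In this toy data, the health-condition subgroup shows the largest predicted benefit from supportive housing while families show none, loosely mirroring the pattern the abstract reports.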
  2. We study critical systems that allocate scarce resources to satisfy basic needs, such as homeless services that provide housing. These systems often support communities disproportionately affected by systemic racial, gender, or other injustices, so it is crucial to design these systems with fairness considerations in mind. To address this problem, we propose a framework for evaluating fairness in contextual resource allocation systems that is inspired by fairness metrics in machine learning. This framework can be applied to evaluate the fairness properties of a historical policy, as well as to impose constraints in the design of new (counterfactual) allocation policies. Our work culminates with a set of incompatibility results that investigate the interplay between the different fairness metrics we propose. Notably, we demonstrate that: 1) fairness in allocation and fairness in outcomes are usually incompatible; 2) policies that prioritize based on a vulnerability score will usually result in unequal outcomes across groups, even if the score is perfectly calibrated; 3) policies using contextual information beyond what is needed to characterize baseline risk and treatment effects can be fairer in their outcomes than those using just baseline risk and treatment effects; and 4) policies using group status in addition to baseline risk and treatment effects are as fair as possible given all available information. Our framework can help guide the discussion among stakeholders in deciding which fairness metrics to impose when allocating scarce resources. 
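The first incompatibility result above (fairness in allocation versus fairness in outcomes) can be seen in a back-of-the-envelope calculation: equal allocation rates across two groups still produce unequal outcome rates whenever treatment responses differ by group. All numbers below are made up for illustration.

```python
# Two groups receive the resource at identical rates (fair allocation),
# but group-specific treatment responses differ, so expected reentry
# rates (the outcome) diverge anyway.
alloc_rate = {"A": 0.5, "B": 0.5}            # equal allocation
reentry_if_served = {"A": 0.10, "B": 0.30}   # group-specific response
reentry_if_not = {"A": 0.40, "B": 0.50}

outcome = {g: alloc_rate[g] * reentry_if_served[g]
              + (1 - alloc_rate[g]) * reentry_if_not[g]
           for g in alloc_rate}
print(outcome)  # {'A': 0.25, 'B': 0.40}: equal allocation, unequal outcomes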
  3. In many societal resource allocation domains, machine learning methods are increasingly used to score or rank agents in order to decide which ones should receive resources (e.g., homeless services) or scrutiny (e.g., child welfare investigations) from social services agencies. An agency’s scoring function typically operates on a feature vector that contains a combination of self-reported features and information available to the agency about individuals or households. This can create incentives for agents to misrepresent their self-reported features in order to receive resources or avoid scrutiny, but agencies may be able to selectively audit agents to verify the veracity of their reports. We study the problem of optimal auditing of agents in such settings. When decisions are made using a threshold on an agent’s score, the optimal audit policy has a surprisingly simple structure, uniformly auditing all agents who could benefit from lying. While this policy can, in general, be hard to compute because of the difficulty of identifying the set of agents who could benefit from lying given a complete set of reported types, we also present necessary and sufficient conditions under which it is tractable. We show that the scarce resource setting is more difficult, and exhibit an approximately optimal audit policy in this case. In addition, we show that, in either setting, verifying whether it is possible to incentivize exact truthfulness is hard even to approximate. However, we also exhibit sufficient conditions for solving this problem optimally, and for obtaining good approximations.
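A toy sketch of the threshold-based audit structure this abstract describes: uniformly audit every agent whose truthful score falls below the decision threshold but who could clear it with some feasible misreport. The scoring function, the two-feature setup, and the feasible report set are illustrative assumptions, not the paper's model.

```python
# Illustrative threshold and scoring function; not the paper's notation.
THRESHOLD = 0.5

def score(self_report: float, agency_data: float) -> float:
    # Toy score over one self-reported feature and one feature the
    # agency observes directly.
    return 0.6 * self_report + 0.4 * agency_data

def audit_set(agents, feasible_reports):
    """Uniformly audit every agent who could benefit from lying."""
    to_audit = []
    for a in agents:
        truthful = score(a["true_report"], a["agency_data"])
        best_lie = max(score(r, a["agency_data"]) for r in feasible_reports)
        # An agent benefits from lying iff their truthful score misses
        # the threshold but some feasible misreport clears it.
        if truthful < THRESHOLD <= best_lie:
            to_audit.append(a["id"])
    return to_audit

agents = [
    {"id": 1, "true_report": 0.2, "agency_data": 0.9},  # 0.48, can lie up to 0.96
    {"id": 2, "true_report": 0.9, "agency_data": 0.9},  # 0.90, already above threshold
]
print(audit_set(agents, feasible_reports=[0.0, 0.5, 1.0]))  # [1]
```

The hard part the abstract points to is hidden in `feasible_reports`: identifying which misreports are actually available to each agent, given all reported types, is what makes the exact policy expensive to compute in general.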
  4. We consider the allocation of scarce societal resources, where a central authority decides which individuals receive which resources under capacity or budget constraints. Several algorithmic fairness criteria have been proposed to guide these procedures, each quantifying a notion of local justice to ensure the allocation is aligned with the principles of the local institution making the allocation. For example, the efficient allocation maximizes overall social welfare, whereas the leximin assignment seeks to help the “neediest first.” Although the “price of fairness” (PoF) of leximin has been studied in prior work, we expand on these results by exploiting the structure inherent in real-world scenarios to provide tighter bounds. We further propose a novel criterion, which we term LoINC (leximin over individually normalized costs), that maximizes a different but commonly used notion of local justice: prioritizing those benefiting the most from receiving the resources. We derive analogous PoF bounds for LoINC, showing that the price of LoINC is typically much lower than that of leximin. We provide extensive experimental results using both synthetic data and in a real-world setting considering the efficacy of different homelessness interventions. These results show that the empirical PoF tends to be substantially lower than worst-case bounds would imply and allow us to characterize situations where the price of LoINC fairness can be high.
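A small worked example of the leximin-versus-efficiency tension and the price of fairness (PoF) discussed above, under made-up utilities: the welfare-maximizing assignment leaves one individual with nothing, while leximin lifts the worst-off at some cost to total welfare. LoINC would apply the same leximin comparison after normalizing each individual's costs, which this sketch omits.

```python
from itertools import permutations

# Toy utilities: u[i][j] is individual i's benefit from resource j.
# Each individual receives exactly one resource; all values made up.
u = [[1.0, 0.1],
     [0.6, 0.0]]

def utilities(assign):
    return [u[i][j] for i, j in enumerate(assign)]

allocs = list(permutations(range(2)))
# Efficient allocation: maximize total social welfare.
efficient = max(allocs, key=lambda a: sum(utilities(a)))
# Leximin: lexicographically maximize the sorted utility vector,
# i.e., help the worst-off first.
leximin = max(allocs, key=lambda a: sorted(utilities(a)))

# Price of fairness: efficient welfare divided by leximin welfare.
pof = sum(utilities(efficient)) / sum(utilities(leximin))
print(efficient, leximin, round(pof, 2))  # (0, 1) (1, 0) 1.43
```

Here the efficient assignment yields welfare 1.0 but utility 0.0 for the second individual; leximin trades total welfare (0.7) for a positive floor of 0.1, giving a PoF of about 1.43 in this contrived instance.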