Search for: All records

Award ID contains: 2127754


  1. Abstract

    Objective

    This study tests a community- and data-driven approach to homelessness prevention. Federal policies call for efficient and equitable local responses to homelessness. However, overwhelming demand for limited homeless assistance is difficult to manage without empirically supported decision-making tools and raises the question of whom to serve first with scarce resources.

    Materials and Methods

    System-wide administrative records capture the delivery of an array of homeless services (prevention, shelter, short-term housing, supportive housing) and whether households reenter the system within 2 years. Counterfactual machine learning identifies which service most likely prevents reentry for each household. Based on community input, predictions are aggregated for subpopulations of interest (race/ethnicity, gender, families, youth, and health conditions) to generate transparent prioritization rules for whom to serve first. Simulations of households entering the system during the study period evaluate how reallocating services according to these prioritization rules compares with services-as-usual (a minimal code sketch of the counterfactual step follows this abstract).

    Results

    Homelessness prevention benefited households who could access it, while differential service effects, partially aligned with community interests, existed among homeless households. Households with comorbid health conditions avoided homelessness most when provided longer-term supportive housing, and families with children fared best in short-term rentals. No additional differential effects existed for intersectional subgroups. Prioritization rules reduced community-wide homelessness in simulations. Moreover, prioritization mitigated observed reentry disparities for female and unaccompanied youth without excluding Black households and families with children.

    Discussion

    Leveraging administrative records with machine learning supplements local decision-making and enables ongoing evaluation of data- and equity-driven homeless services.

    Conclusions

    Community- and data-driven prioritization rules more equitably target scarce homeless resources.
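
    The Materials and Methods above describe a per-service counterfactual prediction step. Below is a minimal T-learner-style sketch of that idea in Python: one reentry model per service, then per-household predictions under every service. The column names ("service", "reentry_2yr", the feature list) and the choice of scikit-learn gradient boosting are illustrative assumptions, not the authors' actual pipeline.

        import pandas as pd
        from sklearn.ensemble import GradientBoostingClassifier

        SERVICES = ["prevention", "shelter", "short_term_housing", "supportive_housing"]
        FEATURES = ["household_size", "has_children", "health_condition", "prior_episodes"]

        def fit_counterfactual_models(records: pd.DataFrame) -> dict:
            """Fit one reentry model per service on the households that received it."""
            return {
                svc: GradientBoostingClassifier().fit(
                    records.loc[records["service"] == svc, FEATURES],
                    records.loc[records["service"] == svc, "reentry_2yr"],
                )
                for svc in SERVICES
            }

        def recommend_services(records: pd.DataFrame, models: dict) -> pd.DataFrame:
            """Predict P(reentry within 2 years) under every service; keep the minimizer."""
            preds = pd.DataFrame(
                {svc: m.predict_proba(records[FEATURES])[:, 1] for svc, m in models.items()},
                index=records.index,
            )
            out = records.copy()
            out["recommended_service"] = preds.idxmin(axis=1)
            out["predicted_reentry"] = preds.min(axis=1)
            return out

    Aggregating predicted_reentry by subgroup (a groupby on race/ethnicity, gender, family status, or health condition) is one way to turn these household-level counterfactuals into the transparent, community-vetted prioritization rules the abstract describes.
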
  2. Group-fair learning methods typically seek to ensure that some measure of prediction efficacy for (often historically) disadvantaged minority groups is comparable to that for the majority of the population. When a principal seeks to replace an existing approach with a group-fair one, the principal may face opposition from those who feel they may be harmed by the switch, and this, in turn, may deter adoption. We propose that a potential mitigation to this concern is to ensure that a group-fair model is also popular, in the sense that, for a majority of the target population, it yields a preferred distribution over outcomes compared with the conventional model. In this paper, we show that state-of-the-art fair learning approaches are often unpopular in this sense. We propose several efficient algorithms for postprocessing an existing group-fair learning scheme to improve its popularity while retaining fairness. Through extensive experiments, we demonstrate that the proposed postprocessing approaches are highly effective in practice.
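
    The abstract's notion of "popularity" is directly measurable: the share of the target population for whom the group-fair model's outcome is at least as desirable as the conventional model's. A minimal sketch, under the simplifying assumption of binary decisions where the positive outcome is preferred:

        import numpy as np

        def popularity(conventional: np.ndarray, fair: np.ndarray) -> float:
            """Fraction of individuals who weakly prefer the fair model's decision
            (1 = favorable outcome, 0 = unfavorable)."""
            return float((fair >= conventional).mean())

        # A fair model is "popular" in the paper's sense when a majority prefers it:
        # popularity(conventional_preds, fair_preds) > 0.5

    The paper's postprocessing algorithms would adjust the fair model's decisions to raise this quantity while keeping the group-fairness constraint satisfied; the sketch above captures only the quantity being optimized, not those algorithms.
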
  3. Artificial intelligence, machine learning, and algorithmic techniques in general provide two crucial abilities with the potential to improve decision-making in the context of allocation of scarce societal resources. They have the ability to flexibly and accurately model treatment response at the individual level, potentially allowing us to better match available resources to individuals. In addition, they have the ability to reason simultaneously about the effects of matching sets of scarce resources to populations of individuals. In this work, we leverage these abilities to study algorithmic allocation of scarce societal resources in the context of homelessness. In communities throughout the United States, there is constant demand for an array of homeless services intended to address different levels of need. Allocations of housing services must match households to appropriate services that continuously fluctuate in availability, while inefficiencies in allocation could "waste" scarce resources, as households remain in need and re-enter the homeless system, increasing the overall demand for homeless services. This complex allocation problem introduces novel technical and ethical challenges. Using administrative data from a regional homeless system, we formulate the problem of "optimal" allocation of resources given data on households with need for homeless services. The optimization problem aims to allocate available resources such that predicted probabilities of household re-entry are minimized. The key element of this work is its use of a counterfactual prediction approach that predicts household probabilities of re-entry into homeless services if assigned to each service. Through these counterfactual predictions, we find that this approach has the potential to improve the efficiency of the homeless system by reducing re-entry and, therefore, system-wide demand. However, efficiency comes with trade-offs: a significant fraction of households are assigned to services that increase their probability of re-entry. To address this issue, as well as the inherent fairness considerations present in any context where there are insufficient resources to meet demand, we discuss the efficiency, equity, and fairness issues that arise in our work and consider potential implications for homeless policies.
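
    The "optimal allocation" formulation lends itself to a small worked example: assign each household to one service slot so that the sum of predicted re-entry probabilities is minimized, subject to per-service capacities. The counterfactual matrix below is random placeholder data standing in for model predictions, and the capacities and service names are likewise invented.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(0)
        services = ["prevention", "shelter", "short_term", "supportive"]
        capacity = [3, 3, 1, 1]                     # slots per service (sums to 8)
        p_reentry = rng.uniform(0.1, 0.9, size=(8, len(services)))  # households x services

        # Expand each service into `capacity` identical slot columns, then solve
        # the resulting assignment problem exactly.
        slot_service = np.repeat(np.arange(len(services)), capacity)
        cost = p_reentry[:, slot_service]           # households x slots
        rows, cols = linear_sum_assignment(cost)

        for h, slot in zip(rows, cols):
            print(f"household {h} -> {services[slot_service[slot]]}"
                  f" (predicted reentry {cost[h, slot]:.2f})")

    In practice capacities fluctuate and households arrive over time, so an online or rolling-horizon variant of this one-shot assignment would be needed; the sketch shows only the core matching step.
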
  4. The increasing automation of high-stakes decisions with direct impact on the lives and well-being of individuals raises a number of important considerations. Prominent among these is strategic behavior by individuals hoping to achieve a more desirable outcome. Two forms of such behavior are commonly studied: 1) misreporting of individual attributes, and 2) recourse, or actions that truly change such attributes. The former involves deception, and is inherently undesirable, whereas the latter may well be a desirable goal insofar as it changes true individual qualification. We study misreporting and recourse as strategic choices by individuals within a unified framework. In particular, we propose auditing as a means to incentivize recourse actions over attribute manipulation, and characterize optimal audit policies for two types of principals, utility-maximizing and recourse-maximizing. Additionally, we consider subsidies as an incentive for recourse over manipulation, and show that even a utility-maximizing principal would be willing to devote a considerable amount of audit budget to providing such subsidies. Finally, we consider the problem of optimizing fines for failed audits, and bound the total cost incurred by the population as a result of audits. 
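
    The audit mechanism can be illustrated with a toy best-response calculation: given audit probability p and a fine for failed audits, a strategic agent compares the expected payoff of manipulation (which an audit exposes) against recourse (which survives one). All payoffs and costs below are invented parameters chosen only to make the trade-off visible, not values from the paper.

        def agent_best_response(p: float, benefit=10.0, cost_manip=1.0,
                                cost_recourse=4.0, fine=8.0) -> str:
            """Return the agent's utility-maximizing action under audit probability p."""
            utilities = {
                "nothing": 0.0,
                # Manipulation pays off only if not audited; a failed audit adds a fine.
                "manipulate": (1 - p) * benefit - cost_manip - p * fine,
                # Recourse truly changes the attribute, so auditing does not undo it.
                "recourse": benefit - cost_recourse,
            }
            return max(utilities, key=utilities.get)

        # Smallest audit probability that steers this agent from manipulation to recourse:
        for p in (i / 100 for i in range(101)):
            if agent_best_response(p) == "recourse":
                print(f"recourse becomes the best response at p = {p:.2f}")
                break

    With these numbers the switch happens near p = 0.17; a utility-maximizing principal would weigh that audit cost against the value of truthful reports, which is the kind of optimization the paper characterizes.
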
  5. The use of algorithmic decision making systems in domains which impact the financial, social, and political well-being of people has created a demand for these to be "fair" under some accepted notion of equity. This demand has in turn inspired a large body of work focused on the development of fair learning algorithms which are then used in lieu of their conventional counterparts. Most analysis of such fair algorithms proceeds from the assumption that the people affected by the algorithmic decisions are represented as immutable feature vectors. However, strategic agents may possess both the ability and the incentive to manipulate this observed feature vector in order to attain a more favorable outcome. We explore the impact that strategic agent behavior can have on group-fair classification. We find that in many settings strategic behavior can lead to fairness reversal, with a conventional classifier exhibiting higher fairness than a classifier trained to satisfy group fairness. Further, we show that fairness reversal occurs as a result of a group-fair classifier becoming more selective, achieving fairness largely by excluding individuals from the advantaged group. In contrast, if group fairness is achieved by the classifier becoming more inclusive, fairness reversal does not occur.
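
    A simulation scaffold makes the phenomenon concrete: generate group-labeled scores, let agents within a manipulation budget misreport up to a decision threshold, and re-measure a group-fairness metric such as the demographic parity gap. The score distributions, thresholds, and budget below are illustrative assumptions; whether the gap moves enough to reverse the fair/conventional ordering depends on the data and classifiers plugged in.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 10_000
        group = rng.integers(0, 2, size=n)          # 0 = advantaged, 1 = disadvantaged
        score = rng.normal(np.where(group == 0, 0.6, 0.4), 0.15)

        def parity_gap(scores: np.ndarray, threshold: float) -> float:
            """Absolute difference in acceptance rates between the two groups."""
            accept = scores >= threshold
            return abs(accept[group == 0].mean() - accept[group == 1].mean())

        def manipulate(scores: np.ndarray, threshold: float, budget: float = 0.05):
            """Agents within `budget` below the threshold misreport up to it."""
            bumped = scores.copy()
            near = (scores < threshold) & (scores >= threshold - budget)
            bumped[near] = threshold
            return bumped

        for name, t in [("conventional threshold", 0.50), ("more selective threshold", 0.55)]:
            print(f"{name}: gap before = {parity_gap(score, t):.3f},"
                  f" after = {parity_gap(manipulate(score, t), t):.3f}")
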