Title: Algorithmic Transparency and Accountability through Crowdsourcing: A Study of the NYC School Admission Lottery
Algorithms are used to aid decision-making for a wide range of public policy decisions. Yet, the details of the algorithmic processes and how to interact with their systems are often inadequately communicated to stakeholders, leaving them frustrated and distrusting of the outcomes of the decisions. Transparency and accountability are critical prerequisites for building trust in the results of decisions and guaranteeing fair and equitable outcomes. Unfortunately, organizations and agencies do not have strong incentives to explain and clarify their decision processes; however, stakeholders are not powerless and can strategically combine their efforts to push for more transparency. In this paper, I discuss the results and lessons learned from such an effort: a parent-led crowdsourcing campaign to increase transparency in the New York City school admission process. NYC famously uses a deferred-acceptance matching algorithm to assign students to schools, but families are given very little, and often wrong, information on the mechanisms of the system in which they have to participate. Furthermore, the odds of matching to specific schools depend on a complex set of priority rules and tie-breaking random (lottery) numbers, whose impact on the outcome is not made clear to students and their families, resulting in many “wasted choices” on students’ ranked lists and a high rate of unmatched students. Using the results of a crowdsourced survey of school application results, I was able to explain how random tie-breakers factored into admissions, adding clarity and transparency to the process. The results highlighted several issues and inefficiencies in the match and made the case for the need for more accountability and verification in the system.
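The mechanism described above, student-proposing deferred acceptance with priority groups and a single random lottery number per student, can be sketched in a few lines. This is a minimal illustrative implementation, not the NYC Department of Education's actual code; the data structures (`student_prefs`, `capacities`, `priorities`) and the fallback priority group `99` for unlisted students are assumptions for the example.

```python
import random

def deferred_acceptance(student_prefs, capacities, priorities, seed=0):
    """Student-proposing deferred acceptance with random lottery tie-breakers.

    student_prefs: dict student -> ranked list of schools
    capacities:    dict school  -> number of seats
    priorities:    dict school  -> dict student -> priority group (lower = better)
    Ties within a priority group are broken by one lottery number per student,
    as in the NYC match (a "single tie-breaker" design).
    """
    rng = random.Random(seed)
    lottery = {s: rng.random() for s in student_prefs}   # one number per student
    next_choice = {s: 0 for s in student_prefs}          # pointer into ranked list
    tentative = {school: [] for school in capacities}    # current tentative holds
    unmatched = list(student_prefs)

    while unmatched:
        student = unmatched.pop()
        prefs = student_prefs[student]
        if next_choice[student] >= len(prefs):
            continue  # ranked list exhausted: the student stays unmatched
        school = prefs[next_choice[student]]
        next_choice[student] += 1
        tentative[school].append(student)
        # rank applicants by priority group first, lottery number second
        tentative[school].sort(key=lambda s: (priorities[school].get(s, 99),
                                              lottery[s]))
        while len(tentative[school]) > capacities[school]:
            unmatched.append(tentative[school].pop())  # displace the worst held

    return {st: sch for sch, held in tentative.items() for st in held}
```

Because acceptances are only tentative until the loop ends, a student with a high priority can still displace an earlier applicant, which is what makes the mechanism strategy-proof for students and explains why listing a long, honest ranking (rather than "wasting choices" on guesses about odds) is the intended behavior.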
Award ID(s):
2218975
PAR ID:
10437955
Journal Name:
FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency
Page Range / eLocation ID:
434 to 443
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Engineering Projects in Community Service (EPICS) High utilizes human-centered design processes to teach high school students how to develop solutions to real-world problems within their communities. The goals of EPICS High are to apply principles from both engineering and social entrepreneurship to engage high and middle school students as problem-solvers and spark interest in STEM careers. Recently, the Cisco corporate advised fund at the Silicon Valley Community Foundation granted Arizona State University funds to expand EPICS High to underrepresented students and study the student outcomes from participation in this innovative program. In this exploratory study we combined qualitative methods—in-person observations and informal interviews—along with pre and post surveys with high school students, to answer the questions: What skills do students gain and how does their mindset about engineering entrepreneurship develop through participation in EPICS High? Research took place in Title I schools (meaning they have a high number of students from low-income families) as well as non-Title I schools. Our preliminary results show that students made gains in the following areas: their attitudes toward engineering; ability to improve upon existing ideas; incorporating stakeholders; overcoming obstacles; social responsibility; and appreciation of multiple perspectives when solving engineering problems. While males have better baseline scores for most measures, females tend to have the most growth in many of these areas. We conclude that these initial measures show positive outcomes for students participating in EPICS High, and provide questions for further research.
  2. Research exploring how to support decision-making has often used machine learning to automate or assist human decisions. We take an alternative approach for improving decision-making, using machine learning to help stakeholders surface ways to improve and make fairer decision-making processes. We created "Deliberating with AI", a web tool that enables people to create and evaluate ML models in order to examine strengths and shortcomings of past decision-making and deliberate on how to improve future decisions. We apply this tool to a context of people selection, having stakeholders---decision makers (faculty) and decision subjects (students)---use the tool to improve graduate school admission decisions. Through our case study, we demonstrate how the stakeholders used the web tool to create ML models that they used as boundary objects to deliberate over organizational decision-making practices. We share insights from our study to inform future research on stakeholder-centered participatory AI design and technology for organizational decision-making.
  3. Emerging methods for participatory algorithm design have proposed collecting and aggregating individual stakeholders’ preferences to create algorithmic systems that account for those stakeholders’ values. Drawing on two years of research across two public school districts in the United States, we study how families and school districts use students’ preferences for schools to meet their goals in the context of algorithmic student assignment systems. We find that the design of the preference language, i.e. the structure in which participants must express their needs and goals to the decision-maker, shapes the opportunities for meaningful participation. We define three properties of preference languages – expressiveness, cost, and collectivism – and discuss how these factors shape who is able to participate, and the extent to which they are able to effectively communicate their needs to the decision-maker. Reflecting on these findings, we offer implications and paths forward for researchers and practitioners who are considering applying a preference-based model for participation in algorithmic decision making. 
  4. Montessori pedagogy is a century-old, whole-school system increasingly used in the public sector. In the United States, public Montessori schools are typically Title I schools that mostly serve children of color. The present secondary, exploratory data analysis examined outcomes of 134 children who entered a lottery for admission to public Montessori schools in the northeastern United States at age 3; half were admitted and enrolled and the rest enrolled at other preschool programs. About half of the children were identified as White, and half were identified as African American, Hispanic, or multiracial. Children were tested in the fall when they enrolled and again in the subsequent three springs (i.e., through the kindergarten year) on a range of measures addressing academic outcomes, executive function, and social cognition. Although the Black, Hispanic, and multiracial group tended to score lower in the beginning of preschool in both conditions, by the end of preschool, the scores of Black, Hispanic, and multiracial students enrolled in Montessori schools were not different from the White children; by contrast, such students in the business-as-usual schools continued to perform less well than White children in academic achievement and social cognition. The study has important limitations that lead us to view these findings as exploratory, but taken together with other findings, the results suggest that Montessori education may create an environment that is more conducive to racial and ethnic parity than other school environments. 
  5. Pham, Tien; Solomon, Latasha; Hohil, Myron E. (Ed.)
    Explainable Artificial Intelligence (XAI) is the capability of explaining the reasoning behind the choices made by a machine learning (ML) algorithm, which can help understand and maintain the transparency of the decision-making capability of the ML algorithm. Humans make thousands of decisions every day, and for every decision an individual makes, they can explain the reasons behind the choices they made. The same is not true of ML and AI systems. XAI was not widely researched until recently, but it has become one of the most relevant topics in AI for trustworthy and transparent outcomes. XAI tries to provide maximum transparency to an ML algorithm by answering questions about how models effectively came up with the output. ML models with XAI will have the ability to explain the rationale behind the results, understand the weaknesses and strengths of the learning models, and show how the models will behave in the future. In this paper, we investigate XAI for algorithmic trustworthiness and transparency. We evaluate XAI using some example use cases and by using the SHAP (SHapley Additive exPlanations) library, visualizing the effect of features individually and cumulatively in the prediction process.
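    The quantity the SHAP library estimates for each feature is its Shapley value: the feature's average marginal contribution to the model output over all coalitions of the other features. A minimal pure-Python sketch of the exact computation (feasible only for a handful of features; the `value` payoff function and feature names here are hypothetical, and real SHAP uses model-specific approximations instead of full enumeration):

```python
from itertools import combinations
from math import factorial

def shapley_values(value, features):
    """Exact Shapley values for a coalition payoff function.

    value(frozenset_of_features) -> model output when only those features are
    "present" (the rest fixed at a baseline). SHAP approximates this quantity
    for real models; here we enumerate every coalition exactly.
    """
    n = len(features)
    phi = {}
    for i in features:
        rest = [f for f in features if f != i]
        total = 0.0
        for k in range(n):                      # coalition sizes 0 .. n-1
            for subset in combinations(rest, k):
                S = frozenset(subset)
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (value(S | {i}) - value(S))  # marginal contribution
        phi[i] = total
    return phi
```

    By the efficiency property, the values sum to the difference between the full-model output and the baseline, which is why SHAP plots can decompose a single prediction feature by feature.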