Equity is core to sustainability, but current interventions to enhance sustainability often fall short in adequately addressing this linkage. Models are important tools for informing action, and their development and use present opportunities to center equity in process and outcomes. This Perspective highlights progress in integrating equity into systems modeling in sustainability science, as well as key challenges, tensions, and future directions. We present a conceptual framework for equity in systems modeling, focused on its distributional, procedural, and recognitional dimensions. We discuss examples of how modelers engage with these different dimensions throughout the modeling process and from across a range of modeling approaches and topics, including water resources, energy systems, air quality, and conservation. Synthesizing across these examples, we identify significant advances in enhancing procedural and recognitional equity by reframing models as tools to explore pluralism in worldviews and knowledge systems; enabling models to better represent distributional inequity through new computational techniques and data sources; investigating the dynamics that can drive inequities by linking different modeling approaches; and developing more nuanced metrics for assessing equity outcomes. We also identify important future directions, such as an increased focus on using models to identify pathways to transform underlying conditions that lead to inequities and move toward desired futures. By looking at examples across the diverse fields within sustainability science, we argue that there are valuable opportunities for mutual learning on how to use models more effectively as tools to support sustainable and equitable futures.
Embed systemic equity throughout industrial ecology applications: How to address machine learning unfairness and bias
Abstract Recent calls have been made for equity tools and frameworks to be integrated throughout the research and design life cycle —from conception to implementation—with an emphasis on reducing inequity in artificial intelligence (AI) and machine learning (ML) applications. Simply stating that equity should be integrated throughout, however, leaves much to be desired as industrial ecology (IE) researchers, practitioners, and decision‐makers attempt to employ equitable practices. In this forum piece, we use a critical review approach to explain how socioecological inequities emerge in ML applications across their life cycle stages by leveraging the food system. We exemplify the use of a comprehensive questionnaire to delineate unfair ML bias across data bias, algorithmic bias, and selection and deployment bias categories. Finally, we provide consolidated guidance and tailored strategies to help address AI/ML unfair bias and inequity in IE applications. Specifically, the guidance and tools help to address sensitivity, reliability, and uncertainty challenges. There is also discussion on how bias and inequity in AI/ML affect other IE research and design domains, besides the food system—such as living labs and circularity. We conclude with an explanation of the future directions IE should take to address unfair bias and inequity in AI/ML. Last, we call for systemic equity to be embedded throughout IE applications to fundamentally understand domain‐specific socioecological inequities, identify potential unfairness in ML, and select mitigation strategies in a manner that translates across different research domains.
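The questionnaire the abstract describes is not reproduced here, but the data-bias category it probes can be made concrete with a small check of group representation in a training set. This is an illustrative sketch only (the function name and sample data are hypothetical, not from the paper):

```python
from collections import Counter

def representation_disparity(groups):
    """Ratio of the least- to most-represented group in a dataset.

    A value near 1.0 means balanced representation; values near 0
    signal a data bias that can propagate into any model trained
    on the dataset.
    """
    counts = Counter(groups)
    return min(counts.values()) / max(counts.values())

# Toy records for a hypothetical food-system dataset: each entry is
# the demographic group of the household a record describes.
sample = ["urban"] * 80 + ["rural"] * 20
print(representation_disparity(sample))  # 0.25
```

A disparity well below 1.0 would prompt the mitigation strategies the paper discusses (reweighting, targeted data collection) before training proceeds.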
- Award ID(s):
- 2236080
- PAR ID:
- 10529435
- Publisher / Repository:
- Springer Science + Business Media
- Date Published:
- Journal Name:
- Journal of Industrial Ecology
- Volume:
- 28
- Issue:
- 6
- ISSN:
- 1088-1980
- Format(s):
- Medium: X
- Size(s):
- p. 1362-1376
- Sponsoring Org:
- National Science Foundation
More Like this
-
Hoadley, C.; Wang, X. C. (Eds.) The present study examined teachers' conceptualization of the role of AI in addressing inequity. Grounded in speculative design and education, we examined eight secondary public school teachers' thinking about AI in teaching and learning that may go beyond present horizons. Data were collected from individual interviews. Findings suggest that not only equity consciousness but also present engagement in contexts of inequity was crucial to future dreaming of AI that does not harm but improves equity.
-
Abstract In response to Li, Reigh, He, and Miller's commentary, "Can we and should we use artificial intelligence for formative assessment in science?", we argue that artificial intelligence (AI) is already being widely employed in formative assessment across various educational contexts. While agreeing with Li et al.'s call for further studies on equity issues related to AI, we emphasize the need for science educators to adapt to the AI revolution that has outpaced the research community. We challenge the somewhat restrictive view of formative assessment presented by Li et al., highlighting the significant contributions of AI in providing formative feedback to students, assisting teachers in assessment practices, and aiding in instructional decisions. We contend that AI‐generated scores should not be equated with the entirety of formative assessment practice; no single assessment tool can capture all aspects of student thinking and backgrounds. We address concerns raised by Li et al. regarding AI bias and emphasize the importance of empirical testing and evidence‐based arguments in referring to bias. We assert that AI‐based formative assessment does not necessarily lead to inequity and can, in fact, contribute to more equitable educational experiences. Furthermore, we discuss how AI can facilitate the diversification of representational modalities in assessment practices and highlight the potential benefits of AI in saving teachers' time and providing them with valuable assessment information. We call for a shift in perspective, from viewing AI as a problem to be solved to recognizing its potential as a collaborative tool in education. We emphasize the need for future research to focus on the effective integration of AI in classrooms, teacher education, and the development of AI systems that can adapt to diverse teaching and learning contexts. We conclude by underlining the importance of addressing AI bias, understanding its implications, and developing guidelines for best practices in AI‐based formative assessment.
-
Machine Learning (ML) algorithms are increasingly used in our daily lives, yet often exhibit discrimination against protected groups. In this talk, I discuss the growing concern of bias in ML and overview existing approaches to address fairness issues. Then, I present three novel approaches developed by my research group. The first leverages generative AI to eliminate biases in training datasets, the second tackles non-convex problems that arise in fair learning, and the third introduces a matrix decomposition-based post-processing approach to identify and eliminate unfair model components.
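The talk's three approaches are not spelled out in the abstract. As a rough illustration of the general idea behind fairness post-processing — a generic per-group threshold adjustment, not the matrix-decomposition method the talk describes — one can equalize positive-prediction rates across groups after training (all names and data below are illustrative):

```python
def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates (two-group case)."""
    rates = {}
    for g in set(groups):
        g_preds = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(g_preds) / len(g_preds)
    a, b = rates.values()
    return abs(a - b)

def equalize_positive_rate(scores, groups, target_rate):
    """Post-process classifier scores with per-group thresholds so each
    group's positive-prediction rate matches target_rate (up to rounding)."""
    preds = [0] * len(scores)
    for g in set(groups):
        g_scores = sorted((s for s, gg in zip(scores, groups) if gg == g),
                          reverse=True)
        k = round(target_rate * len(g_scores))
        # Threshold at the k-th highest score within the group.
        thresh = g_scores[k - 1] if k > 0 else float("inf")
        for i, (s, gg) in enumerate(zip(scores, groups)):
            if gg == g and s >= thresh:
                preds[i] = 1
    return preds

scores = [0.9, 0.8, 0.2, 0.1, 0.6, 0.5, 0.4, 0.3]
groups = ["a"] * 4 + ["b"] * 4
preds = equalize_positive_rate(scores, groups, 0.5)
print(demographic_parity_gap(preds, groups))  # 0.0
```

Such post-processing trades some accuracy for parity; the approaches described in the talk instead act on the training data or the learned model components themselves.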
-
Abstract Artificial intelligence (AI) can be used to improve performance across a wide range of Earth system prediction tasks. As with any application of AI, it is important for AI to be developed in an ethical and responsible manner to minimize bias and other harmful effects. In this work, we extend our previous work demonstrating how AI can go wrong with weather and climate applications by presenting a categorization of bias for AI in the Earth sciences. This categorization can assist AI developers in identifying potential biases that can affect their model throughout the AI development life cycle. We highlight examples of each category of bias from a variety of Earth system prediction tasks.
