Abstract Recent calls have been made for equity tools and frameworks to be integrated throughout the research and design life cycle, from conception to implementation, with an emphasis on reducing inequity in artificial intelligence (AI) and machine learning (ML) applications. Simply stating that equity should be integrated throughout, however, leaves much to be desired as industrial ecology (IE) researchers, practitioners, and decision-makers attempt to employ equitable practices. In this forum piece, we use a critical review approach to explain how socioecological inequities emerge in ML applications across their life cycle stages, using the food system as an example. We exemplify the use of a comprehensive questionnaire to delineate unfair ML bias across data bias, algorithmic bias, and selection and deployment bias categories. Finally, we provide consolidated guidance and tailored strategies to help address AI/ML unfair bias and inequity in IE applications; specifically, the guidance and tools help to address sensitivity, reliability, and uncertainty challenges. We also discuss how bias and inequity in AI/ML affect IE research and design domains beyond the food system, such as living labs and circularity. We conclude with the future directions IE should take to address unfair bias and inequity in AI/ML, and call for systemic equity to be embedded throughout IE applications to fundamentally understand domain-specific socioecological inequities, identify potential unfairness in ML, and select mitigation strategies in a manner that translates across different research domains.
Understanding Enablers and Barriers for Deploying AI/ML in Humanitarian Organizations: the Case of DRC's Foresight
Artificial Intelligence (AI) and Machine Learning (ML) capabilities have the potential, at scale, to tackle some of the world's most pressing humanitarian challenges and help alleviate the suffering of millions of people. Although AI and ML systems have been leveraged and deployed by many humanitarian organizations, it remains unclear which factors contributed to their successful implementation and adoption. In this study, we aim to understand what it takes to deploy AI and ML capabilities successfully within the humanitarian ecosystem and to identify challenges to be overcome. This preliminary research examines the deployment and application of an ML model developed by the Danish Refugee Council (DRC) for predicting forced displacement. We use qualitative methods to identify key barriers and enablers from a variety of sources describing the deployment of their Foresight model, a machine learning-based predictive tool. These results can help the humanitarian community better understand enablers and barriers for deploying and scaling up AI and ML solutions. We hope this paper can spark discussions about the successful deployment of AI and ML capabilities and encourage sharing of best practices by the humanitarian community.
- Award ID(s): 2125677
- PAR ID: 10448593
- Date Published:
- Journal Name: Proceedings of the IISE Annual Conference & Expo 2023
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract In agriculture, important unanswered questions about machine learning and artificial intelligence (ML/AI) include whether ML/AI will change how food is produced and whether ML algorithms will replace, or partially replace, farmers in the decision process. As ML/AI technologies become more accurate, they have the potential to improve profitability while reducing the impact of agriculture on the environment. Despite these benefits, however, there are many adoption barriers, including cost and the reluctance of farmers to adopt a decision tool they do not understand. The goal of this special issue is to discuss cutting-edge research on the use of ML/AI technologies in agriculture, barriers to the adoption of these technologies, and how these technologies can affect our current workforce. The papers are separated into three sections: Machine Learning within Crops, Pasture, and Irrigation; Machine Learning in Predicting Crop Disease; and Society and Policy of Machine Learning.
Machine learning (ML) plays an increasingly important role in improving a user's experience. However, most UX practitioners face challenges in understanding ML's capabilities or envisioning what it might be. We interviewed 13 designers who had many years of experience designing the UX of ML-enhanced products and services. We probed them to characterize their practices. They shared they do not view themselves as ML experts, nor do they think learning more about ML would make them better designers. Instead, our participants appeared to be the most successful when they engaged in ongoing collaboration with data scientists to help envision what to make and when they embraced a data-centric culture. We discuss the implications of these findings in terms of UX education and as opportunities for additional design research in support of UX designers working with ML.
Chen, Guohua; Khan, Faisal (Ed.) Artificial intelligence (AI) and machine learning (ML) are novel techniques to detect hidden patterns in environmental data. Despite their capabilities, these novel technologies have not been seriously used for real-world problems, such as real-time environmental monitoring. This survey established a framework to advance the novel applications of AI and ML techniques such as Tiny Machine Learning (TinyML) in water environments. The survey covered deep learning models and their advantages over classical ML models. The deep learning algorithms are the heart of TinyML models and are of paramount importance for practical uses in water environments. This survey highlighted the capabilities and discussed the possible applications of the TinyML models in water environments. This study indicated that the TinyML models on microcontrollers are useful for a number of cutting-edge problems in water environments, especially for monitoring purposes. The TinyML models on microcontrollers allow for in situ real-time environmental monitoring without transferring data to the cloud. It is concluded that monitoring systems based on TinyML models offer cheap tools to autonomously track pollutants in water and can replace traditional monitoring methods.
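As a hedged illustration of what TinyML toolchains (such as TensorFlow Lite for Microcontrollers) do when fitting a model onto a microcontroller, the sketch below quantizes a toy pollutant-alert model to 8-bit integer weights and runs integer-only inference. The feature names, weights, scale, and sensor readings are all invented for illustration and are not from the surveyed work:

```python
# Toy sketch: symmetric int8 quantization plus integer-only inference,
# the kind of transformation that lets a model run on a microcontroller
# without floating-point hardware. All values below are assumptions.

def quantize(values, scale):
    """Map floats to int8 with a fixed scale (symmetric quantization)."""
    return [max(-128, min(127, round(v / scale))) for v in values]

# Pretend these weights were trained offline (hypothetical values).
weights_f = [0.8, -0.3, 1.2]   # turbidity, pH deviation, conductivity
bias_f = -0.5
scale = 0.01                   # one integer step = 0.01 in float space

weights_q = quantize(weights_f, scale)
bias_q = round(bias_f / (scale * scale))  # bias lives in the accumulator scale

def tiny_infer(sensor_q):
    """Integer-only inference: int8 inputs x int8 weights -> int32 accumulator."""
    acc = bias_q
    for w, x in zip(weights_q, sensor_q):
        acc += w * x
    return acc > 0  # raise a pollution alert when the score is positive

reading = quantize([0.9, 0.1, 0.7], scale)  # sensor readings, quantized the same way
alert = tiny_infer(reading)
```

The design point is that both the model and the incoming sensor data share one fixed scale, so the whole inference loop is integer arithmetic that fits comfortably in microcontroller RAM, which is what enables the in situ monitoring without cloud transfer that the survey describes.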
Pham, Tien; Solomon, Latasha; Hohil, Myron E. (Ed.) Explainable Artificial Intelligence (XAI) is the capability of explaining the reasoning behind the choices made by a machine learning (ML) algorithm, which helps maintain the transparency of the algorithm's decision-making. Humans make thousands of decisions every day, and for each one an individual can explain the reasons behind the choice they made. It is not the same for ML and AI systems. Furthermore, XAI was not widely researched until recently, when the topic came to the fore and became one of the most relevant topics in AI for trustworthy and transparent outcomes. XAI tries to provide maximum transparency for an ML algorithm by answering questions about how models arrived at their output. ML models with XAI can explain the rationale behind their results, expose the weaknesses and strengths of the learning models, and indicate how the models will behave in the future. In this paper, we investigate XAI for algorithmic trustworthiness and transparency. We evaluate XAI on example use cases using the SHAP (SHapley Additive exPlanations) library, visualizing the effect of features individually and cumulatively in the prediction process.
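The paper above uses the SHAP library; as a hedged, library-free sketch of the additive-attribution idea behind it, the snippet below computes exact Shapley values for a toy three-feature linear model by enumerating feature coalitions. The model, instance, and baseline are invented for illustration and are not from the paper:

```python
# Exact Shapley values by brute-force enumeration of feature coalitions.
# Feasible only for a handful of features; SHAP approximates this at scale.
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attribution of f(x) - f(baseline) across features of x."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Weight of this coalition in the Shapley average.
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Evaluate f with features outside the coalition held at baseline.
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

f = lambda v: 3 * v[0] + 2 * v[1] + v[2]  # toy linear "model" (assumed)
phi = shapley_values(f, [1, 1, 1], [0, 0, 0])
# For a linear model each attribution equals that feature's term: [3.0, 2.0, 1.0]
```

The additivity property is what makes these values an "explanation": the per-feature attributions sum exactly to the gap between the model's prediction and its baseline output, which is the cumulative effect the paper visualizes with SHAP.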