This project aims to broaden AI education by developing and studying the efficacy of innovative learning practices and resources for AI education for social good. We have developed three AI learning modules for students to: 1) identify social issues that align with the SDGs in their community (e.g., poverty, hunger, quality education); 2) learn AI through hands-on labs and business applications; and 3) create AI-powered solutions in teams to address the social issues they have identified. Student teams are expected to situate AI learning in their communities and to contribute to those communities. Students then use the modules to engage in an interdisciplinary approach, facilitating AI learning for social good in information sciences and technology, geography, and computer science at three CSU HSIs (San Jose State University, Cal Poly Pomona, and CSU San Bernardino). Finally, we aim to evaluate the efficacy and impact of the proposed AI teaching methods and activities in terms of learning outcomes, student experience, student engagement, and equity.
Becoming Good at AI for Good
AI for good (AI4G) projects involve developing and applying artificial intelligence (AI) based solutions to further goals in areas such as sustainability, health, humanitarian aid, and social justice. Developing and deploying such solutions must be done in collaboration with partners who are experts in the domain in question and who already have experience in making progress towards such goals. Based on our experiences, we detail the different aspects of this type of collaboration, broken down into four high-level categories (communication, data, modeling, and impact), and distill eleven takeaways to guide such projects in the future. We briefly describe two case studies to illustrate how some of these takeaways were applied in practice during our past collaborations.
- Award ID(s): 1763108
- PAR ID: 10257015
- Date Published: 2021
- Journal Name: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’21), May 19–21, 2021
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like This
- Feasible and developmentally appropriate sociotechnical approaches for protecting youth from online risks have become a paramount concern among human-computer interaction research communities. We therefore conducted 38 interviews with entrepreneurs, IT professionals, clinicians, educators, and researchers who currently work in the space of youth online safety to understand the different sociotechnical approaches they proposed to keep youth safe online, while overcoming key challenges associated with these approaches. We identified three approaches taken among these stakeholders: 1) leveraging artificial intelligence (AI)/machine learning to detect risks, 2) building security/safety tools, and 3) developing new forms of parental control software. The trade-offs between privacy and protection, as well as other tensions among different stakeholders (e.g., tensions toward big-tech companies), arose as major challenges, followed by the subjective nature of risk, the lack of necessary but proprietary data, and the costs of developing these technical solutions. To overcome these challenges, stakeholders suggested solutions such as building centralized and multi-disciplinary collaborations, creating sustainable business plans, prioritizing human-centered approaches, and leveraging state-of-the-art AI. Our contribution to the body of literature is providing evidence-based implications for the design of sociotechnical solutions to keep youth safe online.
- We identify and describe episodes of sensemaking around challenges in modern artificial intelligence (AI)-based systems development that emerged in projects carried out by IBM and client companies. All projects used IBM Watson as the development platform for building tailored AI-based solutions to support workers or customers of the client companies. Yet many of the projects turned out to be significantly more challenging than IBM and its clients had expected. The analysis reveals that project members struggled to establish reliable meanings about the technology, the project, the context, and the data to act upon. The project members report multiple aspects of the projects that they had not expected to have to make sense of, yet that proved problematic. Many issues bear upon current-generation AI's inherent characteristics, such as dependency on large data sets and continuous improvement as more data becomes available. Those characteristics increase the complexity of the projects and call for balanced mindfulness to avoid unexpected problems.
- How do practitioners who develop consumer AI products scope, motivate, and conduct privacy work? Respecting privacy is a key principle for developing ethical, human-centered AI systems, but we cannot hope to better support practitioners without answers to that question. We interviewed 35 industry AI practitioners to bridge that gap. We found that practitioners viewed privacy as actions taken against pre-defined intrusions that can be exacerbated by the capabilities and requirements of AI, but few were aware of AI-specific privacy intrusions documented in prior literature. We found that their privacy work was rigidly defined and situated, guided by compliance with privacy regulations and policies, and generally demotivated beyond meeting minimum requirements. Finally, we found that the methods, tools, and resources they used in their privacy work generally did not help address the unique privacy risks introduced or exacerbated by their use of AI in their products. Collectively, these findings reveal the need and opportunity to create tools, resources, and support structures to improve practitioners' awareness of AI-specific privacy risks, motivations to do AI privacy work, and ability to address privacy harms introduced or exacerbated by their use of AI in consumer products.
- The need for citizens to better understand the ethical and social challenges of algorithmic systems has led to a rapid proliferation of AI literacy initiatives. After reviewing the literature on AI literacy projects, we found that most educational practices in this area are based on teaching programming fundamentals, primarily to K-12 students. This leaves out citizens and those who are primarily interested in understanding the implications of automated decision-making systems, rather than in learning to code. To address these gaps, this article explores the methodological contributions of responsible AI education practices that focus first on stakeholders when designing learning experiences for different audiences and contexts. The article examines the weaknesses identified in current AI literacy projects, explains the stakeholder-first approach, and analyzes several responsible AI education case studies to illustrate how such an approach can help overcome the aforementioned limitations. The results suggest that the stakeholder-first approach makes it possible to address audiences beyond the usual ones in the field of AI literacy, and to incorporate new content and methodologies depending on the needs of the respective audiences, thus opening new avenues for teaching and research in the field.