This paper describes an AI Book Club as an innovative 20-hour professional development (PD) model designed to equip teachers with AI content knowledge and an understanding of the ethical issues posed by bias in AI, both foundational to developing AI-literate citizens. The design of the intervention was motivated by a desire to manage the cognitive load of AI learning by spreading the PD program over several weeks and a desire to form and maintain a community of teachers interested in AI education during the COVID-19 pandemic. Each week participants spent an hour independently reading selections from an AI book, reviewing AI activities, and viewing videos of other educators teaching the activities, then met online for 1 hour to discuss the materials and brainstorm how they might adapt the materials for their classrooms. The participants in the AI Book Club were 37 middle school educators from 3 US school districts and 5 youth-serving organizations. The teachers were from STEM disciplines as well as Social Studies and Art. Eighty-nine percent were from underrepresented groups in STEM and CS. In this paper we describe the design of the AI Book Club, its implementation, and preliminary findings on teachers' impressions of the AI Book Club as a form of PD, thoughts about teaching AI in classrooms, and interest in continuing the book club model in the upcoming year. We conclude with recommendations for others interested in implementing a book club PD format for AI learning.
                    
                            
                            Challenges for AI Regulation in Health and for Healthcare Organizations: Notes from the University of Florida's NSF-Sponsored Workshop on AI Governance
                        
                    
    
Artificial intelligence (AI) has impacted human life at many levels, entailing economic and societal changes. AI algorithms are increasingly used by organizations to generate predictions that feed into decisions (e.g., who is eligible for insurance coverage, approved for bank loans, or selected for job interviews). Since the data used to develop the algorithms can contain biases such as gender or racial prejudice, AI predictions can become discriminatory. For-profit and not-for-profit organizations alike face the hurdles of developing, applying, and maintaining AI governance, ensuring that goal optimization aligns with ethical and fairness values.
        
    
- Award ID(s): 2221818
- PAR ID: 10502129
- Publisher / Repository: ACM SIGBioinformatics Record
- Date Published:
- Journal Name: ACM SIGBioinformatics Record
- Volume: 12
- Issue: 1
- ISSN: 2331-9291
- Page Range / eLocation ID: 1 to 3
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- In this post I want to talk about using generative AI to extend one of my academic software projects—the Python Tutor tool for learning programming—with an AI chat tutor. We often hear about GenAI being used in large-scale commercial settings, but we don't hear nearly as much about smaller-scale not-for-profit projects. Thus, this post serves as a case study of adding generative AI into a personal project where I didn't have much time, resources, or expertise at my disposal. Working on this project got me really excited about being here at this moment right as powerful GenAI tools are starting to become more accessible to nonexperts like myself.
- The public sector leverages artificial intelligence (AI) to enhance the efficiency, transparency, and accountability of civic operations and public services. This includes initiatives such as predictive waste management, facial recognition for identification, and advanced tools in the criminal justice system. While public-sector AI can improve efficiency and accountability, it also has the potential to perpetuate biases, infringe on privacy, and marginalize vulnerable groups. Responsible AI (RAI) research aims to address these concerns by focusing on fairness and equity through participatory AI. We invite researchers, community members, and public sector workers to collaborate on designing, developing, and deploying RAI systems that enhance public sector accountability and transparency. Key topics include raising awareness of AI's impact on the public sector, improving access to AI auditing tools, building public engagement capacity, fostering early community involvement to align AI innovations with public needs, and promoting accessible and inclusive participation in AI development. The workshop will feature two keynotes, two short paper sessions, and three discussion-oriented activities. Our goal is to create a platform for exchanging ideas and developing strategies to design community-engaged RAI systems while mitigating the potential harms of AI and maximizing its benefits in the public sector.
- Artificial intelligence (AI) and machine learning models are being increasingly deployed in real-world applications. In many of these applications, there is strong motivation to develop hybrid systems in which humans and AI algorithms can work together, leveraging their complementary strengths and weaknesses. We develop a Bayesian framework for combining the predictions and different types of confidence scores from humans and machines. The framework allows us to investigate the factors that influence complementarity, where a hybrid combination of human and machine predictions leads to better performance than combinations of human or machine predictions alone. We apply this framework to a large-scale dataset where humans and a variety of convolutional neural networks perform the same challenging image classification task. We show empirically and theoretically that complementarity can be achieved even if the human and machine classifiers perform at different accuracy levels as long as these accuracy differences fall within a bound determined by the latent correlation between human and machine classifier confidence scores. In addition, we demonstrate that hybrid human–machine performance can be improved by differentiating between the errors that humans and machine classifiers make across different class labels. Finally, our results show that eliciting and including human confidence ratings improve hybrid performance in the Bayesian combination model. Our approach is applicable to a wide variety of classification problems involving human and machine algorithms.
- AI is rapidly emerging as a tool that can be used by everyone, increasing its impact on our lives, society, and the economy. There is a need to develop educational programs and curricula that can increase capacity and diversity in AI as well as awareness of the implications of using AI-driven technologies. This paper reports on a workshop whose goals include developing guidelines for ensuring that we expand the diversity of people engaged in AI while expanding the capacity for AI curricula with a scope of content that will reflect the competencies and needs of the workforce. The scope for AI education included K-Gray and considered AI knowledge and competencies as well as AI literacy (including responsible use and ethical issues). Participants discussed recommendations for metrics measuring capacity and diversity as well as strategies for increasing capacity and diversity at different levels of education: K-12, undergraduate and graduate Computer Science (CS) majors and non-CS majors, the workforce, and the public.
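The Bayesian human–machine combination described in the related abstract above can be illustrated with a minimal sketch. Assuming conditional independence between the two classifiers given the true label (a simplification: the paper's full framework also models the latent correlation between the classifiers' confidence scores), a naive-Bayes-style fusion of their reported class probabilities might look like this; the function name and interface are hypothetical:

```python
def combine_predictions(p_human, p_machine, prior=None):
    """Naive-Bayes-style fusion of two classifiers' class probabilities.

    Assumes the human and machine reports are conditionally independent
    given the true class label; `prior` defaults to a uniform class prior.
    """
    k = len(p_human)
    if prior is None:
        prior = [1.0 / k] * k  # uniform prior over the k classes
    # Posterior is proportional to prior * (likelihood ratio of each report).
    unnorm = [h * m / p for h, m, p in zip(p_human, p_machine, prior)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Example: both classifiers lean toward class 0; the fused posterior is
# more confident in class 0 than either classifier alone.
print(combine_predictions([0.6, 0.4], [0.9, 0.1]))
```

When the two classifiers agree, the fused posterior sharpens beyond either individual estimate, which is one intuition behind the complementarity result in the abstract; when their errors are correlated, the independence assumption overstates the gain.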
 An official website of the United States government