Artificial Intelligence (AI) enhanced systems are widely adopted in post-secondary education; however, tools and activities for teaching AI and machine learning (ML) concepts to K-12 students have only recently become accessible. Research on K-12 AI education has largely examined student attitudes toward AI careers, AI ethics, and student use of existing AI agents such as voice assistants, and most of it has focused on high school and middle school. There is no consensus on which AI and ML concepts are grade-appropriate for elementary-aged students or on how elementary students explore and make sense of AI and ML tools. AI is a rapidly evolving technology, and as future decision-makers, children will need to be AI literate [1]. In this paper, we present elementary students' sense-making of simple machine learning concepts. Through this project, we hope to generate a new model for introducing AI concepts into elementary school curricula and to provide tangible, trainable representations of ML for students to explore in the physical world. In our first year, our focus has been on simpler machine learning algorithms. We aim to empower students not only to use AI tools but also to understand how they operate. We believe that appropriate activities can help late elementary-aged students develop foundational AI knowledge, namely (1) how a robot senses the world and (2) how a robot represents data for making decisions. Educational robotics programs have repeatedly been shown to produce positive learning impacts and increased interest [2]. In this pilot study, we leveraged the LEGO® Education SPIKE™ Prime to introduce ML concepts to upper elementary students. Through pilot testing in three one-week summer programs, we iteratively developed a limited display interface for supervised learning using the nearest neighbor algorithm. We collected videos to perform a qualitative evaluation.
Based on our analysis of student behavior and of how students trained the robots, we found that some students show interest in exploring pre-trained ML models and training new models while building personally relevant robotic creations and developing solutions to engineering tasks. While students were interested in using the ML tools for complex tasks, they preferred block programming or manual motor controls where they felt those were practical.
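The abstract above names the nearest neighbor algorithm as the supervised learning method behind the display interface. As a minimal sketch of that idea, the following classifies a new sensor reading by the label of the closest stored example; the RGB readings and brick labels here are hypothetical illustrations, not data or code from the study.

```python
# Minimal sketch of supervised learning with the nearest neighbor algorithm.
# The sensor readings and labels below are made-up examples, not study data.

def nearest_neighbor(training_data, sample):
    """Return the label of the training example closest to `sample`.

    training_data: list of (features, label) pairs
    sample: tuple of numeric features (e.g., RGB values from a color sensor)
    """
    def distance(a, b):
        # Squared Euclidean distance; skipping the square root is safe
        # because it does not change which neighbor is closest.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    _, label = min(training_data, key=lambda pair: distance(pair[0], sample))
    return label

# "Training" is just storing labeled examples: here, hypothetical RGB readings.
examples = [
    ((200, 30, 40), "red brick"),
    ((30, 60, 210), "blue brick"),
    ((220, 210, 60), "yellow brick"),
]

print(nearest_neighbor(examples, (190, 45, 50)))  # -> red brick
```

Because the "model" is just the stored examples, training a new model amounts to recording more labeled readings, which is what makes the approach tangible for elementary students.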
                            
Deliberating with AI: Improving Decision-Making for the Future through Participatory AI Design and Stakeholder Deliberation
                        
                    
    
Research exploring how to support decision-making has often used machine learning to automate or assist human decisions. We take an alternative approach to improving decision-making: using machine learning to help stakeholders surface ways to make decision-making processes better and fairer. We created "Deliberating with AI", a web tool that enables people to create and evaluate ML models in order to examine the strengths and shortcomings of past decision-making and deliberate on how to improve future decisions. We applied this tool in a people-selection context, having stakeholders, decision makers (faculty) and decision subjects (students), use the tool to improve graduate school admission decisions. Through our case study, we demonstrate how the stakeholders used the web tool to create ML models that served as boundary objects for deliberating over organizational decision-making practices. We share insights from our study to inform future research on stakeholder-centered participatory AI design and technology for organizational decision-making.
        
    
    
- PAR ID: 10463942
- Date Published:
- Journal Name: Proceedings of the ACM on Human-Computer Interaction
- Volume: 7
- Issue: CSCW1
- ISSN: 2573-0142
- Page Range / eLocation ID: 1 to 32
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
Explanations can help users of Artificial Intelligence (AI) systems gain a better understanding of the reasoning behind a model's decision, facilitate their trust in AI, and assist them in making informed decisions. Because of these benefits for how users interact and collaborate with AI, the AI/ML community has increasingly moved toward developing understandable or interpretable models, while design researchers continue to study ways to present explanations of these models' decisions in a coherent form. However, there is still a lack of intentional design effort from the HCI community around these explanation system designs. In this paper, we contribute a framework to support the design and validation of explainable AI systems, one that requires carefully thinking through design decisions at several important decision points. This framework captures key aspects of explanations ranging from the target users, to the data, to the AI models in use. We also discuss how we applied our framework to design an explanation interface for trace link prediction of software artifacts.
- 
Pham, Tien; Solomon, Latasha; Hohil, Myron E. (Eds.) Explainable Artificial Intelligence (XAI) is the capability of explaining the reasoning behind the choices made by a machine learning (ML) algorithm, which can help users understand and maintain the transparency of the algorithm's decision-making. Humans make thousands of decisions every day, and for every decision an individual makes, they can explain the reasons behind their choice. The same is not true of ML and AI systems. XAI was not widely researched until the topic came to the fore; it has since become one of the most relevant topics in AI for trustworthy and transparent outcomes. XAI tries to provide maximum transparency for an ML algorithm by answering questions about how the model arrived at its output. ML models with XAI can explain the rationale behind their results, reveal the weaknesses and strengths of the learning models, and indicate how the models will behave in the future. In this paper, we investigate XAI for algorithmic trustworthiness and transparency. We evaluate XAI on example use cases using the SHAP (SHapley Additive exPlanations) library, visualizing the effect of features individually and cumulatively in the prediction process.
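The SHAP library used in the abstract above approximates Shapley values efficiently for real models. To make the underlying idea concrete, the following is a minimal sketch that computes Shapley values exactly, by averaging each feature's marginal contribution over all feature orderings, for a toy linear model; the weights, instance, and baseline are made-up assumptions, and this is not the SHAP library's API.

```python
# Minimal sketch of the Shapley-value idea behind SHAP, computed exactly
# for a tiny model by brute force over all feature orderings. The linear
# "model" and feature values here are hypothetical illustrations.

from itertools import permutations

def shapley_values(predict, instance, baseline):
    """Exact Shapley values: average each feature's marginal contribution
    over all orderings, holding not-yet-added features at baseline values."""
    n = len(instance)
    phi = [0.0] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        current = list(baseline)
        prev = predict(current)
        for i in order:
            current[i] = instance[i]   # reveal feature i
            now = predict(current)
            phi[i] += now - prev       # marginal contribution of feature i
            prev = now
    return [p / len(orderings) for p in phi]

# A toy linear model. For linear models, the Shapley value of feature i
# reduces to w_i * (x_i - baseline_i), which makes the output easy to check.
weights = [2.0, -1.0, 0.5]
predict = lambda x: sum(w * v for w, v in zip(weights, x))

phi = shapley_values(predict, instance=[3.0, 1.0, 4.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # -> [6.0, -1.0, 2.0]
```

Note the additivity property SHAP plots rely on: the values sum to the difference between the model's prediction for the instance and for the baseline. The brute-force loop is exponential in the number of features, which is why the SHAP library uses approximations and model-specific shortcuts instead.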
- 
The use of AI-based decision aids in diverse domains has inspired many empirical investigations into how AI models' decision recommendations impact humans' decision accuracy in AI-assisted decision making, while explorations of the impacts on humans' decision fairness are largely lacking despite their clear importance. In this paper, using a real-world business decision-making scenario, bidding in rental housing markets, as our testbed, we present an experimental study of how the bias level of an AI-based decision aid and the provision of AI explanations affect the fairness of humans' decisions, both during and after their use of the decision aid. Our results suggest that when people are assisted by an AI-based decision aid, both higher levels of racial bias in the decision aid and, surprisingly, the presence of AI explanations result in more unfair human decisions across racial groups. Moreover, these impacts are made partly through triggering humans' "disparate interactions" with AI. However, regardless of the AI bias level and the presence of AI explanations, when people return to making independent decisions after using the AI-based decision aid, their decisions no longer exhibit significant unfairness across racial groups.
- 
When people receive advice while making difficult decisions, they often make better decisions in the moment and also increase their knowledge in the process. However, such incidental learning can only occur when people cognitively engage with the information they receive and process it thoughtfully. How do people process the information and advice they receive from AI, and do they engage with it deeply enough to enable learning? To answer these questions, we conducted three experiments in which individuals were asked to make nutritional decisions and received simulated AI recommendations and explanations. In the first experiment, we found that when people were presented with both a recommendation and an explanation before making their choice, they made better decisions than they did when they received no such help, but they did not learn. In the second experiment, participants first made their own choice and only then saw a recommendation and an explanation from AI; this condition also resulted in improved decisions, but no learning. However, in our third experiment, participants were presented with just an AI explanation but no recommendation and had to arrive at their own decision. This condition led to both more accurate decisions and learning gains. We hypothesize that the learning gains in this condition were due to the deeper engagement with explanations needed to arrive at the decisions. This work provides some of the most direct evidence to date that providing people with AI-generated recommendations and explanations may not be sufficient to ensure that they engage carefully with the AI-provided information. It also presents one technique that enables incidental learning and, by implication, can help people process AI recommendations and explanations more carefully.