In this paper, we consider the problem of learning Boolean formulae from examples obtained by actively querying an oracle that can label these examples as either positive or negative. This problem has received attention in both the machine learning and formal methods communities, and it has been shown to have exponential worst-case complexity in the general case as well as for many restrictions. We focus on learning sparse Boolean formulae, which depend on only a small (but unknown) subset of the overall vocabulary of atomic propositions. We propose two algorithms, the first based on binary search in the Hamming space and the second based on a random walk on the Boolean hypercube, to learn these sparse Boolean formulae with a given confidence. This assumption of sparsity is motivated by the problem of mining explanations for decisions made by artificially intelligent (AI) algorithms, where the explanation of an individual decision may depend on a small but unknown subset of all the inputs to the algorithm. We demonstrate the use of these algorithms in automatically generating explanations of these decisions. These explanations will make intelligent systems more understandable and accountable to human users, facilitate easier audits, and provide diagnostic information in the…
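A minimal sketch of the binary-search-in-Hamming-space idea mentioned above, assuming a black-box labelling oracle over {0,1}^n; the function and variable names here are illustrative, not taken from the paper. Given one positive and one negative example, the sketch binary-searches along the sequence of points obtained by flipping their disagreeing bits, and returns a single bit whose flip changes the oracle's label, i.e., a variable the hidden sparse formula must depend on.

```python
# Illustrative sketch only; assumes a black-box oracle f: {0,1}^n -> {0,1}.
from typing import Callable, List


def find_relevant_variable(
    oracle: Callable[[List[int]], int],
    pos: List[int],  # an example the oracle labels 1
    neg: List[int],  # an example the oracle labels 0
) -> int:
    """Return the index of one variable the hidden formula depends on."""
    # Positions where the two examples disagree.
    diff = [i for i in range(len(pos)) if pos[i] != neg[i]]

    def point(k: int) -> List[int]:
        # Start from the positive example and copy the first k disagreeing
        # bits over from the negative example.
        x = list(pos)
        for i in diff[:k]:
            x[i] = neg[i]
        return x

    # Invariant: oracle(point(lo)) == 1 and oracle(point(hi)) == 0.
    lo, hi = 0, len(diff)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if oracle(point(mid)) == 1:
            lo = mid
        else:
            hi = mid
    # point(lo) and point(hi) differ only in bit diff[lo] yet receive
    # different labels, so the formula depends on variable diff[lo].
    return diff[lo]


if __name__ == "__main__":
    # Hypothetical 10-variable target that depends only on x0 and x3.
    def f(x: List[int]) -> int:
        return int(x[0] == 1 and x[3] == 0)

    pos = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # f(pos) = 1
    neg = [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]  # f(neg) = 0
    print(find_relevant_variable(f, pos, neg))  # prints 0, a relevant variable
```

Each call uses O(log n) oracle queries, which is what makes this building block attractive when the formula depends on only a few of the n atomic propositions.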
Do People Engage Cognitively with AI? Impact of AI Assistance on Incidental Learning
When people receive advice while making difficult decisions, they often make better decisions in the moment and also increase their knowledge in the process. However, such incidental learning can only occur when people cognitively engage with the information they receive and process this information thoughtfully. How do people process the information and advice they receive from AI, and do they engage with it deeply enough to enable learning? To answer these questions, we conducted three experiments in which individuals were asked to make nutritional decisions and received simulated AI recommendations and explanations. In the first experiment, we found that when people were presented with both a recommendation and an explanation before making their choice, they made better decisions than they did when they received no such help, but they did not learn. In the second experiment, participants first made their own choice, and only then saw a recommendation and an explanation from AI; this condition also resulted in improved decisions, but no learning. However, in our third experiment, participants were presented with just an AI explanation but no recommendation and had to arrive at their own decision. This condition led to both more accurate decisions and learning gains. We hypothesize…
- Award ID(s): 2107391
- Publication Date:
- NSF-PAR ID: 10366302
- Journal Name: 27th International Conference on Intelligent User Interfaces (IUI ’22)
- Page Range or eLocation-ID: 794 to 806
- Sponsoring Org: National Science Foundation
More Like this
-
Product retrieval systems have served as the main entry point for customers to discover and purchase products online. With increasing concerns about the transparency and accountability of AI systems, studies on explainable information retrieval have received more and more attention in the research community. Interestingly, in the domain of e-commerce, despite extensive studies on explainable product recommendation, the study of explainable product search is still at an early stage. In this paper, we study how to construct effective explainable product search by comparing model-agnostic explanation paradigms with model-intrinsic paradigms and analyzing the important factors that determine the performance of product search explanations. We propose an explainable product search model with model-intrinsic interpretability and conduct crowdsourcing to compare it with the state-of-the-art explainable product search model with model-agnostic interpretability. We observe that both paradigms have their own advantages and that the effectiveness of search explanations on different properties is affected by different factors. For example, explanation fidelity is more important for users' overall satisfaction with the system, while explanation novelty may be more useful in attracting user purchases. These findings could have important implications for future studies and the design of explainable product search engines.
-
Background/Context: After-school programs that focus on integrating computer programming and mathematics in authentic environments are seldom accessible to students from culturally and linguistically diverse backgrounds, particularly bilingual Latina students in rural contexts. Providing a context that broadens Latina students’ participation in mathematics and computer programming requires educators to carefully examine how verbal and nonverbal language is used to interact with and position students as they learn new concepts in middle school. This is also an important stage for adolescents because they are likely to make decisions about their future careers in STEM. Having access to discourse and teaching practices that invite students to participate in mathematics and computer programming affords them opportunities to engage with these fields. Purpose/Focus of Study: This case study analyzes how small-group interactions mediated the positionings of Cindy, a bilingual Latina, as she learned binary numbers in an after-school program that integrated computer programming and mathematics (CPM). Setting: The Advancing Out-of-School Learning in Mathematics and Engineering (AOLME) program was held in a rural bilingual (Spanish and English) middle school in the Southwest. The after-school program was designed to provide experiences for primarily Latinx students to learn how to integrate mathematics with computer programming using Raspberry Pi…
-
Explanations can help users of Artificial Intelligence (AI) systems gain a better understanding of the reasoning behind the model’s decision, facilitate their trust in AI, and assist them in making informed decisions. These benefits for how users interact and collaborate with AI have stirred the AI/ML community toward developing more understandable or interpretable models, while design researchers continue to study and research ways to present explanations of these models’ decisions in a coherent form. However, there is still a lack of intentional design effort from the HCI community around these explanation system designs. In this paper, we contribute a framework to support the design and validation of explainable AI systems, one that requires carefully thinking through design decisions at several important decision points. This framework captures key aspects of explanations ranging from target users, to the data, to the AI models in use. We also discuss how we applied our framework to design an explanation interface for trace link prediction of software artifacts.
-
Category learning is fundamental to cognition, but little is known about how it proceeds in real-world environments when learners do not have instructions to search for category-relevant information, do not make overt category decisions, and do not experience direct feedback. Prior research demonstrates that listeners can acquire task-irrelevant auditory categories incidentally as they engage in primarily visuomotor tasks. The current study examines the factors that support this incidental category learning. Three experiments systematically manipulated the relationship of four novel auditory categories with a consistent visual feature (color or location) that informed a simple behavioral keypress response regarding the visual feature. In an in-person experiment and two online replications with extensions, incidental auditory category learning occurred reliably when category exemplars consistently aligned with the visuomotor demands of the primary task, but not when they were misaligned. The presence of an additional irrelevant visual feature that was uncorrelated with the primary task demands neither enhanced nor harmed incidental learning. By contrast, incidental learning did not occur when auditory categories were aligned consistently with one visual feature, but the motor response in the primary task was aligned with another, category-unaligned visual feature. Moreover, category learning did not reliably occur across passive observation or…