Title: Combining Fast and Slow Thinking for Human-like and Efficient Decisions in Constrained Environments
Current AI systems lack several important human capabilities, such as adaptability, generalizability, self-control, consistency, common sense, and causal reasoning. We believe that existing cognitive theories of human decision making, such as the "thinking fast and slow" theory, can provide insights on how to advance AI systems towards some of these capabilities. In this paper, we propose a general architecture based on fast/slow solvers and a metacognitive component. We then present experimental results on the behavior of an instance of this architecture, for AI systems that make decisions about navigating in a constrained environment. We show how combining the fast and slow decision modalities, which can be implemented by learning and reasoning components respectively, allows the system to evolve over time and gradually pass from slow to fast thinking with enough experience, and that this greatly improves decision quality, resource consumption, and efficiency.
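The abstract's core mechanism — a metacognitive gate that routes a decision to a cheap "fast" cached policy once the "slow" deliberative solver has handled the same situation often enough — can be sketched as follows. This is a minimal illustration under assumed names and a simple confidence threshold; it is not the authors' implementation, and the grid-navigation "slow" solver here is just a greedy step toward the goal.

```python
class FastSlowAgent:
    """Toy sketch of a fast/slow architecture with a metacognitive gate.
    All names and the threshold policy are illustrative assumptions."""

    def __init__(self, confidence_threshold=3):
        # "Fast" memory: (state, goal) -> (cached action, times solved slowly)
        self.experience = {}
        self.threshold = confidence_threshold

    def slow_solve(self, state, goal):
        # Deliberative "System 2" reasoning; here, a greedy step toward goal.
        dx, dy = goal[0] - state[0], goal[1] - state[1]
        if abs(dx) >= abs(dy):
            return (1 if dx > 0 else -1, 0)
        return (0, 1 if dy > 0 else -1)

    def decide(self, state, goal):
        # Metacognitive gate: trust the fast path only after this situation
        # has been solved deliberately `threshold` times.
        action, seen = self.experience.get((state, goal), (None, 0))
        if seen >= self.threshold:
            return action, "fast"
        action = self.slow_solve(state, goal)
        self.experience[(state, goal)] = (action, seen + 1)
        return action, "slow"
```

Repeated calls on the same state begin in the slow modality and, with enough experience, switch to the fast one — the gradual handover the abstract describes.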
Journal Name:
Proceedings of the 16th International Workshop on Neural-Symbolic Learning and Reasoning (NeSy) 2022
Sponsoring Org:
National Science Foundation
More Like This
  1. Explanations can help users of Artificial Intelligence (AI) systems gain a better understanding of the reasoning behind a model's decision, facilitate their trust in AI, and assist them in making informed decisions. These benefits for how users interact and collaborate with AI have stirred the AI/ML community towards developing understandable or interpretable models, while design researchers continue to study ways to present explanations of these models' decisions in a coherent form. However, there is still a lack of intentional design effort from the HCI community around these explanation system designs. In this paper, we contribute a framework to support the design and validation of explainable AI systems; one that requires carefully thinking through design decisions at several important decision points. This framework captures key aspects of explanations ranging from target users, to the data, to the AI models in use. We also discuss how we applied our framework to design an explanation interface for trace link prediction of software artifacts.
  2. Distributed cyber-infrastructures and Artificial Intelligence (AI) are transformative technologies that will play a pivotal role in the future of society and the scientific community. Internet of Things (IoT) applications harbor vast quantities of connected devices that collect a massive amount of sensitive information (e.g., medical, financial), which is usually analyzed either at the edge or in federated cloud systems via AI/Machine Learning (ML) algorithms to make critical decisions (e.g., diagnosis). It is of paramount importance to ensure the security, privacy, and trustworthiness of data collection, analysis, and decision-making processes. However, system complexity and increased attack surfaces make these applications vulnerable to system breaches, single points of failure, and various cyber-attacks. Moreover, advances in quantum computing exacerbate the security and privacy challenges: emerging quantum computers can break the conventional cryptographic systems that offer cyber-security services, public key infrastructures, and privacy-enhancing technologies. Therefore, there is a vital need for new cyber-security paradigms that can address the resiliency, long-term security, and efficiency requirements of distributed cyber-infrastructures. In this work, we propose a vision of a distributed architecture and cyber-security framework that uniquely synergizes secure computation, Physical Quantum Key Distribution (PQKD), NIST Post-Quantum Cryptography (PQC) efforts, and AI/ML algorithms to achieve breach-resilient, functional, and efficient cyber-security services. At the heart of our proposal lies a new Multi-Party Computation Quantum Network Core (MPC-QNC) that enables fast yet quantum-safe execution of distributed computation protocols via integration of PQKD infrastructure and hardware acceleration elements.
We showcase the capabilities of MPC-QNC by instantiating it for Public Key Infrastructures (PKI) and federated ML in our HDQPKI and TPQ-ML frameworks, respectively. HDQPKI is (to the best of our knowledge) the first hybrid and distributed post-quantum PKI that harnesses PQKD and NIST PQC standards to offer the highest level of quantum safety with breach resiliency against active adversaries. TPQ-ML presents a post-quantum secure and privacy-preserving federated ML infrastructure.
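The general "hybrid" idea in this abstract — combining a QKD-derived key with a post-quantum KEM secret so the session key survives a compromise of either source — can be sketched with a simple two-input key derivation. This is an illustrative construction under assumed names, not the MPC-QNC or HDQPKI protocol itself.

```python
import hashlib
import hmac

def hybrid_key(qkd_key: bytes, pqc_shared_secret: bytes,
               context: bytes = b"hybrid-kdf-demo") -> bytes:
    """Illustrative hybrid key derivation: the output remains secret as long
    as EITHER input secret is uncompromised. HKDF-style extract-then-expand
    over both secrets, bound to a context label (all names are assumptions)."""
    # Extract: mix both secrets and the context into a pseudorandom key.
    prk = hmac.new(context, qkd_key + pqc_shared_secret, hashlib.sha256).digest()
    # Expand: derive the 32-byte session key from the pseudorandom key.
    return hmac.new(prk, b"session-key", hashlib.sha256).digest()
```

Two parties holding the same QKD key and the same KEM shared secret derive the same session key; changing either input yields an unrelated key.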
  3. A received wisdom is that automated decision-making serves as an anti-bias intervention. The conceit is that removing humans from the decision-making process will also eliminate human bias. The paradox, however, is that in some instances, automated decision-making has served to replicate and amplify bias. With a case study of the algorithmic capture of hiring as heuristic device, this Article provides a taxonomy of problematic features associated with algorithmic decision-making as anti-bias intervention and argues that those features are at odds with the fundamental principle of equal opportunity in employment. To examine these problematic features within the context of algorithmic hiring and to explore potential legal approaches to rectifying them, the Article brings together two streams of legal scholarship: law and technology studies and employment & labor law. Counterintuitively, the Article contends that the framing of algorithmic bias as a technical problem is misguided. Rather, the Article's central claim is that bias is introduced in the hiring process, in large part, due to an American legal tradition of deference to employers, especially allowing for such nebulous hiring criteria as "cultural fit." The Article observes the lack of legal frameworks that take into account the emerging technological capabilities of hiring tools, which make it difficult to detect disparate impact. The Article thus argues for a re-thinking of legal frameworks that take into account both the liability of employers and that of the makers of algorithmic hiring systems who, as brokers, owe a fiduciary duty of care. Particularly related to Title VII, the Article proposes that, in legal reasoning corollary to extant tort doctrines, an employer's failure to audit and correct its automated hiring platforms for disparate impact could serve as prima facie evidence of discriminatory intent, leading to the development of the doctrine of discrimination per se.
The Article also considers approaches separate from employment law, such as establishing consumer legal protections for job applicants that would mandate their access to the dossier of information consulted by automated hiring systems in making the employment decision.
  4. Explainability has emerged as a critical AI research objective, but the breadth of proposed methods and application domains suggests that criteria for explanation vary greatly. In particular, what counts as a good explanation, and what kinds of explanation are computationally feasible, has become trickier in light of opaque "black box" systems such as deep neural networks. Explanation in such cases has drifted from what many philosophers stipulated as having to involve deductive and causal principles to mere "interpretation," which approximates what happened in the target system to varying degrees. However, such post hoc constructed rationalizations are highly problematic for social robots that operate interactively in spaces shared with humans. For in such social contexts, explanations of behavior, and, in particular, justifications for violations of expected behavior, should make reference to socially accepted principles and norms. In this article, we show how a social robot's actions can face explanatory demands for how it came to act on its decision, what goals, tasks, or purposes its design intended those actions to pursue, and what norms or social constraints the system recognizes in the course of its action. As a result, we argue that explanations for social robots will need to be accurate representations of the system's operation along causal, purposive, and justificatory lines. These explanations will need to generate appropriate references to principles and norms; explanations based on mere "interpretability" will ultimately fail to connect the robot's behaviors to its appropriate determinants. We then lay out the foundations for a cognitive robotic architecture for HRI, together with particular component algorithms, for generating explanations and engaging in justificatory dialogues with human interactants.
Such explanations track the robot's actual decision-making and behavior, which themselves are determined by normative principles the robot can describe and use for justifications.
  5. Cyber-Physical-Human Systems (CPHS) interconnect humans, physical plants, and cyber infrastructure across space and time. Industrial processes, electromechanical systems operations, and medical diagnosis are some examples where one can see the intersection of human, physical, and cyber components. The emergence of Artificial Intelligence (AI) based computational models, controllers, and decision support engines has improved the efficiency and cost effectiveness of such systems and processes. These CPHS typically involve a collaborative decision environment comprising AI-based models and human experts. Active Learning (AL) is a category of AI algorithms which aims to learn an efficient decision model by combining the domain expertise of the human expert and the computational capabilities of the AI model. Given the indispensable role of humans and the lack of understanding about human behavior in collaborative decision environments, modeling and prediction of behavioral biases is a critical need. This paper, for the first time, introduces different behavioral biases within an AL context and investigates their impacts on the performance of AL strategies. The modeling of behavioral biases is demonstrated using experiments conducted on a real-world pancreatic cancer dataset. It is observed that the classification accuracy of the decision model decreases by at least 20% in the case of all the behavioral biases.
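The setting this abstract studies — an uncertainty-sampling AL loop whose human oracle returns biased labels — can be sketched in miniature. The synthetic 1-D dataset, nearest-centroid learner, and label-flip bias model below are illustrative assumptions; the paper's experiments use a real pancreatic cancer dataset and its own bias models.

```python
import random

def nearest_centroid_fit(X, y):
    # Tiny 1-D classifier: per-class centroid (mean) of labeled points.
    return {c: sum(x for x, lab in zip(X, y) if lab == c) /
               sum(1 for lab in y if lab == c)
            for c in set(y)}

def predict(cents, x):
    return min(cents, key=lambda c: abs(cents[c] - x))

def active_learn(flip_prob, rounds=30, seed=0):
    """Uncertainty-sampling AL with a (hypothetical) biased oracle that
    mislabels queried points with probability `flip_prob`."""
    rng = random.Random(seed)
    neg = [(rng.gauss(-2, 1), 0) for _ in range(200)]
    pos = [(rng.gauss(2, 1), 1) for _ in range(200)]
    labeled = [neg[0], pos[0]]          # clean seed labels, one per class
    unlabeled = neg[1:] + pos[1:]
    rng.shuffle(unlabeled)
    for _ in range(rounds):
        X, y = zip(*labeled)
        cents = nearest_centroid_fit(X, y)
        # Query the most uncertain point: closest to the decision boundary.
        i = min(range(len(unlabeled)),
                key=lambda j: abs(abs(cents[0] - unlabeled[j][0]) -
                                  abs(cents[1] - unlabeled[j][0])))
        x, true_y = unlabeled.pop(i)
        # Biased oracle: flips the true label with probability flip_prob.
        noisy_y = (1 - true_y) if rng.random() < flip_prob else true_y
        labeled.append((x, noisy_y))
    X, y = zip(*labeled)
    cents = nearest_centroid_fit(X, y)
    test = [(rng.gauss(-2, 1), 0) for _ in range(100)] + \
           [(rng.gauss(2, 1), 1) for _ in range(100)]
    return sum(predict(cents, x) == yt for x, yt in test) / len(test)
```

Comparing `active_learn(0.0)` against a run with a nonzero flip probability makes the interaction between query strategy and oracle reliability concrete; the magnitude of any accuracy drop depends on the learner and the bias model, so this toy should not be read as reproducing the paper's reported 20% figure.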