- Publication Date:
- NSF-PAR ID:
- Journal Name: Proceedings of the Eighth Annual Conference on Advances in Cognitive Systems (ACS)
- Sponsoring Org: National Science Foundation
This paper presents a systematic review of the empirical literature that uses dual-task interference methods to investigate the online involvement of language in various cognitive tasks. In these studies, participants perform some primary task X that putatively recruits linguistic resources while also engaging in a secondary, concurrent task. If performance on the primary task decreases under interference, this is taken as evidence that language is involved in the primary task. We assessed studies (N = 101) reporting at least one experiment with verbal interference and at least one control task (either primary or secondary). We excluded papers with an explicitly clinical, neurological, or developmental focus. The primary tasks identified include categorization, memory, mental arithmetic, motor control, reasoning (verbal and visuospatial), task switching, theory of mind, visual change, and visuospatial integration and wayfinding. Overall, the present review found that internal language is likely to play a facilitative role in memory and categorization when items to be remembered or categorized have readily available labels, when inner speech can act as a form of behavioral self-cuing (inhibitory control, task set reminders, verbal strategy), and when inner speech is plausibly useful as "workspace," for example, for mental arithmetic. There is less evidence for the role of internal language […]
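The interference logic described above can be expressed as a simple comparison of primary-task performance across conditions: language involvement is suggested when a verbal secondary task hurts the primary task more than a matched non-verbal control task does. The sketch below is a minimal illustration of that comparison; the function name and the accuracy figures are hypothetical, not data from any reviewed study.

```python
def interference_effect(primary_alone, primary_with_verbal, primary_with_control):
    """Quantify selective dual-task interference on a primary task.

    Each argument is a list of per-participant accuracy scores (0..1).
    A positive return value means verbal interference was more harmful
    than the matched non-verbal control, the pattern taken as evidence
    for language involvement in the primary task.
    """
    mean = lambda xs: sum(xs) / len(xs)
    verbal_cost = mean(primary_alone) - mean(primary_with_verbal)
    control_cost = mean(primary_alone) - mean(primary_with_control)
    return verbal_cost - control_cost

# Hypothetical accuracies for a categorization task:
effect = interference_effect(
    primary_alone=[0.92, 0.88, 0.90],
    primary_with_verbal=[0.70, 0.68, 0.72],
    primary_with_control=[0.85, 0.83, 0.87],
)
```

Here the verbal secondary task costs 0.20 in mean accuracy against only 0.05 for the control task, so the selective interference effect is 0.15.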
Observations abound about the power of visual imagery in human intelligence, from how Nobel prize-winning physicists make their discoveries to how children understand bedtime stories. These observations raise an important question for cognitive science: what are the computations taking place in someone's mind when they use visual imagery? Answering this question is not easy and will require much continued research across the multiple disciplines of cognitive science. Here, we focus on a related and more circumscribed question from the perspective of artificial intelligence (AI): if an intelligent agent uses visual imagery-based knowledge representations and reasoning operations, then what kinds of problem solving might be possible, and how would such problem solving work? We highlight recent progress in AI toward answering these questions in the domain of visuospatial reasoning, looking at a case study of how imagery-based artificial agents can solve visuospatial intelligence tests. In particular, we first examine several variations of imagery-based knowledge representations and problem-solving strategies that are sufficient for solving problems from the Raven's Progressive Matrices intelligence test. We then look at how artificial agents, instead of being designed manually by AI researchers, might learn portions of their own knowledge and reasoning procedures […]
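To make the idea of imagery-based problem solving concrete, here is a deliberately tiny sketch in the spirit of such agents: panels are represented as sets of filled pixel coordinates, the A-to-B change in a 2x2 matrix is captured as added and removed pixels and re-applied to C, and the answer is the candidate image most similar to that prediction. The function names and toy panels are this sketch's own inventions, not the representations used by any specific agent discussed above.

```python
def predict_fourth(a, b, c):
    """Imagery-style prediction for a 2x2 matrix problem A B / C ?.

    Panels are sets of filled pixel coordinates. The A->B change is
    captured as added/removed pixels and re-applied to C.
    """
    added = b - a
    removed = a - b
    return (c - removed) | added

def solve(a, b, c, candidates):
    """Pick the candidate whose pixels best match the prediction,
    scoring similarity by the size of the symmetric difference."""
    target = predict_fourth(a, b, c)
    return min(candidates, key=lambda cand: len(cand ^ target))

# Toy panels: A is a horizontal bar, B adds a dot at (2, 2);
# C is the bar shifted down, so the answer should be the shifted
# bar plus the same dot.
A = {(0, 0), (0, 1), (0, 2)}
B = A | {(2, 2)}
C = {(1, 0), (1, 1), (1, 2)}
answer = solve(A, B, C, candidates=[C, C | {(2, 2)}, {(2, 2)}])
```

Note how the agent never names the objects involved; all reasoning happens directly over the pixel sets, which is the defining feature of the imagery-based approach.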
Visuospatial reasoning refers to a diverse set of skills that involve thinking about space and time. An artificial agent with access to a sufficiently large set of visuospatial reasoning skills might be able to generalize its reasoning ability to an unprecedented expanse of tasks, including portions of many popular intelligence tests. In this paper, we stress the importance of a developmental approach to the study of visuospatial reasoning, with an emphasis on fundamental skills. A comprehensive benchmark, with properties we outline in this paper including breadth, depth, explainability, and domain-specificity, would encourage and measure the genesis of such a skillset. Lacking an existing benchmark that satisfies these properties, we outline the design of a novel test. Such a benchmark would allow for expanded analysis of existing datasets' and agents' applicability to the problem of generalized visuospatial reasoning.
Recent years have seen the rapid adoption of artificial intelligence (AI) in every facet of society. The ubiquity of AI has led to an increasing demand to integrate AI learning experiences into K-12 education. Early learning experiences incorporating AI concepts and practices are critical for students to better understand, evaluate, and utilize AI technologies. AI planning is an important class of AI technologies in which an AI-driven agent utilizes the structure of a problem to construct plans of actions to perform a task. Although a growing number of efforts have explored promoting AI education for K-12 learners, limited work has investigated effective and engaging approaches for delivering AI learning experiences to elementary students. In this paper, we propose a visual interface to enable upper elementary students (grades 3-5, ages 8-11) to formulate AI planning tasks within a game-based learning environment. We present our approach to designing the visual interface as well as how the AI planning tasks are embedded within narrative-centered gameplay structured around a Use-Modify-Create scaffolding progression. Further, we present results from a qualitative study of upper elementary students using the visual interface. We discuss how the Use-Modify-Create approach supported student learning, as well as the misconceptions and […]
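The core idea of AI planning mentioned above, an agent using the structure of a problem to construct a plan of actions, can be sketched with a minimal STRIPS-style breadth-first planner. The action names and the door-and-key task below are hypothetical stand-ins, not the actual tasks from the game-based learning environment described in the paper.

```python
from collections import deque

def plan(initial, goal, actions):
    """Breadth-first search for a sequence of actions reaching the goal.

    States are frozensets of facts. Each action is a tuple
    (name, preconditions, add_effects, delete_effects), all sets of
    facts. An action applies when its preconditions hold in the state.
    """
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:          # all goal facts achieved
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                    # no plan exists

# Hypothetical game-world task: pick up a key, unlock a door, walk through.
actions = [
    ("pick-up-key", {"key-on-floor"}, {"has-key"}, {"key-on-floor"}),
    ("unlock-door", {"has-key", "door-locked"}, {"door-open"}, {"door-locked"}),
    ("walk-through", {"door-open"}, {"through-door"}, set()),
]
steps = plan({"key-on-floor", "door-locked"}, {"through-door"}, actions)
```

The planner discovers the ordering constraint on its own: the key must be collected before the door can be unlocked, which is exactly the kind of problem structure young learners are asked to reason about when formulating planning tasks.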
In this paper, we introduce a novel method to support remote telemanipulation tasks in complex environments by providing operators with an enhanced view of the task environment. Our method features a novel viewpoint adjustment algorithm designed to automatically mitigate occlusions caused by workspace geometry, supports visual exploration to provide operators with situation awareness in the remote environment, and mediates context-specific visual challenges by making viewpoint adjustments based on sparse input from the user. Our method builds on the dynamic camera telemanipulation viewing paradigm, in which a user controls a manipulation robot while a camera-in-hand robot alongside it servos to provide a sufficient view of the remote environment. We discuss the real-time motion optimization formulation used to arbitrate the various objectives in our shared-control-based method, particularly highlighting how our occlusion avoidance and viewpoint adaptation approaches fit within this framework. We present results from an empirical evaluation of our proposed occlusion avoidance approach as well as a user study that compares our telemanipulation shared-control method against alternative telemanipulation approaches. We discuss the implications of our work for future shared-control research and robotics applications.
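Arbitrating several viewing objectives in a motion optimization typically takes the general shape of a weighted cost over candidate camera poses. The sketch below shows only that generic weighted-sum shape in a one-dimensional toy setting; the objective names, weights, and scoring functions are illustrative assumptions, not the authors' actual formulation.

```python
def viewpoint_cost(pose, objectives, weights):
    """Weighted-sum arbitration of competing viewing objectives.

    `objectives` maps an objective name to a function scoring a
    candidate camera pose (lower is better); `weights` sets each
    objective's relative priority in the combined cost.
    """
    return sum(weights[name] * score(pose) for name, score in objectives.items())

def best_viewpoint(candidates, objectives, weights):
    """Pick the candidate pose with the lowest combined cost."""
    return min(candidates, key=lambda p: viewpoint_cost(p, objectives, weights))

# Hypothetical 1-D stand-in: a "pose" is a camera angle. Occlusion is
# worst near angle 0.0, and a smoothness term penalizes large jumps
# away from the current camera angle.
current = 0.2
objectives = {
    "occlusion": lambda a: max(0.0, 1.0 - abs(a)),  # view blocked near 0
    "smoothness": lambda a: abs(a - current),        # avoid abrupt motion
}
weights = {"occlusion": 2.0, "smoothness": 1.0}
chosen = best_viewpoint([0.0, 0.4, 1.0], objectives, weights)
```

With occlusion weighted twice as heavily as smoothness, the optimizer accepts a larger camera motion (angle 1.0) in exchange for a fully unoccluded view, illustrating how the weights encode the trade-off the shared-control method must arbitrate in real time.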