Title: Decoding human intent using a wearable system and multi-modal sensor data
Despite the phenomenal advances in the computational power of electronic systems, human-machine interaction has been largely limited to simple control devices, such as keyboards and mice, which rely only on physical input. Consequently, these systems either depend critically on close human guidance or operate almost independently. A richer experience can be achieved if cognitive inputs are used in addition to physical ones. Towards this end, this paper introduces a simple wearable system that consists of a motion processing unit and a brain-machine interface. We show that our system can successfully employ cognitive indicators to predict human activity.
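The abstract stays at the system level; the sketch below illustrates one plausible way motion (IMU) features and EEG-derived cognitive indicators could be fused for activity prediction. The feature choices, window handling, and classifier are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch (not the authors' implementation): fusing wearable IMU
# features with EEG-derived cognitive features to predict activity.
# Feature definitions, window lengths, and the classifier are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_imu_features(accel_window):
    """Simple statistics over one window of 3-axis accelerometer samples."""
    return np.hstack([accel_window.mean(axis=0), accel_window.std(axis=0)])

def extract_eeg_features(eeg_window):
    """Mean spectral power per EEG channel as a rough cognitive indicator."""
    psd = np.abs(np.fft.rfft(eeg_window, axis=0)) ** 2
    return psd.mean(axis=0)

def fuse(accel_window, eeg_window):
    """Concatenate the two modalities into a single feature vector."""
    return np.hstack([extract_imu_features(accel_window),
                      extract_eeg_features(eeg_window)])

# Hypothetical training data: aligned windows from both sensors plus labels.
# X = np.vstack([fuse(a, e) for a, e in zip(accel_windows, eeg_windows)])
# clf = RandomForestClassifier().fit(X, activity_labels)
# predicted = clf.predict(fuse(new_accel, new_eeg).reshape(1, -1))
```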
Award ID(s):
1651624
PAR ID:
10062469
Author(s) / Creator(s):
Date Published:
Journal Name:
2016 50th Asilomar Conference on Signals, Systems and Computers
Page Range / eLocation ID:
846 to 850
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In the past decade, both cognitive science and the learning sciences have been significantly altered by an increased attention to the theme of embodiment. Broadly speaking, this theme complements (or pushes back against) the notion of purely abstract, “disembodied” cognition and emphasizes the role of physical interaction with the environment in the course of learning and development. A common, if usually implicit, assumption in this work is that learners’ bodies are more or less constant from one era to another: after all, human senses, limbs, physiology, and the basic parameters of cognition are part of an ongoing evolutionary human endowment. This assumption, while historically reasonable, is likely to need reconsideration in the near future, as a variety of “transhumanist” technologies (enhanced senses, bodies, and internalized interfaces with the outside physical environment) become more prevalent in children’s lives. This paper discusses several foundational issues and questions that are poised to emerge, and to challenge our enduring ideas about children and education, in the foreseeable future. 
  2.
    We review recent theoretical and empirical work on the emergence of relational reasoning, drawing connections among the fields of comparative psychology, developmental psychology, cognitive neuroscience, cognitive science, and machine learning. Relational learning appears to involve multiple systems: a suite of Early Systems that are available to human infants and are shared to some extent with nonhuman animals; and a Late System that emerges in humans only, at approximately age three years. The Late System supports reasoning with explicit role-governed relations, and is closely tied to the functions of a frontoparietal network in the human brain. Recent work in cognitive science and machine learning suggests that humans (and perhaps machines) may acquire abstract relations from nonrelational inputs by means of processes that enable re-representation. 
  3. Howes, Christine; Dobnik, Simon; Breitholtz, Ellen; Chatzikyriakidis, Stergios (Ed.)
    As AI reaches wider adoption, designing systems that are explainable and interpretable becomes a critical necessity. In particular, when it comes to dialogue systems, their reasoning must be transparent and must comply with human intuitions in order for them to be integrated seamlessly into day-to-day collaborative human-machine activities. Here, we describe our ongoing work on a (general purpose) dialogue system equipped with a spatial specialist with explanatory capabilities. We applied this system to a particular task of characterizing spatial configurations of blocks in a simple physical Blocks World (BW) domain using natural locative expressions, as well as generating justifications for the proposed spatial descriptions by indicating the factors that the system used to arrive at a particular conclusion. 
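This abstract does not give implementation details; a toy sketch of how a spatial specialist might map block coordinates to a locative expression while exposing the factors behind the choice (the thresholds, axis conventions, and relation names below are all assumptions) could look like:

```python
# Toy sketch (assumptions, not the authors' system): characterize the spatial
# relation between two blocks from their coordinates and report the factors
# that led to the chosen locative expression.
def describe_relation(a, b, touch_eps=0.05):
    """a, b: (x, y, z) centers of blocks A and B. Returns (description, factors)."""
    dx, dy, dz = b[0] - a[0], b[1] - a[1], b[2] - a[2]
    factors = {"dx": dx, "dy": dy, "dz": dz}
    if dz > touch_eps and abs(dx) < touch_eps and abs(dy) < touch_eps:
        desc = "B is on top of A"
        factors["reason"] = "vertical offset dominates; horizontal offsets are negligible"
    elif abs(dx) >= abs(dy):
        desc = "B is to the right of A" if dx > 0 else "B is to the left of A"
        factors["reason"] = "x-offset is the largest horizontal component"
    else:
        desc = "B is in front of A" if dy > 0 else "B is behind A"
        factors["reason"] = "y-offset is the largest horizontal component"
    return desc, factors

# Example: a block placed directly above another.
print(describe_relation((0, 0, 0), (0.0, 0.01, 0.2)))
```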
  4.
    Mixed-initiative visual analytics systems incorporate well-established design principles that improve users' abilities to solve problems. As these systems consider whether to take initiative towards achieving user goals, many current systems address the potential for cognitive bias in human initiatives statically, relying on fixed initiatives they can take instead of identifying, communicating and addressing the bias as it occurs. We argue that mixed-initiative design principles can and should incorporate cognitive bias mitigation strategies directly through development of mitigation techniques embedded in the system to address cognitive biases in situ. We identify domain experts in machine learning adopting visual analytics techniques and systems that incorporate existing mixed-initiative principles and examine their potential to support bias mitigation strategies. This examination considers the unique perspective these experts bring to visual analytics and is situated in existing user-centered systems that make exemplary use of design principles informed by cognitive theory. We then suggest informed opportunities for domain experts to take initiative toward addressing cognitive biases in light of their existing contributions to the field. Finally, we contribute open questions and research directions for designers seeking to adopt visual analytics techniques that incorporate bias-aware initiatives in future systems. 
  5. Abstract The remarkable successes of convolutional neural networks (CNNs) in modern computer vision are by now well known, and they are increasingly being explored as computational models of the human visual system. In this paper, we ask whether CNNs might also provide a basis for modeling higher‐level cognition, focusing on the core phenomena of similarity and categorization. The most important advance comes from the ability of CNNs to learn high‐dimensional representations of complex naturalistic images, substantially extending the scope of traditional cognitive models that were previously only evaluated with simple artificial stimuli. In all cases, the most successful combinations arise when CNN representations are used with cognitive models that have the capacity to transform them to better fit human behavior. One consequence of these insights is a toolkit for the integration of cognitively motivated constraints back into CNN training paradigms in computer vision and machine learning, and we review cases where this leads to improved performance. A second consequence is a roadmap for how CNNs and cognitive models can be more fully integrated in the future, allowing for flexible end‐to‐end algorithms that can learn representations from data while still retaining the structured behavior characteristic of human cognition. 
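As a rough illustration of the combination this abstract describes (fixed CNN representations plus a learned transform fit to human behavior), the following sketch reweights CNN features with a ridge regression on stand-in data; the feature combination, regression model, and data are assumptions, not the paper's method.

```python
# Minimal sketch (assumptions, not the paper's pipeline): take fixed CNN
# features for a set of images and learn a simple linear transform that
# better predicts human pairwise similarity judgments.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-ins for real data: CNN feature vectors for N images and human
# similarity ratings for every image pair (i, j).
N, D = 50, 512
cnn_features = rng.normal(size=(N, D))
pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
human_sim = rng.uniform(size=len(pairs))

# Represent each pair by the elementwise product of its feature vectors, then
# fit a ridge regression whose weights act as a learned reweighting of CNN
# dimensions -- a simple "cognitive" transform on top of fixed CNN features.
X = np.array([cnn_features[i] * cnn_features[j] for i, j in pairs])
model = Ridge(alpha=1.0).fit(X, human_sim)
predicted_sim = model.predict(X)
```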