Abstract

We demonstrate that the key components of cognitive architectures (declarative and procedural memory) and their key capabilities (learning, memory retrieval, probability judgment, and utility estimation) can be implemented as algebraic operations on vectors and tensors in a high-dimensional space using a distributional semantics model. High-dimensional vector spaces underlie the success of modern machine learning techniques based on deep learning. However, while neural networks have an impressive ability to process data to find patterns, they do not typically model high-level cognition, and it is often unclear how they work. Symbolic cognitive architectures can capture the complexities of high-level cognition and provide human-readable, explainable models, but scale poorly to naturalistic, non-symbolic, or big data. Vector-symbolic architectures, where symbols are represented as vectors, bridge the gap between the two approaches. We posit that cognitive architectures, if implemented in a vector-space model, represent a useful, explanatory model of the internal representations of otherwise opaque neural architectures. Our proposed model, Holographic Declarative Memory (HDM), is a vector-space model based on distributional semantics. HDM accounts for primacy and recency effects in free recall, the fan effect in recognition, probability judgments, and human performance on an iterated decision task. HDM provides a flexible, scalable alternative to symbolic cognitive architectures at a level of description that bridges symbolic, quantum, and neural models of cognition.
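HDM builds on holographic reduced representations, in which an association between two items is bound into a single vector by circular convolution and recovered by circular correlation. The sketch below, in Python with NumPy, illustrates that core algebra; the dimensionality, the two-item lexicon, and the `bind`/`unbind` names are illustrative assumptions, not HDM's exact encoding.

```python
# A minimal sketch of holographic binding (after Plate's Holographic
# Reduced Representations, the algebra underlying models like HDM).
# Dimensionality and item names are illustrative assumptions.
import numpy as np

N = 1024                                   # dimensionality of the space
rng = np.random.default_rng(0)

def random_vector(n=N):
    # i.i.d. Gaussian components with variance 1/n, as in HRRs
    return rng.normal(0.0, 1.0 / np.sqrt(n), n)

def bind(a, b):
    # circular convolution, computed in O(n log n) via the FFT
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def unbind(c, a):
    # circular correlation with a's approximate inverse recovers b from c
    return np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(c)).real

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Encode the association between two items as a single memory trace.
tiger, stripes = random_vector(), random_vector()
trace = bind(tiger, stripes)

# Unbinding yields a noisy copy of `stripes`, cleaned up by comparison
# against the known item vectors.
retrieved = unbind(trace, tiger)
print(cosine(retrieved, stripes))          # high: the correct filler
print(cosine(retrieved, tiger))            # near zero: unrelated item
```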
Holographic Declarative Memory: Using distributional semantics within ACT-R
We explore replacing the declarative memory system of the ACT-R cognitive architecture with a distributional semantics model. ACT-R is a widely used cognitive architecture, but scales poorly to big data applications and lacks a robust model for learning association strengths between stimuli. Distributional semantics models can process millions of data points to infer semantic similarities from language data or to infer product recommendations from patterns of user preferences. We demonstrate that a distributional semantics model can account for the primacy and recency effects in free recall, the fan effect in recognition, and human performance on iterated decisions with initially unknown payoffs. The model we propose provides a flexible, scalable alternative to ACT-R's declarative memory at a level of description that bridges symbolic, quantum, and neural models of cognition.
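As a rough illustration of the kind of substitution explored here, the sketch below treats cosine similarity between a cue vector and stored chunk vectors as activation and gates retrieval with a threshold, loosely analogous to ACT-R's retrieval threshold. The chunk names, noise level, and threshold value are assumptions for illustration, not the proposed model's actual interface.

```python
# A minimal sketch of similarity-based retrieval standing in for ACT-R's
# declarative memory: cosine similarity plays the role of activation and
# a threshold gates retrieval failure. All names and values are
# illustrative assumptions.
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def retrieve(cue, memory, threshold=0.2):
    """Return the stored chunk most similar to the cue, or None if no
    chunk clears the retrieval threshold."""
    scored = {name: cosine(cue, vec) for name, vec in memory.items()}
    best = max(scored, key=scored.get)
    return best if scored[best] >= threshold else None

rng = np.random.default_rng(2)
memory = {name: rng.normal(0, 1, 256) for name in ("3+4=7", "dog-is-animal")}
cue = memory["3+4=7"] + 0.5 * rng.normal(0, 1, 256)   # a noisy probe
print(retrieve(cue, memory))                           # -> "3+4=7"
```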
- PAR ID: 10067552
- Journal Name: Proceedings of the Association for the Advancement of Artificial Intelligence Fall Symposium on A Standard Model of the Mind
- Sponsoring Org: National Science Foundation
More Like this
- Computational models of distributional semantics can analyze a corpus to derive representations of word meanings in terms of each word’s relationship to all other words in the corpus. While these models are sensitive to topic (e.g., tiger and stripes) and synonymy (e.g., soar and fly), the models have limited sensitivity to part of speech (e.g., book and shirt are both nouns). By augmenting a holographic model of semantic memory with additional levels of representation, we present evidence that sensitivity to syntax is supported by exploiting associations between words at varying degrees of separation. We find that sensitivity to associations at three degrees of separation reinforces the relationships between words that share part of speech and improves the ability of the model to construct grammatical sentences. Our model provides evidence that semantics and syntax exist on a continuum and emerge from a unitary cognitive system.
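One common way to make a holographic model sensitive to associations at varying degrees of separation is to permute a neighbour's environment vector by its signed distance from the target word before adding it into the target's memory vector. The sketch below uses that permutation coding; the window size and coding scheme are assumptions for illustration, not necessarily this paper's exact model.

```python
# A sketch of degree-of-separation coding with a random permutation: each
# neighbour's environment vector is permuted by its signed distance from
# the target word before being summed into the target's memory vector.
# Window size and coding scheme are illustrative assumptions.
import numpy as np

N = 1024
rng = np.random.default_rng(1)
perm = rng.permutation(N)          # one fixed permutation of coordinates
inv_perm = np.argsort(perm)        # its inverse, for leftward neighbours

def shift(v, k):
    # apply the permutation |k| times: forward for rightward neighbours
    # (k > 0), inverse for leftward neighbours (k < 0)
    p = perm if k > 0 else inv_perm
    for _ in range(abs(k)):
        v = v[p]
    return v

env = {}                           # static random vector per word form
def env_vec(word):
    if word not in env:
        env[word] = rng.normal(0.0, 1.0 / np.sqrt(N), N)
    return env[word]

def update_memory(memory, sentence, window=3):
    # accumulate neighbours at one, two, and three degrees of separation
    for i, w in enumerate(sentence):
        vec = memory.setdefault(w, np.zeros(N))
        for k in range(-window, window + 1):
            j = i + k
            if k != 0 and 0 <= j < len(sentence):
                vec += shift(env_vec(sentence[j]), k)

memory = {}
update_memory(memory, "the tiger has bold stripes".split())
```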
- Protocols model a multiagent system (MAS) by capturing the communications among its agents. Belief-Desire-Intention (BDI) architectures provide an attractive way to organize an agent in terms of cognitive concepts. Current BDI approaches, however, lack adequate support for engineering protocol-based agents. We describe Argus, an approach that melds recent advances in flexible, declarative communication protocols with BDI architectures. For concreteness, we adopt Jason as an exemplar of the BDI paradigm and show how to support protocol-based reasoning in it. Specifically, Argus contributes (1) a novel architecture and formal operational semantics combining protocols and BDI; (2) a code generation-based programming model that guides the implementation of agents; and (3) integrity checking for incoming and outgoing messages that helps ensure that agents are well-behaved. The Argus conceptual architecture builds quite naturally on top of Jason. Thus, Argus enables building more flexible multiagent systems with a BDI architecture than is currently possible.
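As a schematic illustration of message integrity checking of the kind described here (a hypothetical schema, not Argus's or Jason's actual API), the sketch below validates an outgoing message against a declared protocol, flagging missing parameters and bindings that contradict the interaction history.

```python
# A schematic illustration (hypothetical schema, not Argus's actual API)
# of integrity checking for outgoing messages: every parameter the
# protocol declares must be bound, and previously bound parameters must
# not be contradicted.
PROTOCOL = {                # message name -> parameters it must carry
    "Offer":  ["item", "price"],
    "Accept": ["item", "price"],
}

def check_outgoing(message, history):
    """Return a list of integrity violations for an outgoing message."""
    name, params = message["name"], message["params"]
    errors = []
    for p in PROTOCOL.get(name, []):
        if p not in params:
            errors.append(f"{name}: missing parameter '{p}'")
        elif p in history and history[p] != params[p]:
            errors.append(f"{name}: '{p}' contradicts an earlier binding")
    return errors

history = {"item": "book", "price": 10}
print(check_outgoing({"name": "Accept",
                      "params": {"item": "book", "price": 12}}, history))
# -> ["Accept: 'price' contradicts an earlier binding"]
```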
- Many computational models of reasoning rely on explicit relation representations to account for human cognitive capacities such as analogical reasoning. Relational luring, a phenomenon observed in recognition memory, has been interpreted as evidence that explicit relation representations also impact episodic memory; however, this assumption has not been rigorously assessed by computational modeling. We implemented an established model of recognition memory, the Generalized Context Model (GCM), as a framework for simulating human performance on an old/new recognition task that elicits relational luring. Within this basic theoretical framework, we compared representations based on explicit relations, lexical semantics (i.e., individual word meanings), and a combination of the two. We compared the same alternative representations as predictors of accuracy in solving explicit verbal analogies. In accord with previous work, we found that explicit relation representations are necessary for modeling analogical reasoning. In contrast, preliminary simulations incorporating model parameters optimized to fit human data reproduced relational luring using any of the alternative representations, including one based on non-relational lexical semantics. Further work on model comparisons is needed to examine the contributions of lexical semantics and relations to the luring effect in recognition memory.
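For concreteness, the sketch below shows GCM-style recognition: a probe is judged "old" when its summed similarity to stored exemplars, decaying exponentially with distance, exceeds a criterion. The feature vectors, sensitivity parameter, and criterion are illustrative assumptions, not values fitted to the human data.

```python
# A minimal sketch of the Generalized Context Model applied to old/new
# recognition: familiarity is summed exponential similarity to stored
# exemplars. Sensitivity c and the criterion are illustrative values.
import numpy as np

def familiarity(probe, exemplars, c=1.0):
    # similarity decays exponentially with (Euclidean) distance
    dists = np.linalg.norm(exemplars - probe, axis=1)
    return np.sum(np.exp(-c * dists))

def respond_old(probe, exemplars, criterion=1.5, c=1.0):
    return familiarity(probe, exemplars, c) > criterion

# A lure that lies close to studied items accrues high familiarity,
# producing the kind of false alarm seen in relational luring.
studied = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])
lure = np.array([0.05, 0.05])
print(familiarity(lure, studied), respond_old(lure, studied))
```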