We describe a parser of English implemented with biologically plausible neurons and synapses through the Assembly Calculus, a recently proposed computational framework for cognitive function. We demonstrate that this device correctly parses reasonably nontrivial sentences. While our experiments involve rather simple English sentences, our results suggest that the parser can be extended beyond what we have implemented, in several directions encompassing much of language. For example, we present a simple Russian version of the parser, and discuss how to handle recursion, embedding, and polysemy.
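The core primitive of the Assembly Calculus is projection: an active assembly of neurons in one brain area drives a new assembly into existence in another area through random synapses, winner-take-all inhibition (only the k highest-input neurons fire), and Hebbian plasticity. The sketch below illustrates that generic primitive only; the area sizes, cap size k, and plasticity rate beta are hypothetical values, not the paper's parser.

```python
import random

def project(active_a, n_b, k, weights, beta=0.1):
    """One Assembly Calculus projection step: neurons in area B sum input
    from the active assembly in area A, the k highest-input neurons fire
    (winner-take-all cap), and synapses from active presynaptic neurons
    onto the winners are strengthened (Hebbian plasticity)."""
    inputs = [0.0] * n_b
    for i in active_a:
        for j in range(n_b):
            inputs[j] += weights[i][j]
    # k-cap: only the top-k neurons in area B fire
    winners = sorted(range(n_b), key=lambda j: inputs[j], reverse=True)[:k]
    for i in active_a:
        for j in winners:
            weights[i][j] *= (1.0 + beta)  # strengthen co-active synapses
    return winners

random.seed(0)
n_a, n_b, k = 50, 100, 10
weights = [[random.random() for _ in range(n_b)] for _ in range(n_a)]
assembly_a = list(range(k))                      # active assembly in area A
assembly_b = project(assembly_a, n_b, k, weights)  # new assembly formed in B
```

Repeating the projection with plasticity on makes the winner set converge to a stable assembly, which is how such a parser can bind a word to a syntactic role.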
Towards a Biologically-Plausible Computational Model of Human Language Cognition
The biolinguistics approach aims to construct a coherent and biologically plausible model of human language as a computational system, coded in the brain, that for each individual recursively generates an infinite array of hierarchically structured expressions interpreted at the interfaces for thought and externalization. Language is a recent development in human evolution, is acquired reflexively from impoverished data, and shares common properties across the species in spite of individual diversity. Universal Grammar, as a genuine explanation of language, must meet these apparently contradictory requirements. The Strong Minimalist Thesis (SMT) proposes that all phenomena of language have a principled account rooted in efficient computation, which makes language a perfect solution to interface conditions. LLMs, despite their remarkable performance, cannot achieve the explanatory adequacy necessary for a model of language competence. We implemented a computer model that takes on these challenges, using only language-specific operations, relations, and procedures satisfying the SMT. As a plausible model of human language, the implementation can put cutting-edge syntactic theory within the generative enterprise to the test. Successful derivations obtained through the model signal the feasibility of the minimalist framework, shed light on specific proposals on the processing of structural ambiguity, and help to explore fundamental questions about the nature of the Workspace.
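The single structure-building operation the abstract alludes to is Merge, which combines two syntactic objects in the Workspace into one unordered, hierarchical object. The toy sketch below illustrates only that general idea; the example sentence, the hand-chosen merge order, and the list-based Workspace are illustrative assumptions, not the authors' implementation.

```python
def merge(workspace, x, y):
    """Binary Merge: remove X and Y from the workspace and add the new
    set-like syntactic object {X, Y}, keeping the workspace minimal."""
    workspace = [so for so in workspace if so != x and so != y]
    workspace.append(frozenset([x, y]))  # unordered and unlabeled
    return workspace

# Derivation for "the dog barked", merging bottom-up by hand:
ws = ["the", "dog", "barked"]
ws = merge(ws, "the", "dog")      # {the, dog}
ws = merge(ws, ws[-1], "barked")  # {{the, dog}, barked}
```

Because Merge shrinks the Workspace by one object per application, the derivation terminates with a single hierarchically structured expression, which is then handed to the interfaces.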
- Award ID(s):
- 2219712
- PAR ID:
- 10559011
- Publisher / Repository:
- SCITEPRESS - Science and Technology Publications
- Date Published:
- ISSN:
- 2184-433X; 2184-3589
- ISBN:
- 978-989-758-680-4
- Page Range / eLocation ID:
- 1108 to 1118
- Subject(s) / Keyword(s):
- Strong Minimalist Thesis; Cognitive Modeling; Computational Linguistics; Explainable Artificial Intelligence
- Format(s):
- Medium: X
- Location:
- Rome, Italy
- Sponsoring Org:
- National Science Foundation
More Like this
-
Language-guided human motion synthesis has been a challenging task due to the inherent complexity and diversity of human behaviors. Previous methods face limitations in generalizing to novel actions, often producing unrealistic or incoherent motion sequences. In this paper, we propose ATOM (ATomic mOtion Modeling) to mitigate this problem by decomposing actions into atomic actions and employing a curriculum learning strategy to learn atomic action composition. First, we disentangle complex human motions into a set of atomic actions during learning, and then assemble novel actions from the learned atomic actions, which offers better adaptability to new actions. Moreover, we introduce a curriculum learning training strategy that leverages masked motion modeling with a gradually increasing mask ratio, which facilitates atomic action assembly. This approach mitigates the overfitting problem commonly encountered in previous methods while encouraging the model to learn better motion representations. We demonstrate the effectiveness of ATOM through extensive experiments, including text-to-motion and action-to-motion synthesis tasks, and further illustrate its superiority in synthesizing plausible and coherent text-guided human motion sequences.
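The curriculum described here, masked motion modeling with a gradually increasing mask ratio, can be sketched as a schedule plus a masking step. The linear schedule, the start/end ratios (0.15 to 0.75), and the string mask token below are illustrative assumptions; the abstract does not specify these values.

```python
import random

def mask_ratio(step, total_steps, start=0.15, end=0.75):
    """Curriculum schedule: linearly raise the fraction of masked frames
    as training progresses (easy reconstruction first, harder later)."""
    t = min(step / total_steps, 1.0)
    return start + t * (end - start)

def mask_motion(frames, ratio, rng):
    """Replace a random subset of motion frames with a mask token; the
    model is trained to reconstruct the masked frames."""
    n_mask = round(len(frames) * ratio)
    masked = set(rng.sample(range(len(frames)), n_mask))
    return ["<mask>" if i in masked else f for i, f in enumerate(frames)]

rng = random.Random(0)
frames = [f"pose_{i}" for i in range(20)]
early = mask_motion(frames, mask_ratio(0, 1000), rng)     # few frames hidden
late = mask_motion(frames, mask_ratio(1000, 1000), rng)   # most frames hidden
```

Early in training the model sees mostly complete sequences and learns individual atomic actions; late in training it must assemble long stretches of motion from sparse context, which is what forces compositional behavior.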
-
Computational phenotyping has emerged as a powerful tool for characterizing individual variability across a variety of cognitive domains. An individual's computational phenotype is defined as a set of mechanistically interpretable parameters obtained from fitting computational models to behavioural data. However, the interpretation of these parameters hinges critically on their psychometric properties, which are rarely studied. To identify the sources governing the temporal variability of the computational phenotype, we carried out a 12-week longitudinal study using a battery of seven tasks that measure aspects of human learning, memory, perception and decision making. To examine the influence of state effects, each week, participants provided reports tracking their mood, habits and daily activities. We developed a dynamic computational phenotyping framework, which allowed us to tease apart the time-varying effects of practice and internal states such as affective valence and arousal. Our results show that many phenotype dimensions covary with practice and affective factors, indicating that what appears to be unreliability may reflect previously unmeasured structure. These results support a fundamentally dynamic understanding of cognitive variability within an individual.
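The phenotyping step this abstract rests on, fitting interpretable model parameters to behavioural data, can be illustrated with a minimal example: recovering a learning-rate parameter of a Rescorla-Wagner learner from bandit choices by minimizing negative log-likelihood. The toy data, the softmax temperature, and the grid-search fit are all illustrative assumptions, not the study's models or tasks.

```python
import math

def rw_negloglik(alpha, choices, rewards, beta=3.0):
    """Negative log-likelihood of observed choices under a two-armed
    Rescorla-Wagner learner with learning rate alpha and softmax
    inverse temperature beta."""
    q = [0.0, 0.0]
    nll = 0.0
    for c, r in zip(choices, rewards):
        p = math.exp(beta * q[c]) / (math.exp(beta * q[0]) + math.exp(beta * q[1]))
        nll -= math.log(p)               # penalize unlikely choices
        q[c] += alpha * (r - q[c])       # delta-rule value update
    return nll

# Grid search for the best-fitting learning rate: one phenotype dimension
choices = [0, 0, 1, 0, 0, 1, 0, 0]
rewards = [1, 1, 0, 1, 1, 0, 1, 1]
alphas = [i / 20 for i in range(1, 20)]
best_alpha = min(alphas, key=lambda a: rw_negloglik(a, choices, rewards))
```

A computational phenotype is a vector of such fitted parameters across tasks; the study's contribution is modeling how that vector drifts over weeks with practice and affective state rather than treating it as fixed.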
-
Abstract Backpropagation is widely used to train artificial neural networks, but its relationship to synaptic plasticity in the brain is unknown. Some biological models of backpropagation rely on feedback projections that are symmetric with feedforward connections, but experiments do not corroborate the existence of such symmetric backward connectivity. Random feedback alignment offers an alternative model in which errors are propagated backward through fixed, random backward connections. This approach successfully trains shallow models, but learns slowly and does not perform well with deeper models or online learning. In this study, we develop a meta-learning approach to discover interpretable, biologically plausible plasticity rules that improve online learning performance with fixed random feedback connections. The resulting plasticity rules show improved online training of deep models in the low data regime. Our results highlight the potential of meta-learning to discover effective, interpretable learning rules satisfying biological constraints.
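The baseline mechanism this abstract builds on, random feedback alignment, routes the output error to hidden units through a fixed random matrix instead of the transpose of the forward weights. The sketch below shows that vanilla mechanism on a one-hidden-layer regression; the network size, learning rate, and toy data are illustrative assumptions, and the meta-learned plasticity rules of the study are not reproduced here.

```python
import random

random.seed(1)

def fa_step(x, y, W1, W2, B, lr=0.05):
    """One feedback-alignment step: the output error reaches the hidden
    layer through a fixed random vector B rather than through W2."""
    h = [max(0.0, sum(W1[i][j] * x[j] for j in range(len(x))))
         for i in range(len(W1))]                      # ReLU hidden layer
    y_hat = sum(W2[i] * h[i] for i in range(len(h)))
    err = y_hat - y
    for i in range(len(W1)):
        if h[i] > 0.0:                                 # ReLU gate
            for j in range(len(x)):
                W1[i][j] -= lr * err * B[i] * x[j]     # random feedback, not W2[i]
        W2[i] -= lr * err * h[i]                       # output layer: exact gradient
    return err ** 2

n_hidden = 4
x, y = [1.0, 0.5], 1.0
W1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(n_hidden)]
W2 = [random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
B = [random.uniform(-1.0, 1.0) for _ in range(n_hidden)]  # fixed, never updated
losses = [fa_step(x, y, W1, W2, B) for _ in range(200)]
```

Training works because the forward weights gradually "align" with the fixed feedback weights, so the random error signal becomes a useful descent direction; the slowness of this alignment in deep, online settings is exactly what the paper's meta-learned rules aim to improve.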
-
Asynchronous event-driven computation and communication using spikes allow spiking neural networks (SNNs) to be massively parallel, extremely energy efficient, and highly robust on specialized neuromorphic hardware. However, the lack of a unified robust learning algorithm limits SNNs to shallow networks with low accuracies. Artificial neural networks (ANNs), by contrast, have the backpropagation algorithm, which can use gradient descent to train networks that are locally robust universal function approximators. But the backpropagation algorithm is neither biologically plausible nor friendly to neuromorphic implementation because it requires: 1) separate backward and forward passes, 2) differentiable neurons, 3) high-precision propagated errors, 4) a coherent copy of the feedforward weight matrices in the backward pass, and 5) non-local weight updates. Thus, we propose an approximation of the backpropagation algorithm implemented entirely with spiking neurons and extend it to a local weight update rule resembling the biologically plausible learning rule of spike-timing-dependent plasticity (STDP). This enables error propagation through spiking neurons for a more biologically plausible and neuromorphic-friendly backpropagation algorithm for SNNs. We test the proposed algorithm on various traditional and non-traditional benchmarks with competitive results.
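The local learning rule this abstract appeals to, spike-timing-dependent plasticity, depends only on the relative timing of one presynaptic and one postsynaptic spike, which is why it is local and hardware-friendly. Below is the standard pair-based exponential STDP window as a generic illustration; the amplitudes and time constant are conventional textbook-style values, not the paper's fitted parameters.

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic spike, depress otherwise (spike times in ms).
    The update uses only quantities local to the synapse."""
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)    # pre before post: LTP
    else:
        w -= a_minus * math.exp(dt / tau)    # post before (or with) pre: LTD
    return min(max(w, 0.0), 1.0)             # clip weight to [0, 1]
```

For example, a presynaptic spike 5 ms before the postsynaptic spike strengthens the synapse, while the reverse ordering weakens it; pairings far apart in time barely change the weight because of the exponential decay.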