Title: Evaluation of an Algorithmic‐Level Left‐Corner Parsing Account of Surprisal Effects
Abstract: This article evaluates the predictions of an algorithmic-level distributed associative memory model as it introduces, propagates, and resolves ambiguity, and compares them to the predictions of computational-level parallel parsing models in which ambiguous analyses are maintained separately in discrete distributions. By superposing activation patterns that serve as cues to other activation patterns, the model can maintain multiple syntactically complex analyses in a finite working memory, propagate this ambiguity through multiple intervening words, and then resolve it in a way that produces a measurable predictor proportional to the log conditional probability of the disambiguating word given its context, marginalizing over all remaining analyses. In cases of complex structural ambiguity, these results are consistent with computational-level parallel parsing models that produce this same probability as a predictor, which has been shown to reliably predict human reading times.
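The computational-level predictor the abstract refers to can be made concrete with a small sketch: surprisal of a word, marginalized over a discrete set of analyses. This illustrates the parallel-parsing baseline quantity, not the associative memory model itself; the function names and toy probabilities below are ours.

```python
import math

def surprisal(analyses, word, next_word_prob):
    """Surprisal of `word`, marginalizing over a discrete set of analyses.

    analyses: dict mapping analysis id -> prefix probability P(analysis, context)
    next_word_prob: function (analysis, word) -> P(word | analysis, context)
    Returns (surprisal in bits, updated analyses).
    """
    old_mass = sum(analyses.values())
    updated = {a: p * next_word_prob(a, word) for a, p in analyses.items()}
    updated = {a: p for a, p in updated.items() if p > 0.0}  # prune dead analyses
    new_mass = sum(updated.values())
    return -math.log2(new_mass / old_mass), updated

# Toy example: two competing analyses of an ambiguous prefix, one of which
# is incompatible with the disambiguating word.
analyses = {"main-verb": 0.6, "reduced-relative": 0.4}
probs = {("main-verb", "fell"): 0.0, ("reduced-relative", "fell"): 0.5}
s, analyses = surprisal(analyses, "fell", lambda a, w: probs[(a, w)])
print(f"surprisal: {s:.2f} bits; remaining analyses: {analyses}")
```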
Award ID(s): 1816891
PAR ID: 10557265
Author(s) / Creator(s):
Publisher / Repository: Wiley-Blackwell
Date Published:
Journal Name: Cognitive Science
Volume: 48
Issue: 10
ISSN: 0364-0213
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Kaplan, C (Ed.)
    Abstract: Transcription factors activate gene expression in development, homeostasis, and stress through DNA-binding domains and activation domains. Although excellent computational models exist for predicting DNA-binding domains from protein sequence, models for predicting activation domains from protein sequence have lagged, particularly in metazoans. We recently developed a simple and accurate predictor of acidic activation domains on human transcription factors. Here, we show how the accuracy of this human predictor arises from the clustering of aromatic, leucine, and acidic residues, which together are necessary for acidic activation domain function. When we combine our predictor with the predictions of convolutional neural network (CNN) models trained in yeast, the intersection is more accurate than either model alone, emphasizing that each approach carries orthogonal information. We synthesize these findings into a new set of activation domain predictions on human transcription factors.
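A toy rendering of the residue-clustering idea: a sliding window flags stretches jointly enriched in aromatic, leucine, and acidic residues. The window size, thresholds, and toy sequence are arbitrary illustrative choices, not the authors' published predictor.

```python
AROMATIC, ACIDIC = set("FWY"), set("DE")

def enriched_windows(seq, size=20, min_aromatic=2, min_leucine=2, min_acidic=4):
    """Yield (start, counts) for windows enriched in all three residue classes."""
    for start in range(len(seq) - size + 1):
        window = seq[start:start + size]
        counts = (sum(c in AROMATIC for c in window),
                  window.count("L"),
                  sum(c in ACIDIC for c in window))
        if all(c >= m for c, m in zip(counts, (min_aromatic, min_leucine, min_acidic))):
            yield start, counts

# Toy sequence with an acidic, aromatic- and leucine-rich stretch in the middle.
seq = "MKKQGSSA" + "DEDLFDWEDLLDEYE" * 2 + "GRKKRSLV"
for start, (aro, leu, acid) in enriched_windows(seq):
    print(f"candidate AD at {start}: aromatic={aro}, leucine={leu}, acidic={acid}")
```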
  2. Ensemble perception is a process by which we summarize complex scenes. Despite the importance of ensemble perception to everyday cognition, few computational models provide a formal account of this process. Here we develop and test a model in which ensemble representations reflect the global sum of activation signals across all individual items. We leverage this set of minimal assumptions to formally connect a model of memory for individual items to ensembles. We compare our ensemble model against a set of alternative models in five experiments. Our approach uses performance on a visual memory task for individual items to generate zero-free-parameter predictions of interindividual and intraindividual differences in performance on an ensemble continuous-report task. Our top-down modelling approach formally unifies models of memory for individual items and ensembles and opens an avenue for building and comparing models of distinct memory processes and representations.
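The core assumption, as stated, is that the ensemble representation is the global sum of per-item activation signals. A minimal sketch under our own illustrative assumptions (Gaussian tuning over a feature axis, a peak read-out standing in for parameters fit on the item-memory task):

```python
import numpy as np

feature_axis = np.linspace(0, 180, 361)  # e.g., orientation in degrees

def item_activation(value, width=12.0):
    """Gaussian activation signal centered on one item's feature value."""
    return np.exp(-0.5 * ((feature_axis - value) / width) ** 2)

items = [40.0, 55.0, 70.0, 95.0]                   # feature values in one display
ensemble = sum(item_activation(v) for v in items)  # global sum across all items
estimate = feature_axis[np.argmax(ensemble)]       # read out the ensemble report
print(f"true mean {np.mean(items):.1f}, model estimate {estimate:.1f}")
```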
  3. We consider a parallel computational model, the Parallel Persistent Memory model, comprising P processors, each with a fast local ephemeral memory of limited size, all sharing a large persistent memory. The model allows each processor to fault at any time (with bounded probability) and possibly restart. When a processor faults, all of its state and local ephemeral memory are lost, but the persistent memory remains. This model is motivated by upcoming non-volatile memories that are nearly as fast as existing random access memory, are accessible at the granularity of cache lines, and can survive power outages. It is further motivated by the observation that in large parallel systems, failure of processors and their caches is not unusual. We present several results for the model, using an approach that breaks a computation into capsules, each of which can be safely run multiple times. For the single-processor version we describe how to simulate any program in the RAM, the external memory model, or the ideal cache model with an expected constant factor overhead. For the multiprocessor version we describe how to efficiently implement a work-stealing scheduler within the model such that it handles both soft faults, where a processor restarts, and hard faults, where a processor permanently fails. For any multithreaded fork-join computation that is race free, write-after-read conflict free, and has W work, D depth, and C maximum capsule work in the absence of faults, the scheduler guarantees a time bound on the model of O(W/P_A + (DP/P_A) log_{1/(Cf)} W) in expectation, where P is the maximum number of processors, P_A is the average number, and f ≤ 1/(2C) is the probability that a processor faults between successive persistent memory accesses. Within the model, and using the proposed methods, we develop efficient algorithms for parallel prefix sums, merging, sorting, and matrix multiplication.
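The capsule idea lends itself to a toy single-processor sketch: persistent state survives simulated faults, ephemeral work does not, and each capsule recomputes purely from persisted inputs so re-running it is harmless. The prefix-sum example and fault probability are our illustrative choices, not code from the paper.

```python
import random

# Persistent memory survives faults; everything else is ephemeral.
persistent = {"input": list(range(8)), "done": 0, "prefix": []}

def capsule(step):
    """Recompute one inclusive prefix-sum step purely from persistent state."""
    prev = persistent["prefix"][-1] if persistent["prefix"] else 0
    return prev + persistent["input"][step]

def run(fault_prob=0.3, rng=random.Random(1)):
    while persistent["done"] < len(persistent["input"]):
        step = persistent["done"]
        result = capsule(step)               # ephemeral work, lost on a fault
        if rng.random() < fault_prob:
            continue                         # simulated fault: re-run the capsule
        persistent["prefix"].append(result)  # commit the result, then advance
        persistent["done"] = step + 1

run()
print(persistent["prefix"])  # [0, 1, 3, 6, 10, 15, 21, 28]
```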
  4. Abstract: Battery lifetime and reliability depend on accurate state-of-health (SOH) estimation, a task complicated by complex degradation mechanisms and varying operating conditions. This study presents two physics-informed neural network (PINN) configurations, PINN-parallel and PINN-series, designed to improve SOH prediction by combining an equivalent circuit model (ECM) with a long short-term memory (LSTM) network. PINN-parallel processes input data through parallel ECM and LSTM modules and combines their outputs for SOH estimation. The PINN-series, on the other hand, uses a sequential approach that feeds ECM-derived parameters into the LSTM network to supplement temporal data analysis with physics information. Both models use easily accessible voltage, current, and temperature data, matching realistic battery monitoring constraints. Experimental evaluations show that the PINN-series outperforms both the PINN-parallel and the baseline LSTM model in accuracy and robustness, and adapts well to different input conditions. This demonstrates that simulated battery dynamic states from the ECM enhance the LSTM's ability to capture degradation patterns and improve the model's ability to explain complex battery behavior. However, a trade-off between the robustness and training efficiency of PINNs is identified. The research outcomes show the potential of PINN models (particularly the PINN-series) for advancing battery management systems, although they require considerable computational resources.
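A minimal sketch of the series configuration as described: physics-derived features are appended to the raw voltage/current/temperature sequence before the LSTM. The placeholder ECM feature (a crude series-resistance proxy), layer sizes, and PyTorch framing are our assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class PINNSeries(nn.Module):
    """Series configuration: ECM-derived features feed the LSTM input."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=4, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def ecm_features(self, v, i):
        # Crude series-resistance proxy (dV/dI), a stand-in physics feature.
        rs = (v.diff(dim=1) / i.diff(dim=1).abs().clamp(min=1e-3)).clamp(-5, 5)
        return torch.cat([rs, rs[:, -1:]], dim=1)  # pad back to sequence length

    def forward(self, v, i, t):
        x = torch.stack([v, i, t, self.ecm_features(v, i)], dim=-1)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))  # SOH estimate in [0, 1]

model = PINNSeries()
v, i, t = torch.rand(2, 50), torch.rand(2, 50) + 0.5, torch.rand(2, 50)
print(model(v, i, t).shape)  # torch.Size([2, 1])
```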
  5. Abstract We demonstrate that the key components of cognitive architectures (declarative and procedural memory) and their key capabilities (learning, memory retrieval, probability judgment, and utility estimation) can be implemented as algebraic operations on vectors and tensors in a high‐dimensional space using a distributional semantics model. High‐dimensional vector spaces underlie the success of modern machine learning techniques based on deep learning. However, while neural networks have an impressive ability to process data to find patterns, they do not typically model high‐level cognition, and it is often unclear how they work. Symbolic cognitive architectures can capture the complexities of high‐level cognition and provide human‐readable, explainable models, but scale poorly to naturalistic, non‐symbolic, or big data. Vector‐symbolic architectures, where symbols are represented as vectors, bridge the gap between the two approaches. We posit that cognitive architectures, if implemented in a vector‐space model, represent a useful, explanatory model of the internal representations of otherwise opaque neural architectures. Our proposed model, Holographic Declarative Memory (HDM), is a vector‐space model based on distributional semantics. HDM accounts for primacy and recency effects in free recall, the fan effect in recognition, probability judgments, and human performance on an iterated decision task. HDM provides a flexible, scalable alternative to symbolic cognitive architectures at a level of description that bridges symbolic, quantum, and neural models of cognition. 
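The vector-symbolic operations that models like HDM build on (holographic reduced representations) can be illustrated in a few lines: circular convolution binds two vectors, and convolving with an approximate inverse unbinds them even from a superposed trace. The dimensionality and the cosine check are our choices.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # vector dimensionality (our choice)

def vec():
    return rng.normal(0, 1 / np.sqrt(D), D)  # random HRR-style vector

def bind(a, b):
    # Circular convolution, computed via FFT for O(D log D) binding.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def inverse(a):
    # Approximate inverse (involution): a*[i] = a[-i mod D].
    return np.roll(a[::-1], 1)

role, filler = vec(), vec()
memory = bind(role, filler) + vec()      # superpose an unrelated trace
retrieved = bind(memory, inverse(role))  # approximately recovers the filler
cos = retrieved @ filler / (np.linalg.norm(retrieved) * np.linalg.norm(filler))
print(f"cosine(retrieved, filler) = {cos:.2f}")  # well above chance
```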