Title: Decoding of EEG Signals Shows No Evidence of a Neural Signature for Subitizing in Sequential Numerosity
Abstract: Numerosity perception is largely governed by two mechanisms. The first, the so-called subitizing system, allows one to enumerate a small number of items (up to three or four) without error. The second allows only an approximate estimation of larger numerosities. Here, we investigate the neural bases of the two systems using sequentially presented numerosities. Sequential numerosity (i.e., the number of events presented over time) starts as a subitizable set but may eventually cross into the approximate estimation range, offering a unique opportunity to investigate the neural signature of that transition point, or subitizing boundary. If sequential numerosity is encoded by two distinct perceptual mechanisms (one for subitizing and one for approximate estimation), neural representations of sequentially presented items on either side of the subitizing boundary should be sharply distinguishable. In contrast, if sequential numerosity is encoded by a single perceptual mechanism for all numerosities and subitizing is achieved through an external postperceptual mechanism, no such differences in the neural representations should mark the subitizing boundary. Using the high temporal resolution of EEG combined with a multivariate decoding analysis, we found results consistent with the latter hypothesis: no sharp representational distinctions were observed between items across the subitizing boundary, in contrast with the behavioral pattern of subitizing. The results support a single perceptual mechanism encoding sequential numerosities, with subitizing supported by a postperceptual attentional mechanism operating at a later processing stage.
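
As a rough illustration of the multivariate decoding analysis the abstract refers to, the sketch below trains a classifier on the spatial pattern across EEG channels at each time point and scores it with cross-validation. This is a generic time-resolved decoding recipe, not the paper's actual pipeline; the variable names, the classifier choice, and the synthetic data are all illustrative assumptions.

    # Generic time-resolved EEG decoding sketch (illustrative only; not the
    # authors' pipeline). Requires numpy and scikit-learn.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def decode_over_time(X, y, cv=5):
        """X: (n_trials, n_channels, n_times) epoched EEG; y: condition labels.
        Returns cross-validated decoding accuracy at each time point."""
        n_trials, n_channels, n_times = X.shape
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        scores = np.empty(n_times)
        for t in range(n_times):
            # Classify the spatial pattern across channels at time point t.
            scores[t] = cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
        return scores

    # Synthetic example: 100 trials, 64 channels, 50 time points.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 64, 50))
    y = rng.integers(0, 2, size=100)  # e.g., item below vs. above the boundary
    accuracy_by_time = decode_over_time(X, y)

Under the single-mechanism account described above, a contrast spanning the subitizing boundary would show no sharp jump in such a decoding time course.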
Award ID(s): 1654089
NSF-PAR ID: 10312093
Journal Name: Journal of Cognitive Neuroscience
ISSN: 1530-8898
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Number sense, the ability to decipher quantity, forms the foundation for mathematical cognition. How number sense emerges with learning is, however, not known. Here we use a biologically inspired neural architecture comprising cortical layers V1, V2, V3, and intraparietal sulcus (IPS) to investigate how neural representations change with numerosity training. Learning dramatically reorganized neuronal tuning properties at both the single-unit and population levels, resulting in the emergence of sharply tuned representations of numerosity in the IPS layer. Ablation analysis revealed that spontaneous number neurons observed prior to learning were not critical to the formation of number representations post-learning. Crucially, multidimensional scaling of population responses revealed the emergence of absolute and relative magnitude representations of quantity, including mid-point anchoring. These learnt representations may underlie changes from logarithmic to cyclic and linear mental number lines that are characteristic of number sense development in humans. Our findings elucidate mechanisms by which learning builds novel representations supporting number sense.
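
    As a toy illustration of the population-level analysis mentioned above (multidimensional scaling of population responses), the sketch below simulates units with Gaussian tuning on a log-numerosity axis and embeds the population response vectors in two dimensions. The tuning model and every parameter are assumptions for illustration; this is not the authors' V1-V2-V3-IPS network.

        # Toy population of numerosity-tuned units + MDS embedding (illustrative).
        import numpy as np
        from sklearn.manifold import MDS

        rng = np.random.default_rng(1)
        numerosities = np.arange(1, 31)                    # stimuli: 1..30 items
        preferred = rng.uniform(0, np.log(30), size=200)   # 200 units, log-spaced
        sigma = 0.4                                        # tuning width (log units)

        # Response matrix: one row per numerosity, one column per unit,
        # using Gaussian tuning on a logarithmic numerosity axis.
        responses = np.exp(-(np.log(numerosities)[:, None] - preferred[None, :]) ** 2
                           / (2 * sigma ** 2))

        # Embed the population responses in 2-D; with log-Gaussian tuning the
        # numerosities trace an ordered, compressively spaced 1-D manifold.
        embedding = MDS(n_components=2, random_state=0).fit_transform(responses)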
  2. Numerosity estimation performance (e.g., how accurate, consistent, or proportionally spaced (linear) numerosity-numeral mappings are) has previously been associated with math competence. However, the specific mechanisms that underlie such a relation are unknown. One possible mechanism is the mapping process between numerical sets and symbolic numbers (e.g., Arabic numerals). The current study examined two hypothesized mechanisms of numerosity-numeral mappings (item-based "associative" and holistic "structural" mapping) and their roles in the estimation-and-math relation. Specifically, mappings for small numbers (e.g., 1–10) are thought to be associative and resistant to calibration (e.g., feedback on the accuracy of estimates), whereas holistic "structural" mapping for larger numbers (e.g., beyond 10) may be supported by flexibly aligning a numeral "response grid" (akin to a ruler) to an analog "mental number line" upon calibration. In 57 adults, we used pre- and post-calibration estimates to measure the range of continuous associative mappings among small numbers (e.g., a base range of associative mappings from 1 to 10), and obtained measures of math competence and delayed multiple-choice strategy reports. Consistent with previous research, uncalibrated estimation performance correlated with calculation competence, controlling for reading fluency and working memory. However, having a higher base range of associative mappings was not related to estimation performance or any math competence measures. Critically, discontinuity in calibration effects was typical at the individual level, which calls into question the nature of "holistic structural mapping". A parsimonious explanation that integrates previous and current findings is that estimation performance is likely optimized by dynamically constructing numerosity-numeral mappings through the use of multiple strategies from trial to trial.
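
    For concreteness, here is a hedged sketch of how the three estimation-performance measures named above (accuracy, consistency, and linearity of numerosity-numeral mappings) are commonly computed from trial-level estimates; the formulas are conventional choices in this literature, not necessarily the authors' exact definitions.

        # Common numerosity-estimation performance measures (illustrative formulas).
        import numpy as np

        def estimation_measures(true_n, estimates):
            """true_n, estimates: 1-D arrays of presented numerosities and responses."""
            true_n = np.asarray(true_n, dtype=float)
            estimates = np.asarray(estimates, dtype=float)
            # Accuracy: mean absolute percent error of the estimates.
            accuracy_err = np.mean(np.abs(estimates - true_n) / true_n)
            # Consistency: coefficient of variation of repeated estimates,
            # averaged over numerosities (requires repeats per numerosity).
            cvs = [np.std(estimates[true_n == n]) / np.mean(estimates[true_n == n])
                   for n in np.unique(true_n)]
            consistency = float(np.mean(cvs))
            # Linearity: R^2 of a linear fit of estimates onto true numerosity.
            slope, intercept = np.polyfit(true_n, estimates, 1)
            residuals = estimates - (slope * true_n + intercept)
            r2 = 1 - residuals.var() / estimates.var()
            return accuracy_err, consistency, r2

        # Example: each numerosity 5..50 shown four times, noisy underestimates.
        true_n = np.repeat(np.arange(5, 55, 5), 4)
        est = true_n * 0.9 + np.random.default_rng(3).normal(0, 2, true_n.size)
        acc_err, cons, lin_r2 = estimation_measures(true_n, est)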
  3. Tworzydlo, W. (Ed.)
    Sex determination and sexual development are highly diverse and controlled by mechanisms that are extremely labile. While dioecy (separate male and female functions) is the norm for most animals, hermaphroditism (both male and female functions within a single body) is phylogenetically widespread. Much of our current understanding of sexual development comes from a small number of model systems, limiting our ability to make broader conclusions about the evolution of sexual diversity. We present the calyptraeid gastropods as a model for the study of the evolution of sex determination in a sequentially hermaphroditic system. Calyptraeid gastropods, a group of sedentary, filter-feeding marine snails, are sequential hermaphrodites that change sex from male to female during their life span (protandry). This transition includes resorption of the penis and the elaboration of female genitalia, in addition to shifting from production of spermatocytes to oocytes. This transition is typically under environmental control and frequently mediated by social interactions. Males in contact with females delay sex change to transition at larger sizes, while isolated males transition more rapidly and at smaller sizes. This phenomenon has been known for over a century; however, the mechanisms that control the switch from male to female are poorly understood. We review here our current understanding of sexual development and sex determination in the calyptraeid gastropods and other molluscs, highlighting our current understanding of factors implicated in the timing of sex change and the potential mechanisms. We also consider the embryonic origins and earliest expression of the germ line and the effects of environmental contaminants on sexual development. 
  4. Many species of animals exhibit an intuitive sense of number, suggesting a fundamental neural mechanism for representing numerosity in a visual scene. Recent empirical studies demonstrate that early feedforward visual responses are sensitive to numerosity of a dot array but substantially less so to continuous dimensions orthogonal to numerosity, such as size and spacing of the dots. However, the mechanisms that extract numerosity are unknown. Here, we identified the core neurocomputational principles underlying these effects: (1) center-surround contrast filters, (2) operating at different spatial scales, with (3) divisive normalization across network units. In an untrained computational model, these principles eliminated sensitivity to size and spacing, making numerosity the main determinant of the neuronal response magnitude. Moreover, a model implementation of these principles explained both well-known and relatively novel illusions of numerosity perception across space and time. This supports the conclusion that the neural structures and feedforward processes that encode numerosity naturally produce visual illusions of numerosity. Taken together, these results identify a set of neurocomputational properties that gives rise to the ubiquity of the number sense in the animal kingdom.
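
    The three principles lend themselves to a compact sketch: difference-of-Gaussians (center-surround) filtering at several spatial scales, followed by divisive normalization of the resulting responses. The filter sizes, constants, and toy stimulus below are assumptions for illustration, not the parameters of the published model.

        # Center-surround filtering at multiple scales + divisive normalization
        # (illustrative parameters). Requires numpy and scipy.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dog_responses(image, scales=(1, 2, 4, 8), surround_ratio=2.0):
            """Rectified difference-of-Gaussians responses at several scales."""
            maps = []
            for s in scales:
                center = gaussian_filter(image, sigma=s)
                surround = gaussian_filter(image, sigma=surround_ratio * s)
                maps.append(np.maximum(center - surround, 0.0))  # half-wave rectify
            return np.stack(maps)  # shape: (n_scales, height, width)

        def divisive_normalization(maps, sigma_sq=0.01):
            # Each unit's squared response divided by pooled population activity.
            pooled = sigma_sq + np.mean(maps ** 2)
            return maps ** 2 / pooled

        # Toy dot-array stimulus: 12 square "dots" on a 128x128 canvas.
        rng = np.random.default_rng(2)
        img = np.zeros((128, 128))
        for _ in range(12):
            y, x = rng.integers(8, 120, size=2)
            img[y - 2:y + 3, x - 2:x + 3] = 1.0
        response_magnitude = divisive_normalization(dog_responses(img)).sum()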
  5. Obeid, I.; Selesnick, I.; Picone, J. (Eds.)
    The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors, ranging from low-end consumer-grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems and routinely employ multiple GPUs to accelerate the training process.

    Reproducible results are essential to machine learning research. Reproducibility in this context means the ability to replicate an existing experiment: performance metrics such as error rates should be identical, and floating-point calculations should match closely. Three ways we typically expect an experiment to be replicable are: (1) the same job run on the same processor should produce the same results each time it is run; (2) a job run on a CPU and a GPU should produce identical results; (3) a job should produce comparable results if the data is presented in a different order. System optimization requires an ability to directly compare error rates for algorithms evaluated under comparable operating conditions. However, it is difficult to exactly reproduce the results for large, complex deep learning systems that often require more than a trillion calculations per experiment [5]. This is a fairly well-known issue and one we will explore in this poster.

    Researchers must be able to replicate results on a specific data set to establish the integrity of an implementation. They can then use that implementation as a baseline for comparison purposes. A lack of reproducibility makes it very difficult to debug algorithms and validate changes to the system. Equally important, since many results in deep learning research depend on the order in which the system is exposed to the data, the specific processors used, and even the order in which those processors are accessed, it becomes challenging to compare two algorithms, since each system must be individually optimized for a specific data set or processor. This is extremely time-consuming for algorithm research in which a single run often taxes a computing environment to its limits. Well-known techniques such as cross-validation [5,6] can be used to mitigate these effects, but this is also computationally expensive.

    These issues are further compounded by the fact that most deep learning algorithms are susceptible to the way computational noise propagates through the system. GPUs are particularly notorious for this because, in a clustered environment, it becomes more difficult to control which processors are used at various points in time. Another equally frustrating issue is that upgrades to the deep learning package, such as the transition from TensorFlow v1.9 to v1.13, can also result in large fluctuations in error rates when re-running the same experiment. Since TensorFlow is constantly updating functions to support GPU use, maintaining a historical archive of experimental results that can be used to calibrate algorithm research is quite a challenge. This makes it very difficult to optimize the system or select the best configurations.
    The overall impact of the issues described above is significant, as error rates can fluctuate by as much as 25% due to these types of computational issues. Cross-validation is one technique used to mitigate this, but it is expensive since it requires multiple runs over the data, which further taxes a computing infrastructure already running at maximum capacity. GPUs are preferred when training a large network since these systems train at least two orders of magnitude faster than CPUs [7]. Large-scale experiments are simply not feasible without using GPUs. However, there is a tradeoff to gain this performance. Since all our GPUs use the NVIDIA CUDA® Deep Neural Network library (cuDNN) [8], a GPU-accelerated library of primitives for deep neural networks, an element of randomness enters the experiment. When a GPU is used to train a network in TensorFlow, it automatically searches for a cuDNN implementation. NVIDIA's cuDNN implementation provides algorithms that increase performance and help the model train more quickly, but they are non-deterministic [9,10]. Since our networks have many complex layers, there is no easy way to avoid this randomness. Instead of comparing each epoch, we compare the average performance across the experiment, because it gives us an indication of how our model is performing per experiment and whether the changes we make are effective.

    In this poster, we will discuss a variety of issues related to reproducibility and introduce ways we mitigate these effects. For example, TensorFlow uses a random number generator (RNG) that is not seeded by default. TensorFlow uses the RNG to determine the initialization point and how certain functions execute. The solution is to seed all the necessary components before training the model. This forces TensorFlow to use the same initialization point and fixes how certain layers behave (e.g., dropout layers). However, seeding all the RNGs will not guarantee a controlled experiment. Other variables can affect the outcome of the experiment, such as training using GPUs, allowing multi-threading on CPUs, using certain layers, etc.

    To mitigate our problems with reproducibility, we first make sure that the data is processed in the same order during training. Therefore, we save the data from the last experiment to make sure the newer experiment follows the same order. If we allow the data to be shuffled, it can affect the performance due to how the model was exposed to the data. We also specify the float data type to be 32-bit, since Python defaults to 64-bit. We try to avoid 64-bit precision because the numbers produced by a GPU can vary significantly depending on the GPU architecture [11-13]. Controlling precision somewhat reduces differences due to computational noise, even though technically it increases the amount of computational noise. We are currently developing more advanced techniques for preserving the efficiency of our training process while also maintaining the ability to reproduce models. In our poster presentation, we will demonstrate these issues using some novel visualization tools, present several examples of the extent to which these issues influence research results on electroencephalography (EEG) and digital pathology experiments, and introduce new ways to manage such computational issues.
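
    As one concrete illustration of the mitigations discussed above (seeding every RNG, pinning 32-bit precision, and fixing the data order), the sketch below uses the TensorFlow 2.x API. The poster's experiments targeted TensorFlow 1.x, where the corresponding calls differ, so treat this as an assumption-laden sketch rather than the authors' exact setup; it reduces, but does not eliminate, run-to-run variation on GPUs with non-deterministic cuDNN kernels.

        # Reproducibility controls, TensorFlow 2.x style (illustrative sketch).
        import os
        import random
        import numpy as np
        import tensorflow as tf

        SEED = 1337
        # PYTHONHASHSEED is only fully effective if set before the interpreter starts.
        os.environ['PYTHONHASHSEED'] = str(SEED)
        random.seed(SEED)         # Python's built-in RNG
        np.random.seed(SEED)      # NumPy RNG (weight init helpers, shuffling, etc.)
        tf.random.set_seed(SEED)  # TensorFlow graph-level and op-level seeds

        # Keep all floating-point computation in 32-bit, as discussed above.
        tf.keras.backend.set_floatx('float32')

        # Fix the data order: shuffle once with a seeded RNG and save/reuse the
        # resulting index order instead of reshuffling on every run.
        num_examples = 10_000  # placeholder for the training-set size
        indices = np.arange(num_examples)
        np.random.default_rng(SEED).shuffle(indices)
        np.save('train_order.npy', indices)  # later runs: np.load('train_order.npy')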