Search results: All records where Award ID contains 2117429

  1. Abstract: Arousal state is regulated by subcortical neuromodulatory nuclei, such as locus coeruleus, which send wide-reaching projections to cortex. Whether higher-order cortical regions have the capacity to recruit neuromodulatory systems to aid cognition is unclear. Here, we hypothesized that select cortical regions activate the arousal system, which, in turn, modulates large-scale brain activity, creating a functional circuit predicting cognitive ability. We utilized the Human Connectome Project 7T functional magnetic resonance imaging dataset (n = 149), acquired at rest with simultaneous eye tracking, along with extensive cognitive assessment for each subject. First, we discovered select frontoparietal cortical regions that drive large-scale spontaneous brain activity specifically via engaging the arousal system. Second, we show that the functionality of the arousal circuit driven by bilateral posterior cingulate cortex (associated with the default mode network) predicts subjects’ cognitive abilities. This suggests that a cortical region that is typically associated with self-referential processing supports cognition by regulating the arousal system.
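As a rough illustration of the kind of analysis described above (not the authors' actual pipeline), the sketch below derives a per-subject "circuit functionality" measure by correlating a pupil-linked arousal trace from the simultaneous eye tracking with a posterior cingulate BOLD time series, then predicts cognitive scores with cross-validated regression. The variable names, the correlation-based circuit measure, and the ridge model are all assumptions, and the data are random stand-ins.

```python
# Hypothetical sketch: relate pupil-linked arousal to regional BOLD and
# predict cognition from a per-subject circuit measure. Illustrative only;
# variable names, preprocessing, and the ridge model are assumptions.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated stand-ins for real data (149 subjects, 1000 TRs per subject).
n_subjects, n_trs = 149, 1000
pupil = rng.standard_normal((n_subjects, n_trs))       # arousal index per TR
pcc_bold = rng.standard_normal((n_subjects, n_trs))    # posterior cingulate BOLD
cognition = rng.standard_normal(n_subjects)            # composite cognitive score

# Per-subject "circuit functionality": correlation between PCC activity
# and the pupil-indexed arousal signal.
circuit_strength = np.array([
    np.corrcoef(pcc_bold[s], pupil[s])[0, 1] for s in range(n_subjects)
])

# Cross-validated prediction of cognitive ability from the circuit measure.
X = circuit_strength.reshape(-1, 1)
model = RidgeCV(alphas=np.logspace(-3, 3, 13))
scores = cross_val_score(model, X, cognition, cv=10, scoring="r2")
print(f"Mean cross-validated R^2: {scores.mean():.3f}")
```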
  2. Abstract: The theorems of density functional theory (DFT) establish bijective maps between the local external potential of a many-body system and its electron density, wavefunction and, therefore, one-particle reduced density matrix. Building on this foundation, we show that machine learning models based on the one-electron reduced density matrix can be used to generate surrogate electronic structure methods. We generate surrogates of local and hybrid DFT, Hartree-Fock and full configuration interaction theories for systems ranging from small molecules such as water to more complex compounds like benzene and propanol. The surrogate models use the one-electron reduced density matrix as the central quantity to be learned. From the predicted density matrices, we show that either standard quantum chemistry or a second machine-learning model can be used to compute molecular observables, energies, and atomic forces. The surrogate models can generate essentially anything that a standard electronic structure method can, ranging from band gaps and Kohn-Sham orbitals to energy-conserving ab-initio molecular dynamics simulations and infrared spectra, which account for anharmonicity and thermal effects, without the need to employ computationally expensive algorithms such as self-consistent field theory. The algorithms are packaged in an efficient and easy-to-use Python code, QMLearn, accessible on popular platforms.
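The workflow sketched below illustrates the general idea of a 1-RDM-centered surrogate: one model maps a molecular descriptor to the one-electron reduced density matrix, and a second model maps the predicted density matrix to an observable such as the total energy. This is not the QMLearn API; the kernel-ridge models, the descriptor, and the random stand-in data are assumptions for illustration only.

```python
# Generic sketch of a 1-RDM-based surrogate model. Illustrative only: this is
# not the QMLearn API, and the descriptor, model choices, and data shapes are
# assumptions standing in for real training data.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Stand-ins for real training data: a geometry descriptor per molecule, the
# reference one-electron reduced density matrix (flattened), and the total
# energy from a chosen method (DFT, Hartree-Fock, or full CI).
n_samples, n_features, n_basis = 200, 30, 24
X_train = rng.standard_normal((n_samples, n_features))         # geometry descriptors
D_train = rng.standard_normal((n_samples, n_basis * n_basis))  # flattened 1-RDMs
E_train = rng.standard_normal(n_samples)                       # total energies

# Model 1: geometry descriptor -> one-electron reduced density matrix.
rdm_model = KernelRidge(kernel="rbf", alpha=1e-6, gamma=1e-2)
rdm_model.fit(X_train, D_train)

# Model 2: predicted 1-RDM -> observable (here, the total energy), playing the
# role of the "second machine-learning model" mentioned in the abstract.
energy_model = KernelRidge(kernel="rbf", alpha=1e-6, gamma=1e-2)
energy_model.fit(D_train, E_train)

# Inference: predict the density matrix for new geometries, then the energy,
# with no self-consistent-field iterations.
X_new = rng.standard_normal((5, n_features))
D_pred = rdm_model.predict(X_new)
E_pred = energy_model.predict(D_pred)
print(E_pred)
```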
  3. Kay, Kendrick (Ed.)
    A central goal of neuroscience is to understand how function-relevant brain activations are generated. Here we test the hypothesis that function-relevant brain activations are generated primarily by distributed network flows. We focused on visual processing in human cortex, given the long-standing literature supporting the functional relevance of brain activations in visual cortex regions exhibiting visual category selectivity. We began by using fMRI data from N = 352 human participants to identify category-specific responses in visual cortex for images of faces, places, body parts, and tools. We then systematically tested the hypothesis that distributed network flows can generate these localized visual category selective responses. This was accomplished using a recently developed approach for simulating – in a highly empirically constrained manner – the generation of task-evoked brain activations by modeling activity flowing over intrinsic brain connections. We next tested refinements to our hypothesis, focusing on how stimulus-driven network interactions initialized in V1 generate downstream visual category selectivity. We found evidence that network flows directly from V1 were sufficient for generating visual category selectivity, but that additional, globally distributed (whole-cortex) network flows increased category selectivity further. Using null network architectures we also found that each region’s unique intrinsic “connectivity fingerprint” was key to the generation of category selectivity. These results generalized across regions associated with all four visual categories tested (bodies, faces, places, and tools), and provide evidence that the human brain’s intrinsic network organization plays a prominent role in the generation of functionally relevant, localized responses. 
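The activity flow mapping described above can be written compactly: a held-out region's task activation is predicted as the sum of all other regions' activations weighted by their intrinsic (resting-state) connectivity to that region. A minimal sketch follows; the correlation-based connectivity estimate and the toy data are simplifications of the paper's more carefully constrained procedure.

```python
# Minimal activity flow mapping sketch: predict each region's task activation
# from all other regions' activations weighted by intrinsic connectivity.
# Illustrative simplification; the published procedure estimates connectivity
# more carefully (e.g., regression-based functional connectivity).
import numpy as np

def activity_flow(task_activations, intrinsic_fc):
    """Predict held-out activations via connectivity-weighted flow.

    task_activations : (n_regions,) actual task-evoked activations
    intrinsic_fc     : (n_regions, n_regions) resting-state connectivity
    """
    n_regions = task_activations.shape[0]
    predicted = np.zeros(n_regions)
    for target in range(n_regions):
        sources = np.arange(n_regions) != target  # exclude the held-out region
        predicted[target] = task_activations[sources] @ intrinsic_fc[sources, target]
    return predicted

# Toy example with random data standing in for fMRI estimates.
rng = np.random.default_rng(0)
n_regions = 360
rest_ts = rng.standard_normal((n_regions, 500))   # resting-state time series
fc = np.corrcoef(rest_ts)                         # intrinsic connectivity
np.fill_diagonal(fc, 0.0)
actual = rng.standard_normal(n_regions)           # task activation pattern
predicted = activity_flow(actual, fc)
print("prediction-to-actual r:", np.corrcoef(predicted, actual)[0, 1])
```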
  4. Representational geometry and connectivity-based studies offer complementary insights into neural information processing, but it is unclear how representations and networks interact to generate neural information. Using a multi-task fMRI dataset, we investigate the role of intrinsic connectivity in shaping diverse representational geometries across the human cortex. Activity flow modeling, which generates neural activity based on connectivity-weighted propagation from other regions, successfully recreated similarity structure and a compression-then-expansion pattern of task representation dimensionality. We introduce a novel measure, convergence, quantifying the degree to which connectivity converges onto target regions. As hypothesized, convergence corresponded with compression of representations and helped explain the observed compression-then-expansion pattern of task representation dimensionality along the cortical hierarchy. These results underscore the generative role of intrinsic connectivity in sculpting representational geometries and suggest that structured connectivity properties, such as convergence, contribute to representational transformations. By bridging representational geometry and connectivity-based frameworks, this work offers a more unified understanding of neural information processing and the computational relevance of brain architecture. 
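The "compression-then-expansion" result concerns how the dimensionality of task representations changes along the cortical hierarchy. One common way to quantify representational dimensionality is the participation ratio of the covariance eigenvalues; the sketch below uses that measure purely for illustration, with no claim that it matches the paper's exact dimensionality or convergence definitions.

```python
# Participation-ratio sketch for representational dimensionality.
# This is one common dimensionality estimate; the paper's exact measure
# (and its "convergence" metric) may be defined differently.
import numpy as np

def participation_ratio(activation_patterns):
    """Effective dimensionality of a (n_conditions, n_vertices) activation matrix.

    PR = (sum_i lambda_i)^2 / sum_i lambda_i^2, where lambda_i are the
    eigenvalues of the covariance across conditions.
    """
    centered = activation_patterns - activation_patterns.mean(axis=0)
    cov = centered @ centered.T / centered.shape[1]
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

# Toy example: low-rank (compressed) vs. full-rank (expanded) representations.
rng = np.random.default_rng(0)
n_conditions, n_vertices = 24, 500
low_rank = rng.standard_normal((n_conditions, 3)) @ rng.standard_normal((3, n_vertices))
full_rank = rng.standard_normal((n_conditions, n_vertices))
print("compressed region PR:", round(participation_ratio(low_rank), 2))
print("expanded region PR:  ", round(participation_ratio(full_rank), 2))
```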
  5. Our ability to overcome habitual responses in favor of goal-driven novel responses depends on frontoparietal cognitive control networks (CCNs). Recent and ongoing work is revealing the brain network and information processes that allow CCNs to generate cognitive flexibility. First, working memory processes necessary for flexible maintenance and manipulation of goal-relevant representations were recently found to depend on short-term network plasticity (in contrast to persistent activity) within CCN regions. Second, compositional (i.e. abstract and reusable) rule representations maintained within CCNs have been found to reroute network activity flows from stimulus to response, enabling flexible behavior. Together, these findings suggest cognitive flexibility is enhanced by CCN-coordinated network mechanisms, utilizing compositional reuse of neural representations and network flows to flexibly accomplish task goals. 
  6. For an electronic system, given a mean field method and a distribution of orbital occupation numbers that are close to the natural occupations of the correlated system, we provide formal evidence and computational support to the hypothesis that the entropy (or more precisely −σS, where σ is a parameter and S is the entropy) of such a distribution is a good approximation to the correlation energy. Underpinning the formal evidence are mild assumptions: the correlation energy is strictly a functional of the occupation numbers, and the occupation numbers derive from an invertible distribution. Computational support centers around employing different mean field methods and occupation number distributions (Fermi–Dirac, Gaussian, and linear), for which our claims are verified for a series of pilot calculations involving bond breaking and chemical reactions. This work establishes a formal footing for those methods employing entropy as a measure of electronic correlation energy (e.g., i-DMFT [Wang and Baerends, Phys. Rev. Lett. 128, 013001 (2022)] and TAO-DFT [J.-D. Chai, J. Chem. Phys. 136, 154104 (2012)]) and sets the stage for the widespread use of entropy functionals for approximating the (static) electronic correlation. 
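The central quantity here is simple to evaluate: for fractional spin-orbital occupation numbers n_i in [0, 1], the Fermi-Dirac-type entropy is S = -sum_i [n_i ln n_i + (1 - n_i) ln(1 - n_i)], and the correlation energy is approximated as -sigma*S with a fitted parameter sigma. A minimal numerical sketch follows; the occupation numbers and the value of sigma below are made up for illustration.

```python
# Minimal sketch of the entropy-as-correlation-energy idea: E_corr ~ -sigma * S,
# with S the entropy of fractional (spin-orbital) occupation numbers.
# The occupations and sigma below are illustrative, not fitted values.
import numpy as np

def occupation_entropy(occupations, eps=1e-12):
    """Fermi-Dirac-type entropy of occupation numbers n_i in [0, 1]."""
    n = np.clip(np.asarray(occupations, dtype=float), eps, 1.0 - eps)
    return -np.sum(n * np.log(n) + (1.0 - n) * np.log(1.0 - n))

# Example: nearly integer occupations (weak correlation) vs. strongly
# fractional occupations (e.g., a stretched bond with static correlation).
weakly_correlated = [0.999, 0.998, 0.995, 0.005, 0.002, 0.001]
strongly_correlated = [0.95, 0.80, 0.55, 0.45, 0.20, 0.05]

sigma = 0.02  # hypothetical parameter (hartree), for illustration only
for label, occ in [("weak", weakly_correlated), ("strong", strongly_correlated)]:
    S = occupation_entropy(occ)
    print(f"{label:>6}: S = {S:.3f}, E_corr estimate = {-sigma * S:.4f} Ha")
```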