Title: Temporal dynamics of the multi-omic response to endurance exercise training
Abstract: Regular exercise promotes whole-body health and prevents disease, but the underlying molecular mechanisms are incompletely understood¹⁻³. Here, the Molecular Transducers of Physical Activity Consortium⁴ profiled the temporal transcriptome, proteome, metabolome, lipidome, phosphoproteome, acetylproteome, ubiquitylproteome, epigenome and immunome in whole blood, plasma and 18 solid tissues in male and female Rattus norvegicus over eight weeks of endurance exercise training. The resulting data compendium encompasses 9,466 assays across 19 tissues, 25 molecular platforms and 4 training time points. Thousands of shared and tissue-specific molecular alterations were identified, with sex differences found in multiple tissues. Temporal multi-omic and multi-tissue analyses revealed expansive biological insights into the adaptive responses to endurance training, including widespread regulation of immune, metabolic, stress-response and mitochondrial pathways. Many changes were relevant to human health, including non-alcoholic fatty liver disease, inflammatory bowel disease, cardiovascular health, and tissue injury and recovery. The data and analyses presented in this study will serve as valuable resources for understanding and exploring the multi-tissue molecular effects of endurance training and are provided in a public repository (https://motrpac-data.org/).
Award ID(s):
2238125
PAR ID:
10627947
Publisher / Repository:
Nature
Date Published:
Journal Name:
Nature
Volume:
629
Issue:
8010
ISSN:
0028-0836
Page Range / eLocation ID:
174 to 183
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Elevated seismic noise for moderate-size earthquakes recorded at teleseismic distances has limited our ability to see their complexity. We develop a machine-learning-based algorithm to separate noise and earthquake signals that overlap in frequency. The multi-task encoder-decoder model is built around a kernel pre-trained on local (i.e., short-distance) earthquake data (Yin et al., 2022, https://doi.org/10.1093/gji/ggac290) and is modified by continued learning with high-quality teleseismic data. We denoise teleseismic P waves of deep Mw 5.0+ earthquakes and use the clean P waves to estimate the source characteristics of these understudied earthquakes with reduced uncertainties. We find a scaling of moment and duration of M0 ≃ τ⁴, and a resulting strong scaling of stress drop and radiated energy with magnitude. The median radiation efficiency is 5%, a low value compared to crustal earthquakes. Overall, we show that deep earthquakes have weak rupture directivity and few subevents, suggesting that a simple model of a circular crack with radial rupture propagation is appropriate. When accounting for their respective scaling with earthquake size, we find no systematic depth variations of duration, stress drop, or radiated energy within the 100–700 km depth range. Our study supports the findings of Poli and Prieto (2016, https://doi.org/10.1002/2016jb013521) with twice as many earthquakes investigated and with earthquakes of lower magnitudes.
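The moment-duration scaling reported above can be recovered from per-event estimates by a log-log least-squares fit. A minimal sketch, using synthetic data built to follow M0 ≃ τ⁴ (the constant 1e16 and the scatter level are illustrative assumptions, not values from the study):

```python
# Sketch: fit the exponent b in log10(M0) = a + b*log10(tau) from a
# synthetic catalog constructed with M0 proportional to tau^4 plus scatter.
import math
import random

random.seed(0)

# Hypothetical durations tau (s) and moments M0 with lognormal scatter.
durations = [0.5 + 0.1 * i for i in range(50)]
moments = [1e16 * (t ** 4) * math.exp(random.gauss(0, 0.3)) for t in durations]

# Ordinary least squares in log-log space; b is the scaling exponent.
xs = [math.log10(t) for t in durations]
ys = [math.log10(m) for m in moments]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx
print(f"fitted exponent b = {b:.2f}")  # should sit near 4 by construction
```

The same fit applied to real denoised catalogs would of course carry the measurement uncertainties the study works to reduce.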
  2. Abstract: Type Ia supernova explosions (SNe Ia) are fundamental sources of elements for the chemical evolution of galaxies. They efficiently produce intermediate-mass (with Z between 11 and 20) and iron-group elements; for example, about 70% of the solar iron is expected to be made by SNe Ia. In this work, we calculate complete abundance yields for 39 models of SN Ia explosions, based on three progenitors (a 1.4 M☉ deflagration-detonation model, a 1.0 M☉ double-detonation model, and a 0.8 M☉ double-detonation model) and 13 metallicities, with ²²Ne mass fractions of 0, 1 × 10⁻⁷, 1 × 10⁻⁶, 1 × 10⁻⁵, 1 × 10⁻⁴, 1 × 10⁻³, 2 × 10⁻³, 5 × 10⁻³, 1 × 10⁻², 1.4 × 10⁻², 5 × 10⁻², and 0.1, respectively. Nucleosynthesis calculations are done using the NuGrid suite of codes, with a consistent nuclear reaction network between the models. Complete tables with yields and production factors are provided online at Zenodo (https://doi.org/10.5281/zenodo.8060323). We discuss the main properties of our yields in light of the present understanding of SN Ia nucleosynthesis, depending on different progenitor masses and compositions. Finally, we compare our results with a number of relevant models from the literature.
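The "production factors" mentioned alongside the yield tables are conventionally the ejected mass fraction of an isotope divided by its initial (solar-scaled) mass fraction. A minimal sketch of the arithmetic, with purely illustrative numbers that are not taken from the actual models:

```python
# Sketch: production factor = (ejected mass / total ejecta) / initial mass fraction.
# All numbers below are hypothetical placeholders, not values from the yield tables.
ejecta_mass = 1.4  # Msun, e.g. a near-Chandrasekhar-mass model

yields = {"fe56": 0.6, "si28": 0.2}        # hypothetical ejected masses (Msun)
initial_x = {"fe56": 1.17e-3, "si28": 6.5e-4}  # hypothetical initial mass fractions

production_factors = {
    iso: (m / ejecta_mass) / initial_x[iso] for iso, m in yields.items()
}
for iso, pf in sorted(production_factors.items()):
    print(iso, round(pf, 1))
```

A production factor well above 1 marks an isotope the explosion enriches relative to its initial composition, which is how a ~70% SN Ia contribution to solar iron is framed.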
  3. Abstract: Background: Computational cell type deconvolution enables the estimation of cell type abundance from bulk tissues and is important for understanding the tissue microenvironment, especially in tumor tissues. With the rapid development of deconvolution methods, many benchmarking studies have been published aiming at a comprehensive evaluation of these methods. Benchmarking studies rely on cell-type-resolved single-cell RNA-seq data to create simulated pseudobulk datasets by adding individual cell types in controlled proportions. Results: In our work, we show that the standard application of this approach, which uses randomly selected single cells regardless of the intrinsic differences between them, generates synthetic bulk expression values that lack appropriate biological variance. We demonstrate why and how the current bulk simulation pipeline with random cells is unrealistic and propose a heterogeneous simulation strategy as a solution. The heterogeneously simulated bulk samples match the variance observed in real bulk datasets and therefore provide concrete benefits for benchmarking in several ways. We demonstrate that conceptual classes of deconvolution methods differ dramatically in their robustness to heterogeneity, with reference-free methods performing particularly poorly. For regression-based methods, the heterogeneous simulation provides an explicit framework to disentangle the contributions of reference construction and regression methods to performance. Finally, we perform an extensive benchmark of diverse methods across eight different datasets and find BayesPrism and a hybrid MuSiC/CIBERSORTx approach to be the top performers. Conclusions: Our heterogeneous bulk simulation method and the entire benchmarking framework are implemented in a user-friendly package (https://github.com/humengying0907/deconvBenchmarking and https://doi.org/10.5281/zenodo.8206516), enabling further developments in deconvolution methods.
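The pseudobulk construction the abstract critiques can be sketched in a few lines: draw single cells per type according to target proportions and sum their expression vectors. This is a toy illustration of the general idea, not the deconvBenchmarking API, and the profiles and proportions are made up:

```python
# Sketch of standard pseudobulk simulation: sum randomly drawn single-cell
# profiles per cell type in controlled proportions. Toy data, 3 genes.
import random

random.seed(1)

# Hypothetical single-cell profiles: cell_type -> list of gene vectors.
sc_profiles = {
    "T": [[5 + random.gauss(0, 1), 1, 0] for _ in range(30)],
    "B": [[1, 6 + random.gauss(0, 1), 0] for _ in range(30)],
}

def simulate_pseudobulk(profiles, proportions, n_cells=100):
    """Sum cells drawn at random (with replacement) per target proportion."""
    bulk = [0.0, 0.0, 0.0]
    for ctype, frac in proportions.items():
        for _ in range(int(frac * n_cells)):
            cell = random.choice(profiles[ctype])
            bulk = [b + g for b, g in zip(bulk, cell)]
    return bulk

bulk = simulate_pseudobulk(sc_profiles, {"T": 0.7, "B": 0.3})
print(bulk)
```

The paper's point is that drawing cells uniformly at random like this underestimates between-sample variance; their heterogeneous strategy restricts which cells contribute to each simulated sample so that realistic biological variance is preserved.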
  4. Abstract: The Caribbean & Mesoamerica Biogeochemical Isotope Overview (CAMBIO) is an archaeological data community designed to integrate published biogeochemical data from the Caribbean, Mesoamerica, and southern Central America to address questions about dynamic interactions among humans, animals, and the environment in the region over the past 10,000 years. Here we present the CAMBIO human dataset, which consists of more than 16,000 isotopic measurements from human skeletal tissue samples (δ¹³C, δ¹⁵N, δ³⁴S, δ¹⁸O, ⁸⁷Sr/⁸⁶Sr, ²⁰⁶Pb/²⁰⁴Pb, ²⁰⁷Pb/²⁰⁴Pb, ²⁰⁸Pb/²⁰⁴Pb, ²⁰⁷Pb/²⁰⁶Pb) from 290 archaeological sites dating from 7000 BC to modern times. The open-access dataset also includes detailed chronological, contextual, and laboratory/sample-preparation information for each measurement. The collated data are deposited in the open-access CAMBIO data community via the Pandora Initiative data platform (https://pandoradata.earth/organization/cambio).
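A dataset structured like this, one row per measurement with proxy, value, and chronological context, supports straightforward filtering. A minimal sketch with field names assumed for illustration (the actual CAMBIO schema may differ):

```python
# Sketch: filter collated isotope records by proxy and age window.
# Field names ("proxy", "year", etc.) are assumptions, not the CAMBIO schema.
records = [
    {"site": "A", "proxy": "d13C", "value": -10.2, "year": -3500},
    {"site": "B", "proxy": "87Sr/86Sr", "value": 0.7092, "year": -500},
    {"site": "A", "proxy": "d15N", "value": 9.1, "year": 1200},
]

def select(records, proxy, year_min, year_max):
    """Return measurements of one proxy within a calendar-year range."""
    return [r for r in records
            if r["proxy"] == proxy and year_min <= r["year"] <= year_max]

hits = select(records, "d13C", -7000, 0)
print(len(hits))
```

Queries like this, combined with the per-measurement laboratory metadata the dataset carries, are what make cross-site syntheses over the 10,000-year span practical.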
  5. Abstract: We present an expansion of FLEET, a machine-learning algorithm optimized to select transients that are most likely tidal disruption events (TDEs). FLEET is based on a random forest algorithm trained on both the light curves and host-galaxy information of 4779 spectroscopically classified transients. We find that for transients with a probability of being a TDE of P(TDE) > 0.5, we can successfully recover TDEs with ≈40% completeness and ≈30% purity when using their first 20 days of photometry, or with similar completeness and ≈50% purity when including 40 days of photometry, an improvement of almost two orders of magnitude compared to random selection. Alternatively, we can recover TDEs with a maximum purity of ≈80% and a completeness of ≈30% when considering only transients with P(TDE) > 0.8. We explore the use of FLEET for future time-domain surveys such as the Legacy Survey of Space and Time on the Vera C. Rubin Observatory (Rubin) and the Nancy Grace Roman Space Telescope (Roman). We estimate that ∼10⁴ well-observed TDEs could be discovered every year by Rubin and ∼200 TDEs by Roman. Finally, we run FLEET on the TDEs from our Rubin survey simulation and find that we can recover ∼30% of them at redshift z < 0.5 with P(TDE) > 0.5, or ∼3000 TDEs yr⁻¹ that FLEET could uncover from the Rubin stream. We have demonstrated that we will be able to run FLEET on Rubin photometry as soon as this survey begins. FLEET is provided as an open-source package on GitHub: https://github.com/gmzsebastian/FLEET.
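The purity/completeness trade-off that FLEET reports at different P(TDE) cuts is just precision and recall evaluated at a probability threshold. A minimal sketch with toy scores and labels (not FLEET output):

```python
# Sketch: purity (precision) and completeness (recall) of a candidate list
# selected by a probability threshold. Scores and labels below are toy values.
def purity_completeness(scores, labels, threshold):
    """labels: 1 = true TDE, 0 = other transient."""
    selected = [l for s, l in zip(scores, labels) if s > threshold]
    true_pos = sum(selected)
    n_tde = sum(labels)
    purity = true_pos / len(selected) if selected else 0.0
    completeness = true_pos / n_tde if n_tde else 0.0
    return purity, completeness

scores = [0.9, 0.85, 0.6, 0.55, 0.3, 0.2, 0.7]
labels = [1, 1, 0, 1, 0, 0, 0]
p, c = purity_completeness(scores, labels, 0.5)
print(round(p, 2), round(c, 2))
```

Raising the threshold (e.g. from 0.5 to 0.8, as in the abstract) shrinks the selected list, which typically trades completeness for purity.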