Search for: All records

Award ID contains: 2120019

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

  1. Abstract Background: Sedentary behavior (SB) is a recognized risk factor for many chronic diseases. ActiGraph and activPAL are two commonly used wearable accelerometers in SB research. The former measures body movement and the latter measures body posture. The goal of the current study is to quantify the pattern and variation of movement (by ActiGraph activity counts) during activPAL-identified sitting events, and examine associations between patterns and health-related outcomes, such as systolic and diastolic blood pressure (SBP and DBP). Methods: The current study included 314 overweight postmenopausal women, who were instructed to wear an activPAL (at thigh) and an ActiGraph (at waist) simultaneously for 24 hours a day for a week under free-living conditions. ActiGraph and activPAL data were processed to obtain minute-level time-series outputs. Multilevel functional principal component analysis (MFPCA) was applied to minute-level ActiGraph activity counts within activPAL-identified sitting bouts to investigate variation in movement while sitting across subjects and days. The multilevel approach accounted for the nesting of days within subjects. Results: At least 90% of the overall variation of activity counts was explained by two subject-level principal components (PCs) and six day-level PCs, hence dramatically reducing the dimensions from the original minute-level scale. The first subject-level PC captured patterns of fluctuation in movement during sitting, whereas the second subject-level PC delineated variation in movement during different lengths of sitting bouts: shorter (< 30 minutes), medium (30-39 minutes), or longer (> 39 minutes). The first subject-level PC scores showed a positive association with DBP (standardized $\hat{\beta}$: 2.041, standard error: 0.607, adjusted p = 0.007), which implied that lower activity counts (during sitting) were associated with higher DBP. Conclusion: In this work we implemented MFPCA to identify variation in movement patterns during sitting bouts, and showed that these patterns were associated with cardiovascular health. Unlike existing methods, MFPCA does not require pre-specified cut-points to define activity intensity, and thus offers a novel, powerful statistical tool to elucidate variation in SB patterns and health. Trial registration: ClinicalTrials.gov NCT03473145; Registered 22 March 2018; https://clinicaltrials.gov/ct2/show/NCT03473145; International Registered Report Identifier (IRRID): DERR1-10.2196/28684
    Free, publicly-accessible full text available December 1, 2025
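    A minimal, hypothetical sketch of the dimension-reduction step described above, using ordinary (single-level) functional PCA as a stand-in for MFPCA; the data, curve length, and component count below are made up for illustration.

```python
# Single-level stand-in for the MFPCA step: each row is one day's minute-level
# activity-count profile during sitting, resampled to a common length (hypothetical data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_subject_days, n_minutes = 200, 60                      # illustrative sizes
curves = rng.poisson(lam=30, size=(n_subject_days, n_minutes)).astype(float)

curves_centered = curves - curves.mean(axis=0)           # subtract the mean activity profile
pca = PCA(n_components=6)
scores = pca.fit_transform(curves_centered)              # per-day principal component scores
print("variance explained:", pca.explained_variance_ratio_.round(3))

# A true MFPCA (e.g., R packages such as 'refund') would further decompose this variation
# into subject-level and day-level components before relating subject-level scores to
# outcomes such as diastolic blood pressure.
```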
  2. SUMMARY: Electrophysiology offers a high-resolution method for real-time measurement of neural activity. Longitudinal recordings from high-density microelectrode arrays (HD-MEAs) can be of considerable size for local storage and of substantial complexity for extracting neural features and network dynamics. Analysis is often demanding due to the need for multiple software tools with different runtime dependencies. To address these challenges, we developed an open-source cloud-based pipeline to store, analyze, and visualize neuronal electrophysiology recordings from HD-MEAs. This pipeline is dependency-agnostic by utilizing cloud storage, cloud computing resources, and an Internet of Things messaging protocol. We containerized the services and algorithms to serve as scalable and flexible building blocks within the pipeline. In this paper, we applied this pipeline to two types of cultures, cortical organoids and ex vivo brain slice recordings, to show that this pipeline simplifies the data analysis process and facilitates understanding of neuronal activity.
    Free, publicly-accessible full text available November 14, 2025
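    A hypothetical sketch of the kind of broker-based device coordination described above, assuming an MQTT-style Internet of Things messaging protocol (paho-mqtt 1.x API); the broker address, topic, and message fields are made up.

```python
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.org"                      # hypothetical message broker
TOPIC = "hdmea/rig1/recording/status"              # hypothetical topic for one recording rig

def on_message(client, userdata, msg):
    # An analysis container could wait for "finished" events and launch downstream jobs.
    event = json.loads(msg.payload)
    if event.get("state") == "finished":
        print("start spike detection for", event["upload_path"])

subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER, 1883)
subscriber.subscribe(TOPIC)

# A recording device would publish an event like this once its data upload completes:
publisher = mqtt.Client()
publisher.connect(BROKER, 1883)
publisher.publish(TOPIC, json.dumps({"state": "finished",
                                     "upload_path": "s3://bucket/rig1/chip123.raw"}))

subscriber.loop_forever()                          # block and dispatch incoming messages
```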
  3. Abstract Motivation: Driven by technological advances, the throughput and cost of mass spectrometry (MS) proteomics experiments have improved by orders of magnitude in recent decades. Spectral library searching is a common approach to annotating experimental mass spectra by matching them against large libraries of reference spectra corresponding to known peptides. An important disadvantage, however, is that only peptides included in the spectral library can be found, whereas novel peptides, such as those with unexpected post-translational modifications (PTMs), will remain unknown. Open modification searching (OMS) is an increasingly popular approach to annotate modified peptides based on partial matches against their unmodified counterparts. Unfortunately, this leads to very large search spaces and excessive runtimes, which is especially problematic considering the continuously increasing sizes of MS proteomics datasets. Results: We propose an OMS algorithm, called HOMS-TC, that fully exploits parallelism in the entire pipeline of spectral library searching. We designed a new highly parallel encoding method based on the principle of hyperdimensional computing to encode mass spectral data into hypervectors while minimizing information loss. This process can be easily parallelized since each dimension is calculated independently. HOMS-TC processes the two stages of the existing cascade search in parallel and selects the most similar spectra while considering PTMs. We accelerate HOMS-TC on NVIDIA's tensor core units, which are emerging and readily available in recent graphics processing units (GPUs). Our evaluation shows that HOMS-TC is 31× faster on average than alternative search engines and provides comparable accuracy to competing search tools. Availability and implementation: HOMS-TC is freely available under the Apache 2.0 license as an open-source software project at https://github.com/tycheyoung/homs-tc.
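    A toy sketch of the hyperdimensional encoding idea described above: each m/z bin is assigned a fixed random hypervector, and a spectrum is bundled from the hypervectors of its peaks, weighted by intensity. The dimensionality, binning, and quantization here are illustrative and are not the HOMS-TC implementation.

```python
import numpy as np

D, N_BINS = 8192, 2000                               # hypervector dimension, m/z bins (illustrative)
rng = np.random.default_rng(42)
bin_hvs = rng.choice([-1, 1], size=(N_BINS, D))      # one bipolar hypervector per m/z bin

def encode(mz, intensity, bin_width=1.0):
    """Bundle peak hypervectors, weighted by intensity; each dimension is independent."""
    idx = np.clip((np.asarray(mz) / bin_width).astype(int), 0, N_BINS - 1)
    bundled = (np.asarray(intensity)[:, None] * bin_hvs[idx]).sum(axis=0)
    return np.sign(bundled)                          # re-binarize to a bipolar hypervector

def similarity(a, b):
    return float(a @ b) / len(a)                     # stands in for cosine similarity

query = encode([175.1, 402.3, 689.4], [0.2, 1.0, 0.6])
ref   = encode([175.2, 402.2, 689.5], [0.3, 0.9, 0.5])
print(similarity(query, ref))                        # near 1.0 for these similar toy spectra
```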
  4. Abstract The analysis of tissue cultures, particularly brain organoids, requires a sophisticated integration and coordination of multiple technologies for monitoring and measuring. We have developed an automated research platform enabling independent devices to achieve collaborative objectives for feedback-driven cell culture studies. Our approach enables continuous, communicative, non-invasive interactions within an Internet of Things (IoT) architecture among various sensing and actuation devices, achieving precisely timed control of in vitro biological experiments. The framework integrates microfluidics, electrophysiology, and imaging devices to maintain cerebral cortex organoids while measuring their neuronal activity. The organoids are cultured in custom, 3D-printed chambers affixed to commercial microelectrode arrays. Periodic feeding is achieved using programmable microfluidic pumps. We developed a computer vision fluid volume estimator used as feedback to rectify deviations in microfluidic perfusion during media feeding/aspiration cycles. We validated the system with a set of 7-day studies of mouse cerebral cortex organoids, comparing manual and automated protocols. The automated protocols were validated in maintaining robust neural activity throughout the experiment. The automated system enabled hourly electrophysiology recordings for the 7-day studies. Median neural unit firing rates increased for every sample and dynamic patterns of organoid firing rates were revealed by high-frequency recordings. Surprisingly, feeding did not affect firing rate. Furthermore, performing media exchange during a recording showed no acute effects on firing rate, enabling the use of this automated platform for reagent screening studies.
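    A hypothetical sketch of a computer-vision fluid-volume estimate used as pump feedback, in the spirit of the estimator described above: segment liquid pixels in a calibrated image and convert the liquid column height to microliters. The threshold, calibration constant, target volume, and tolerance are invented for illustration.

```python
import numpy as np

UL_PER_PIXEL_ROW = 1.8          # hypothetical calibration: microliters per pixel row of liquid
TARGET_UL, TOLERANCE_UL = 150.0, 10.0

def estimate_volume_ul(gray_image: np.ndarray, liquid_threshold: int = 80) -> float:
    liquid_mask = gray_image < liquid_threshold          # darker pixels treated as liquid
    rows_with_liquid = int(np.count_nonzero(liquid_mask.any(axis=1)))
    return rows_with_liquid * UL_PER_PIXEL_ROW

def pump_correction_ul(gray_image: np.ndarray) -> float:
    """Positive: dispense this much; negative: aspirate the excess; zero: within tolerance."""
    error = TARGET_UL - estimate_volume_ul(gray_image)
    return error if abs(error) > TOLERANCE_UL else 0.0

frame = np.full((120, 80), 200, dtype=np.uint8)          # fake side-view image of a chamber
frame[60:, :] = 40                                       # bottom half is "liquid"
print(estimate_volume_ul(frame), pump_correction_ul(frame))   # 108.0 42.0
```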
  5. Abstract Neuronal firing sequences are thought to be the basic building blocks of neural coding and information broadcasting within the brain. However, when sequences emerge during neurodevelopment remains unknown. We demonstrate that structured firing sequences are present in spontaneous activity of human and murine brain organoids and ex vivo neonatal brain slices from the murine somatosensory cortex. We observed a balance between temporally rigid and flexible firing patterns that are emergent phenomena in human and murine brain organoids and early postnatal murine somatosensory cortex, but not in primary dissociated cortical cultures. Our findings suggest that temporal sequences do not arise in an experience-dependent manner, but are rather constrained by an innate preconfigured architecture established during neurogenesis. These findings highlight the potential for brain organoids to further explore how exogenous inputs can be used to refine neuronal circuits and enable new studies into the genetic mechanisms that govern assembly of functional circuitry during early human brain development.
  6. Abstract The Macquart relation describes the correlation between the dispersion measure (DM) of fast radio bursts (FRBs) and the redshift z of their host galaxies. The scatter of the Macquart relation is sensitive to the distribution of baryons in the intergalactic medium, including those ejected from galactic halos through feedback processes. The variance of the distribution in DMs from the cosmic web (DM_cosmic) is parameterized by a fluctuation parameter F. In this work, we present a new measurement of F using 78 FRBs, of which 21 have been localized to host galaxies. Our analysis simultaneously fits for the Hubble constant $H_0$ and the DM distribution due to the FRB host galaxy. We find that the fluctuation parameter is degenerate with these parameters, most notably $H_0$, and use a uniform prior on $H_0$ to measure $\log_{10}F > -0.86$ at the 3σ confidence interval and a new constraint on the Hubble constant $H_0 = 85.3^{+9.4}_{-8.1}\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$. Using a synthetic sample of 100 localized FRBs, the constraint on the fluctuation parameter is improved by a factor of ∼2. Comparing our F measurement to predictions from a cosmological simulation (IllustrisTNG), we find agreement between redshifts 0.4 < z < 2.0. However, at z < 0.4, the simulations underpredict F, which we attribute to the rapidly changing extragalactic DM excess distribution at low redshift.
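    For readers unfamiliar with the notation above, a textbook-style form of the Macquart relation and the fluctuation-parameter scatter is sketched below; conventions differ between papers, so this is a generic statement rather than the exact parameterization used in this work.

```latex
% Observed DM split into Milky Way, cosmic-web, and host-galaxy terms:
\begin{align}
  \mathrm{DM_{FRB}} &= \mathrm{DM_{MW}} + \mathrm{DM_{cosmic}}(z) + \frac{\mathrm{DM_{host}}}{1+z} \\
% Macquart relation: the mean cosmic-web term grows with redshift,
  \langle \mathrm{DM_{cosmic}} \rangle(z) &= \int_0^z \frac{c\,\bar{n}_e(z')\,\mathrm{d}z'}{H(z')\,(1+z')^2} \\
% and the relative sightline-to-sightline scatter is commonly modeled as
  \sigma_{\Delta}(z) &\approx F\, z^{-1/2}, \qquad \Delta \equiv \mathrm{DM_{cosmic}}/\langle \mathrm{DM_{cosmic}} \rangle ,
\end{align}
% so a larger F corresponds to a clumpier baryon distribution (i.e., weaker feedback).
```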
  7. Abstract Placing new sequences onto reference phylogenies is increasingly used for analyzing environmental samples, especially microbiomes. Existing placement methods assume that query sequences have evolved under specific models directly on the reference phylogeny. For example, they assume single-gene data (e.g., 16S rRNA amplicons) have evolved under the GTR model on a gene tree. Placement, however, often has a more ambitious goal: extending a (genome-wide) species tree given data from individual genes without knowing the evolutionary model. Addressing this challenging problem requires new directions. Here, we introduce Deep-learning Enabled Phylogenetic Placement (DEPP), an algorithm that learns to extend species trees using single genes without prespecified models. In simulations and on real data, we show that DEPP can match the accuracy of model-based methods without any prior knowledge of the model. We also show that DEPP can update the multilocus microbial tree-of-life with single genes with high accuracy. We further demonstrate that DEPP can combine 16S and metagenomic data onto a single tree, enabling community structure analyses that take advantage of both sources of data. [Deep learning; gene tree discordance; metagenomics; microbiome analyses; neural networks; phylogenetic placement.] 
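    A drastically simplified, hypothetical stand-in for the distance-based placement idea behind DEPP: a trained encoder maps gene sequences to vectors whose pairwise distances approximate species-tree distances, and a query is then placed next to the reference leaf it is closest to (DEPP itself feeds the learned distances to a distance-based placement method rather than a nearest-leaf rule). The embeddings and leaf names below are fabricated.

```python
import numpy as np

reference_embeddings = {                    # would come from the trained encoder, one vector per leaf
    "E. coli":     np.array([0.10, 0.90, 0.30]),
    "B. subtilis": np.array([0.80, 0.20, 0.50]),
    "S. aureus":   np.array([0.70, 0.30, 0.60]),
}

def place(query_embedding: np.ndarray) -> str:
    """Return the reference leaf whose embedding is nearest to the query (Euclidean distance)."""
    distances = {leaf: float(np.linalg.norm(query_embedding - emb))
                 for leaf, emb in reference_embeddings.items()}
    return min(distances, key=distances.get)

print(place(np.array([0.72, 0.28, 0.58])))  # -> 'S. aureus' in this toy example
```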
  8. Free, publicly-accessible full text available July 18, 2026
  9. The increasing computational demand from growing data rates and complex machine learning (ML) algorithms in large-scale scientific experiments has driven the adoption of the Services for Optimized Network Inference on Coprocessors (SONIC) approach. SONIC accelerates ML inference by offloading it to local or remote coprocessors to optimize resource utilization. Leveraging its portability to different types of coprocessors, SONIC enhances data processing and model deployment efficiency for cutting-edge research in high energy physics (HEP) and multi-messenger astrophysics (MMA). We developed the SuperSONIC project, a scalable server infrastructure for SONIC, enabling the deployment of computationally intensive tasks to Kubernetes clusters equipped with graphics processing units (GPUs). Using NVIDIA Triton Inference Server, SuperSONIC decouples client workflows from server infrastructure, standardizing communication, optimizing throughput, load balancing, and monitoring. SuperSONIC has been successfully deployed for the CMS and ATLAS experiments at the CERN Large Hadron Collider (LHC), the IceCube Neutrino Observatory (IceCube), and the Laser Interferometer Gravitational-Wave Observatory (LIGO) and tested on Kubernetes clusters at Purdue University, the National Research Platform (NRP), and the University of Chicago. SuperSONIC addresses the challenges of the Cloud-native era by providing a reusable, configurable framework that enhances the efficiency of accelerator-based inference deployment across diverse scientific domains and industries. 
    Free, publicly-accessible full text available July 18, 2026
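    As an illustration of the client/server decoupling described above, a client needs only the Triton inference API to send work to a serving endpoint; the server URL, model name, and tensor names below are hypothetical, and production workflows would typically go through their experiment frameworks rather than this raw call.

```python
import numpy as np
import tritonclient.http as httpclient

# Hypothetical SuperSONIC-style endpoint exposing a Triton Inference Server.
client = httpclient.InferenceServerClient(url="supersonic.example.org:8000")

# Model and tensor names are placeholders; a real model's configuration defines them.
model_input = httpclient.InferInput("INPUT__0", [1, 784], "FP32")
model_input.set_data_from_numpy(np.random.rand(1, 784).astype(np.float32))

result = client.infer(model_name="toy_classifier", inputs=[model_input])
print(result.as_numpy("OUTPUT__0"))
```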