

Search for: All records

Award ID contains: 1919691

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Abstract

    Understanding near-infrared light propagation in tissue is vital for designing next-generation optical brain imaging devices. Monte Carlo (MC) simulations provide a controlled mechanism to characterize and evaluate the contributions of diverse near-infrared spectroscopy (NIRS) sensor configurations and parameters. In this study, we developed a multilayer adult digital head model under both healthy and clinical settings and assessed light-tissue interaction through MC simulations in terms of partial differential pathlength, mean total optical pathlength, diffuse reflectance, detector light intensity, and the spatial sensitivity profile of optical measurements. The model incorporated four layers: scalp, skull, cerebrospinal fluid, and cerebral cortex, with and without a customizable lesion for modeling hematomas of different sizes and depths. The effect of source-detector separation (SDS) on the sensitivity of optical measurements to brain tissue was investigated. Results from 1330 separate simulations [(4 lesion volumes × 4 lesion depths for clinical settings + 3 healthy settings) × 7 SDS × 10 simulations = 1330], each with 100 million photons, indicated that the choice of SDS is critical for acquiring optimal measurements from the brain; an SDS of 25 to 35 mm, depending on wavelength, is recommended for optical monitoring of adult brain function. The findings can guide the design of future NIRS probes for functional neuroimaging and clinical diagnostic systems.

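The abstract above describes Monte Carlo photon transport through layered tissue. As a rough illustration of the core idea (not the paper's four-layer model), the following is a minimal single-slab 1D sketch: photons take exponentially distributed free paths, deposit a fraction of their weight as absorption at each interaction, and scatter into a new direction until they escape or their weight is exhausted. All optical parameters here are illustrative, and the isotropic 1D scattering stands in for the anisotropic 3D phase functions used in real NIRS simulators.

```python
import math
import random

def simulate_photons(n_photons, mu_a, mu_s, thickness, seed=0):
    """Toy 1D Monte Carlo photon transport through a single homogeneous slab.

    mu_a, mu_s: absorption / scattering coefficients (per mm, illustrative).
    Returns (diffuse_reflectance, transmittance, absorbed) as weight fractions.
    """
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    reflected = transmitted = absorbed = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0           # depth, direction cosine, photon weight
        while w > 1e-4:                     # terminate when the weight is negligible
            step = -math.log(rng.random()) / mu_t  # free path ~ Exponential(mu_t)
            z += uz * step
            if z < 0.0:                     # escaped back through the top surface
                reflected += w
                break
            if z > thickness:               # escaped through the bottom
                transmitted += w
                break
            absorbed += w * (mu_a / mu_t)   # deposit the absorbed fraction
            w *= mu_s / mu_t                # survival weight after the interaction
            uz = 2.0 * rng.random() - 1.0   # simplified isotropic scattering
    return (reflected / n_photons, transmitted / n_photons, absorbed / n_photons)

R, T, A = simulate_photons(5000, mu_a=0.02, mu_s=10.0, thickness=10.0)
```

For a scattering-dominated slab like this one, most of the escaping weight comes back out of the entry surface, which is exactly why diffuse reflectance geometries work for NIRS.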
  2. Free, publicly-accessible full text available March 1, 2025
    It is well established that amyloid β-protein (Aβ) self-assembly is involved in the triggering of Alzheimer's disease. On the other hand, evidence of a physiological function of Aβ interacting with lipids has only begun to emerge. Details of Aβ–lipid interactions, which may underlie physiological and pathological activities of Aβ, are not well understood. Here, the effects of salt and 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC) lipids on the conformational dynamics of the Aβ42 monomer in water are examined by all-atom molecular dynamics (MD). We acquired six sets of 250 ns long MD trajectories, one for each of the three lipid concentrations (0, 27, and 109 mM) in the absence and presence of 150 mM salt. Ten replica trajectories per set are used to enhance sampling of the Aβ42 conformational space. We show that salt facilitates long-range tertiary contacts in Aβ42, resulting in more compact Aβ42 conformations. By contrast, addition of lipids results in lipid-concentration-dependent Aβ42 unfolding concomitant with enhanced stability of the turn in the A21–A30 region. At the high lipid concentration, salt enables the N-terminal region of Aβ42 to form long-range tertiary contacts and interact with lipids, which results in the formation of a parallel β-strand. Aβ42 forms stable lipid–protein complexes whereby the protein is adhered to the lipid cluster rather than embedded into it. We propose that the inability of the Aβ42 monomer to become embedded into the lipid cluster may be important for facilitating repair of leaks in the blood-brain barrier without penetrating and damaging cellular membranes.
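The abstract above reports that salt produces more compact Aβ42 conformations. A standard way to quantify compactness in MD analysis is the radius of gyration. The sketch below is an illustrative pure-Python implementation for equal-mass points (real analyses would mass-weight atoms and use a library such as MDAnalysis); the two example coordinate sets are invented to show that an extended chain has a larger Rg than a folded one.

```python
import math

def radius_of_gyration(coords):
    """Radius of gyration of 3D points with equal masses:
    the root-mean-square distance of the points from their centroid."""
    n = len(coords)
    cx = sum(p[0] for p in coords) / n
    cy = sum(p[1] for p in coords) / n
    cz = sum(p[2] for p in coords) / n
    msd = sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2
              for p in coords) / n
    return math.sqrt(msd)

# Hypothetical 10-residue chains (coordinates in arbitrary length units):
extended = [(float(i), 0.0, 0.0) for i in range(10)]            # stretched along x
compact = [(i % 2 * 1.0, i // 2 % 2 * 1.0, i // 4 * 1.0)        # folded into a box
           for i in range(10)]

rg_ext = radius_of_gyration(extended)
rg_cmp = radius_of_gyration(compact)
```

A lower Rg for the same chain length indicates a more compact conformation, which is how "salt makes Aβ42 more compact" would show up numerically across trajectory frames.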
    Efficiently and accurately identifying which microbes are present in a biological sample is important to medicine and biology. For example, in medicine, microbe identification allows doctors to better diagnose diseases. Two questions are essential to metagenomic analysis (the analysis of a random sampling of DNA in a patient/environment sample): how to accurately identify the microbes in samples, and how to efficiently update the taxonomic classifier as new microbe genomes are sequenced and added to the reference database. To investigate how classifiers change as they train on more knowledge, we made sub-databases composed of genomes that existed in past years, serving as "snapshots in time" (1999–2020) of the NCBI reference genome database. We evaluated two classification methods, Kraken 2 and CLARK, with these snapshots using a real, experimental metagenomic sample from a human gut. This allowed us to measure how much of a real sample could be confidently classified by these methods as the database grows. Despite not knowing the ground truth, we could measure the concordance between methods, and between years of the database within each method, using the Bray-Curtis distance. In addition, we recorded the training times of the classifiers for each snapshot. For Kraken 2, we observed that as more genomes were added, more microbes from the sample were classified. CLARK showed a similar trend, but in the final year this trend reversed, with greater microbial variation and fewer unique k-mers. Both classifiers, while trained in different ways, scale roughly linearly in training time, but Kraken 2 has a significantly lower slope when scaling to more data.
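The abstract above uses the Bray-Curtis distance to compare classification profiles between methods and between database years. As a concrete illustration, a minimal implementation over taxon-count dictionaries is sketched below; the two example profiles and their taxon names and counts are invented for demonstration and are not from the study.

```python
def bray_curtis(counts_a, counts_b):
    """Bray-Curtis dissimilarity between two taxon -> count profiles.

    0.0 means identical profiles; 1.0 means no taxa shared.
    Computed as sum(|a_i - b_i|) / sum(a_i + b_i) over the union of taxa.
    """
    taxa = set(counts_a) | set(counts_b)
    num = sum(abs(counts_a.get(t, 0) - counts_b.get(t, 0)) for t in taxa)
    den = sum(counts_a.get(t, 0) + counts_b.get(t, 0) for t in taxa)
    return num / den if den else 0.0

# Hypothetical read-count profiles from two database snapshots:
profile_old = {"E. coli": 120, "B. fragilis": 60, "unclassified": 300}
profile_new = {"E. coli": 130, "B. fragilis": 70,
               "A. muciniphila": 40, "unclassified": 240}

d = bray_curtis(profile_old, profile_new)
```

A small distance between consecutive snapshot years would indicate that the growing database changes the classified profile only slightly; a large jump flags a year where added genomes substantially reshaped the classification.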
    A key challenge for artificial intelligence in the legal field is to determine from the text of a party's litigation brief whether, and why, it will succeed or fail. This paper shows a proof-of-concept test case from the United States: predicting outcomes of post-grant inter partes review (IPR) proceedings for invalidating patents. The objectives are to compare decision-tree and deep learning methods, validate interpretability methods, and demonstrate outcome prediction based on party briefs. Specifically, this study compares and validates two distinct approaches: (1) representing documents with term frequency–inverse document frequency (TF-IDF), training XGBoost gradient-boosted decision-tree models, and using SHAP for interpretation; (2) deep learning of document text in context, using convolutional neural networks (CNN) with attention, and comparing LIME and attention visualization for interpretability. The methods are validated on the task of automatically determining case outcomes from unstructured written decision opinions, and then used to predict trial institution or denial based on the patent owner's preliminary response brief. The results show how an interpretable deep learning architecture classifies successful/unsuccessful response briefs on temporally separated training and test sets. More accurate prediction remains challenging, likely due to the fact-specific, technical nature of patent cases and changes in applicable law and jurisprudence over time.
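The first approach above represents briefs as TF-IDF vectors before training the tree model. To make that representation concrete, the following is a minimal pure-Python sketch of TF-IDF (term frequency times log inverse document frequency) over pre-tokenized documents; production pipelines would normally use scikit-learn's `TfidfVectorizer`, which also applies smoothing and normalization. The two token lists are invented stand-ins for brief text, not data from the study.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF weights for pre-tokenized documents.

    tf  = term count / document length
    idf = log(N / number of documents containing the term)
    Terms appearing in every document get weight 0: they carry no signal.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                 # document frequency per term
    out = []
    for doc in docs:
        counts = Counter(doc)
        out.append({t: (c / len(doc)) * math.log(n / df[t])
                    for t, c in counts.items()})
    return out

# Hypothetical tokenized snippets from two patent-owner briefs:
briefs = [
    ["patent", "claim", "obvious", "prior", "art"],
    ["patent", "claim", "novel", "secondary", "factors"],
]
vecs = tfidf_vectors(briefs)
```

Note how "patent" and "claim", present in both briefs, receive zero weight, while brief-specific terms like "obvious" are up-weighted; those weighted features are what a downstream classifier (XGBoost in the study) would learn from, and what SHAP would attribute predictions to.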
    Abstract Evaluating metagenomic software is key for optimizing metagenome interpretation and is the focus of the Initiative for the Critical Assessment of Metagenome Interpretation (CAMI). The CAMI II challenge engaged the community to assess methods on realistic and complex datasets with long- and short-read sequences, created computationally from around 1,700 new and known genomes, as well as 600 new plasmids and viruses. Here we analyze 5,002 results from 76 program versions. Substantial improvements were seen in assembly, some due to long-read data. Related strains were still challenging for assembly and for genome recovery through binning, as was assembly quality for the latter. Profilers markedly matured, with taxon profilers and binners excelling at higher bacterial ranks but underperforming for viruses and Archaea. Clinical pathogen detection results revealed a need to improve reproducibility. Runtime and memory usage analyses identified efficient programs, including top performers on other metrics. The results identify challenges and guide researchers in selecting methods for analyses.