An interdisciplinary agent-based evacuation model: integrating the natural environment, built environment, and social system for community preparedness and resilience
Abstract. Previous tsunami evacuation simulations have mostly been based on arbitrary assumptions or inputs adapted from non-emergency situations, but a few studies have used empirical behavior data. This study bridges this gap by integrating empirical decision data from surveys on local evacuation expectations and evacuation drills into an agent-based model of evacuation behavior for two Cascadia subduction zone (CSZ) communities that would be inundated within 20–40 min after a CSZ earthquake. The model also considers the impacts of liquefaction and landslides from the earthquake on tsunami evacuation. Furthermore, we integrate the slope-speed component from least-cost distance to build the simulation model that better represents the complex nature of evacuations. The simulation results indicate that milling time and the evacuation participation rate have significant nonlinear impacts on tsunami mortality estimates. When people walk faster than 1 m s⁻¹, evacuation by foot is more effective because it avoids traffic congestion when driving. We also find that evacuation results are more sensitive to walking speed, milling time, evacuation participation, and choosing the closest safe location than to other behavioral variables. Minimum tsunami mortality results from maximizing the evacuation participation rate, minimizing milling time, and choosing the closest safe destination outside of the inundation zone. This study's …
Free, publicly-accessible full text available January 1, 2024
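The core mechanics described above (milling delay, walking speed, participation rate, distance to safety versus tsunami arrival time) can be sketched as a toy agent-based simulation. This is only an illustration of the general modeling idea under simplified assumptions; every parameter name and value below is hypothetical and not taken from the published model.

```python
import random

def simulate_evacuation(n_agents=1000, tsunami_eta_min=30.0,
                        milling_min=5.0, walk_speed_ms=1.2,
                        participation=0.9, dist_to_safety_m=1500.0,
                        seed=0):
    """Toy mortality estimate for one evacuation scenario.

    Each participating agent mills (delays) for a random time around
    `milling_min`, then walks toward a safe point; it survives if it
    arrives before the tsunami. Non-participants are counted as lost.
    All values are illustrative, not from the published model.
    """
    rng = random.Random(seed)
    deaths = 0
    for _ in range(n_agents):
        if rng.random() > participation:        # agent does not evacuate
            deaths += 1
            continue
        milling = rng.uniform(0.5, 1.5) * milling_min        # minutes
        travel = dist_to_safety_m / walk_speed_ms / 60.0     # minutes
        if milling + travel > tsunami_eta_min:
            deaths += 1
    return deaths / n_agents

# Mortality should fall as milling time shrinks and participation rises,
# consistent with the qualitative finding in the abstract.
base = simulate_evacuation()
better = simulate_evacuation(milling_min=1.0, participation=1.0)
```

Sweeping `milling_min` or `walk_speed_ms` in such a sketch reproduces the qualitative nonlinearity the study reports: past a threshold, small delays push agents over the tsunami arrival time.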
Quality assessment (QA) of predicted protein tertiary structure models plays an important role in ranking and using them. With the recent development of deep learning end-to-end protein structure prediction techniques for generating highly confident tertiary structures for most proteins, it is important to explore corresponding QA strategies to evaluate and select the structural models predicted by them since these models have better quality and different properties than the models predicted by traditional tertiary structure prediction methods.
We develop EnQA, a novel graph-based 3D-equivariant neural network method that is equivariant to rotation and translation of 3D objects to estimate the accuracy of protein structural models by leveraging the structural features acquired from the state-of-the-art tertiary structure prediction method—AlphaFold2. We train and test the method on both traditional model datasets (e.g. the datasets of the Critical Assessment of Techniques for Protein Structure Prediction) and a new dataset of high-quality structural models predicted only by AlphaFold2 for the proteins whose experimental structures were released recently. Our approach achieves state-of-the-art performance on protein structural models predicted by both traditional protein structure prediction methods and the latest end-to-end deep learning method—AlphaFold2. It performs even better than the model QA scores provided by AlphaFold2 itself.
Availability and implementation
The source code is available at https://github.com/BioinfoMachineLearning/EnQA.
Supplementary data are available at Bioinformatics online.
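The rotation/translation equivariance that motivates architectures like EnQA can be illustrated with a small numeric check: geometric features built from pairwise distances are unchanged by any rigid motion of the structure, so a quality score derived from them gives the same answer for every orientation of the same model. This sketch is not EnQA's code; the coordinates and functions are illustrative only.

```python
import numpy as np

def random_rotation(rng):
    """Sample a random 3x3 rotation matrix via QR decomposition."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))     # fix column signs for a uniform sample
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1            # force a proper rotation (det = +1)
    return q

def pairwise_distances(coords):
    """Distance matrix of C-alpha-like coordinates, shape (N, N)."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)

rng = np.random.default_rng(42)
coords = rng.standard_normal((10, 3))          # toy "structure"
moved = coords @ random_rotation(rng).T + np.array([5.0, -2.0, 1.0])

# Rigid motion leaves all pairwise distances unchanged.
assert np.allclose(pairwise_distances(coords), pairwise_distances(moved))
```

An equivariant network generalizes this idea: instead of restricting itself to invariant scalars, it lets vector-valued internal features rotate along with the input while the final accuracy estimate stays orientation-independent.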
Self-assembled aluminum oxyhydroxide nanorices with superior suspension stability for vaccine adjuvant
Free, publicly-accessible full text available December 1, 2023
We present a precise measurement of the asymptotic normalization coefficient (ANC) for the ¹⁶O ground state (GS) through the ¹²C(¹¹B,⁷Li)¹⁶O transfer reaction using the Quadrupole-3-Dipole (Q3D) magnetic spectrograph. The present work sheds light on the existing discrepancy of more than 2 orders of magnitude between the previously reported GS ANC values. This ANC is believed to have a strong effect on the ¹²C(α,γ)¹⁶O reaction rate by constraining the external capture to the ¹⁶O ground state, which can interfere with the high-energy tail of the 2⁺ subthreshold state. Based on the new ANC, we determine the astrophysical S-factor and the stellar rate of the ¹²C(α,γ)¹⁶O reaction. An increase of up to 21% in the total reaction rate is found within the temperature range of astrophysical relevance compared with the previous recommendation of a recent review. Finally, we evaluate the impact of our new rate on the pair-instability mass gap for black holes (BHs) by evolving massive helium core stars using the MESA stellar evolution code. The updated ¹²C(α,γ)¹⁶O reaction rate decreases the lower and upper edges of the BH mass gap by about 12% and 5%, respectively.
Node classification is of great importance among various graph mining tasks. In practice, real-world graphs generally follow the long-tail distribution, where a large number of classes only consist of limited labeled nodes. Although Graph Neural Networks (GNNs) have achieved significant improvements in node classification, their performance decreases substantially in such a few-shot scenario. The main reason can be attributed to the vast generalization gap between meta-training and meta-test due to the task variance caused by different node/class distributions in meta-tasks (i.e., node-level and class-level variance). Therefore, to effectively alleviate the impact of task variance, we propose a task-adaptive node classification framework under the few-shot learning setting. Specifically, we first accumulate meta-knowledge across classes with abundant labeled nodes. Then we transfer such knowledge to the classes with limited labeled nodes via our proposed task-adaptive modules. In particular, to accommodate the different node/class distributions among meta-tasks, we propose three essential modules to perform node-level, class-level, and task-level adaptations in each meta-task, respectively. In this way, our framework can conduct adaptations to different meta-tasks and thus advance the model generalization performance on meta-test tasks. Extensive experiments on four prevalent node classification datasets demonstrate the superiority of our framework over the state-of-the-art baselines. Our …
Free, publicly-accessible full text available August 14, 2023
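The class-level side of few-shot node classification is often built on prototype-style episodes: average the embeddings of the few labeled support nodes per class, then assign query nodes to the nearest prototype. The sketch below shows only that generic baseline mechanism, not the paper's task-adaptive modules; all embeddings and shapes are synthetic.

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    """Mean embedding per class from the few labeled support nodes."""
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    """Assign each query node to its nearest class prototype."""
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return d.argmin(axis=1)

# Toy 2-way 3-shot episode with 4-dimensional node embeddings.
rng = np.random.default_rng(0)
support = np.vstack([rng.normal(0, 0.1, (3, 4)),    # class 0 near origin
                     rng.normal(3, 0.1, (3, 4))])   # class 1 near (3,3,3,3)
labels = np.array([0, 0, 0, 1, 1, 1])
protos = prototypes(support, labels, 2)
queries = np.vstack([rng.normal(0, 0.1, (2, 4)),
                     rng.normal(3, 0.1, (2, 4))])
pred = classify(queries, protos)
```

Node-, class-, and task-level adaptation, as described in the abstract, would then adjust the embeddings or prototypes per meta-task rather than using the raw means as here.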
Few-shot graph classification aims at predicting classes for graphs, given limited labeled graphs for each class. To tackle the bottleneck of label scarcity, recent works propose to incorporate few-shot learning frameworks for fast adaptations to graph classes with limited labeled graphs. Specifically, these works propose to accumulate meta-knowledge across diverse meta-training tasks, and then generalize such meta-knowledge to the target task with a disjoint label set. However, existing methods generally ignore task correlations among meta-training tasks while treating them independently. Nevertheless, such task correlations can advance the model generalization to the target task for better classification performance. On the other hand, it remains non-trivial to utilize task correlations due to the complex components in a large number of meta-training tasks. To deal with this, we propose a novel few-shot learning framework FAITH that captures task correlations via constructing a hierarchical task graph at different granularities. Then we further design a loss-based sampling strategy to select tasks with more correlated classes. Moreover, a task-specific classifier is proposed to utilize the learned task correlations for few-shot classification. Extensive experiments on four prevalent few-shot graph classification datasets demonstrate the superiority of FAITH over other state-of-the-art baselines.
Free, publicly-accessible full text available July 1, 2023
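Loss-based task sampling, in its simplest form, draws meta-training tasks with probability proportional to their current loss so that harder tasks are revisited more often. The sketch below shows only that generic weighted-sampling-without-replacement idea; FAITH's actual strategy and its hierarchical task graph differ in detail, and the task names and losses here are invented.

```python
import random

def loss_weighted_sample(task_losses, k, seed=0):
    """Pick k distinct tasks, favoring those with higher current loss.

    Sampling weight is proportional to loss (no softmax), drawn without
    replacement. Illustrative only; not FAITH's exact strategy.
    """
    rng = random.Random(seed)
    remaining = list(task_losses)
    chosen = []
    for _ in range(k):
        total = sum(task_losses[t] for t in remaining)
        r = rng.uniform(0, total)
        acc = 0.0
        for t in remaining:
            acc += task_losses[t]
            if acc >= r:               # r falls inside this task's slice
                chosen.append(t)
                remaining.remove(t)
                break
    return chosen

# Hypothetical per-task losses from a previous meta-training step.
losses = {"taskA": 0.9, "taskB": 0.1, "taskC": 0.5}
picked = loss_weighted_sample(losses, k=2)
```

Over many draws, "taskA" is selected far more often than "taskB", which is the intended bias toward tasks the model currently handles poorly.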
Systems-wide analysis revealed shared and unique responses to moderate and acute high temperatures in the green alga Chlamydomonas reinhardtii
Abstract. Different intensities of high temperatures affect the growth of photosynthetic cells in nature. To elucidate the underlying mechanisms, we cultivated the unicellular green alga Chlamydomonas reinhardtii under highly controlled photobioreactor conditions and revealed systems-wide shared and unique responses to 24-hour moderate (35°C) and acute (40°C) high temperatures and subsequent recovery at 25°C. We identified previously overlooked unique elements in response to moderate high temperature. Heat at 35°C transiently arrested the cell cycle followed by partial synchronization, up-regulated transcripts/proteins involved in gluconeogenesis/glyoxylate-cycle for carbon uptake, and promoted growth. But 40°C disrupted cell division and growth. Both high temperatures induced photoprotection, while 40°C distorted thylakoid/pyrenoid ultrastructure, affected the carbon concentrating mechanism, and decreased photosynthetic efficiency. We demonstrated increased transcript/protein correlation during both heat treatments and hypothesize that reduced post-transcriptional regulation during heat may help efficiently coordinate thermotolerance mechanisms. During recovery after both heat treatments, especially 40°C, transcripts/proteins related to DNA synthesis increased while those involved in photosynthetic light reactions decreased. We propose that down-regulating photosynthetic light reactions during DNA replication benefits cell cycle resumption by reducing ROS production. Our results provide potential targets to increase thermotolerance in algae and crops.
Free, publicly-accessible full text available December 1, 2023