Search for: All records

Award ID contains: 2016981


  1. Abstract: Irradiation increases the yield stress of, and embrittles, light water reactor (LWR) pressure vessel steels. In this study, we demonstrate some of the potential benefits and risks of using machine learning models to predict irradiation hardening extrapolated to low flux, high fluence, extended-life conditions. The machine learning training data included the Irradiation Variable database for lower flux irradiations up to an intermediate fluence, plus the Belgian Reactor 2 and Advanced Test Reactor 1 databases for very high flux irradiations up to very high fluence. Notably, the machine learning model predictions for the high fluence, intermediate flux Advanced Test Reactor 2 irradiations are superior to extrapolations of existing hardening models. The successful extrapolations showed that machine learning models are capable of capturing key intermediate flux effects at high fluence. Similar approaches, applied to expanded databases, could be used to predict hardening in LWRs under life-extension conditions.
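    To make the idea in the abstract above concrete, here is a minimal sketch of the general approach: fit a regression model on hardening data from low-flux/intermediate-fluence and very-high-flux conditions, then extrapolate to an intermediate-flux, high-fluence regime. The synthetic data generator, the feature set (log flux, log fluence, Cu and Ni content), and the choice of scikit-learn's GradientBoostingRegressor are illustrative assumptions, not the databases or models used in the paper.

    # Hedged sketch: train a regressor on "low flux up to intermediate fluence" plus
    # "very high flux" conditions, then extrapolate to intermediate flux, high fluence.
    # All data here are synthetic and all feature/model choices are assumptions.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)

    def synthetic_hardening(flux, fluence, cu, ni):
        # Toy stand-in for irradiation hardening (MPa); not a physical embrittlement model.
        return (300 * np.sqrt(fluence / 1e19) * (0.5 + cu) * (0.8 + 0.5 * ni)
                - 10 * np.log10(flux / 1e11)
                + rng.normal(0, 10, size=np.shape(flux)))

    def make_set(n, flux_range, fluence_range):
        # Sample hypothetical irradiation conditions within the given flux/fluence ranges.
        flux = 10 ** rng.uniform(np.log10(flux_range[0]), np.log10(flux_range[1]), n)
        fluence = 10 ** rng.uniform(np.log10(fluence_range[0]), np.log10(fluence_range[1]), n)
        cu = rng.uniform(0.05, 0.4, n)   # hypothetical wt% Cu
        ni = rng.uniform(0.2, 1.2, n)    # hypothetical wt% Ni
        X = np.column_stack([np.log10(flux), np.log10(fluence), cu, ni])
        return X, synthetic_hardening(flux, fluence, cu, ni)

    # "Training" data: lower flux up to intermediate fluence, plus very high flux.
    X_lo, y_lo = make_set(400, (1e10, 1e12), (1e18, 3e19))
    X_hi, y_hi = make_set(200, (1e13, 1e14), (1e18, 1e20))
    X_train = np.vstack([X_lo, X_hi])
    y_train = np.concatenate([y_lo, y_hi])

    # "Extrapolation" target: intermediate flux, high fluence.
    X_test, y_test = make_set(100, (1e12, 1e13), (5e19, 1e20))

    model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"MAE on extrapolated conditions: {mae:.1f} MPa")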
  2. Accurate and comprehensive material databases extracted from research papers are crucial for materials science and engineering, but their development requires significant human effort. With large language models (LLMs) transforming the way humans interact with text, LLMs provide an opportunity to revolutionize data extraction. In this study, we demonstrate a simple and efficient method for extracting materials data from full-text research papers, leveraging the capabilities of LLMs combined with human supervision. This approach is particularly suitable for mid-sized databases and requires minimal to no coding or prior knowledge about the extracted property. It offers high recall and nearly perfect precision in the resulting database. The method is easily adaptable to new and superior language models, ensuring continued utility. We show this by evaluating and comparing its performance on GPT-3 and GPT-3.5/4 (which underlie ChatGPT), as well as free alternatives such as BART and DeBERTaV3. We provide a detailed analysis of the method’s performance in extracting sentences containing bulk modulus data, achieving up to 90% precision at 96% recall, depending on the amount of human effort involved. We further demonstrate the method’s broader effectiveness by developing a database of critical cooling rates for metallic glasses over twice the size of previous human-curated databases.
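    As an illustration of the kind of screening step such a method could use, the sketch below flags sentences that may report a bulk modulus value via zero-shot classification with BART-MNLI, one of the free alternatives named in the abstract, with flagged sentences left for human review. The candidate labels, the 0.7 score threshold, and the example sentences are assumptions for illustration; the paper's actual prompts and workflow may differ.

    # Hedged sketch of a sentence-screening step for property extraction.
    # Uses zero-shot classification with facebook/bart-large-mnli; labels,
    # threshold, and example sentences are illustrative assumptions.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    candidate_labels = [
        "reports a numerical bulk modulus value",
        "does not report a bulk modulus value",
    ]

    sentences = [
        "The bulk modulus of the alloy was measured to be 172 GPa at room temperature.",
        "Bulk metallic glasses have attracted attention for structural applications.",
    ]

    for sentence in sentences:
        result = classifier(sentence, candidate_labels)
        top_label, top_score = result["labels"][0], result["scores"][0]
        # Keep the sentence for human review only if the "reports a value" label
        # wins with a score above an arbitrary confidence threshold.
        keep = top_label == candidate_labels[0] and top_score > 0.7
        print(f"{'KEEP' if keep else 'SKIP'} ({top_score:.2f}): {sentence}")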