

Title: The materials tetrahedron has a “digital twin”
Abstract

For over three decades, the materials tetrahedron has captured the essence of materials science and engineering with its interdependent elements of processing, structure, properties, and performance. As modern computational and statistical techniques usher in a new paradigm of data-intensive scientific research and discovery, the rate at which the field of materials science and engineering capitalizes on these advances hinges on collaboration between numerous stakeholders. Here, we provide a contemporary extension to the classic materials tetrahedron with a dual framework—adapted from the concept of a “digital twin”—which offers a nexus joining materials science and information science. We believe this high-level framework, the materials–information twin tetrahedra (MITT), will provide stakeholders with a platform to contextualize, translate, and direct efforts in the pursuit of propelling materials science and technology forward.

Impact statement

This article provides a contemporary reimagination of the classic materials tetrahedron by augmenting it with parallel notions from information science. Since the materials tetrahedron (processing, structure, properties, performance) made its debut, advances in computational and informational tools have transformed the landscape and outlook of materials research and development. Drawing inspiration from the notion of a digital twin, the materials–information twin tetrahedra (MITT) framework captures a holistic perspective of materials science and engineering in the presence of modern digital tools and infrastructures. This high-level framework incorporates sustainability and FAIR data principles (Findable, Accessible, Interoperable, Reusable)—factors that recognize how systems impact and interact with other systems—in addition to the data and information flows that play a pivotal role in knowledge generation. The goal of the MITT framework is to give stakeholders from academia, industry, and government a communication tool for focusing efforts around the design, development, and deployment of materials in the years ahead.

Award ID(s): 1835677, 1835648
NSF-PAR ID: 10369591
Publisher / Repository: Cambridge University Press (CUP)
Journal Name: MRS Bulletin
Volume: 47
Issue: 4
ISSN: 0883-7694
Page Range / eLocation ID: p. 379–388
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Accelerating the design and development of new advanced materials is one of the priorities in modern materials science. These efforts depend critically on the development of comprehensive materials cyberinfrastructures that enable efficient data storage, management, sharing, and collaboration, as well as the integration of computational tools that help establish processing–structure–property relationships. In this contribution, we present the implementation of such computational tools in a cloud-based platform called BisQue (Kvilekval et al., Bioinformatics 26(4):554, 2010). We first describe the current state of BisQue as an open-source platform for multidisciplinary research in the cloud and its potential for 3D materials science. We then demonstrate how new computational tools, primarily aimed at processing–structure–property relationships, can be implemented in the system. Specifically, in this work, we develop a module for BisQue that enables microstructure-sensitive predictions of the effective yield strength of two-phase materials. Towards this end, we present an implementation of a computationally efficient data-driven model in the BisQue platform. The new module is available online (web address: https://bisque.ece.ucsb.edu/module_service/Composite_Strength/) and can be used from a web browser without any special software and with minimal computational requirements on the user end. The capabilities of the module for rapid property screening are demonstrated in case studies with two different methodologies, based on datasets containing 3D microstructure information from (i) synthetic generation and (ii) sampling of large 3D volumes obtained in experiments.
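    The abstract gives the module's public endpoint; as a rough sketch of how such a module might be invoked programmatically rather than through a browser, the Python snippet below posts a 3D microstructure file to that URL. Only the URL is taken from the abstract: the upload field name, authentication, and response handling are assumptions, so the actual BisQue module documentation should be consulted for real use.

    ```python
    # Hypothetical invocation of the BisQue Composite_Strength module over HTTP.
    # The endpoint URL comes from the abstract; the upload field name and the
    # response handling below are assumptions, not the documented BisQue API.
    import requests

    MODULE_URL = "https://bisque.ece.ucsb.edu/module_service/Composite_Strength/"

    def predict_yield_strength(volume_path: str) -> str:
        """Upload a 3D microstructure volume and return the raw module response."""
        with open(volume_path, "rb") as f:
            resp = requests.post(MODULE_URL, files={"microstructure": f})  # assumed field name
        resp.raise_for_status()
        return resp.text
    ```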

     
  2. Abstract

    This project is funded by the US National Science Foundation (NSF) through its NSF RAPID program under the title "Modeling Corona Spread Using Big Data Analytics." The project is a joint effort between the Department of Computer & Electrical Engineering and Computer Science at FAU and a research group from LexisNexis Risk Solutions. The novel coronavirus Covid-19 originated in China in early December 2019 and has rapidly spread to many countries around the globe, with the number of confirmed cases increasing every day. Covid-19 is officially a pandemic. It is a novel infection with serious clinical manifestations, including death, and it has reached at least 124 countries and territories. Although the ultimate course and impact of Covid-19 are uncertain, it is not merely possible but likely that the disease will produce enough severe illness to overwhelm the worldwide health care infrastructure. Emerging viral pandemics can place extraordinary and sustained demands on public health and health systems and on providers of essential community services. Modeling the spread of the Covid-19 pandemic is challenging, but there are data that can be used to project resource demands. Estimates of the reproductive number (R) of SARS-CoV-2 show that at the beginning of the epidemic, each infected person spread the virus to at least two others, on average (Emanuel et al. in N Engl J Med. 2020; Livingston and Bucher in JAMA 323(14):1335, 2020). A conservatively low estimate is that 5% of the population could become infected within 3 months. Preliminary data from China and Italy regarding the distribution of case severity and fatality vary widely (Wu and McGoogan in JAMA 323(13):1239–42, 2020). A recent large-scale analysis from China suggests that 80% of those infected are either asymptomatic or have mild symptoms, a finding that implies that demand for advanced medical services might apply to only 20% of the total infected. Of patients infected with Covid-19, about 15% have severe illness and 5% have critical illness (Emanuel et al. in N Engl J Med. 2020). Overall mortality ranges from 0.25% to as high as 3.0% (Emanuel et al. in N Engl J Med. 2020; Wilson et al. in Emerg Infect Dis 26(6):1339, 2020). Case fatality rates are much higher for vulnerable populations, such as persons over the age of 80 years (>14%) and those with coexisting conditions (10% for those with cardiovascular disease and 7% for those with diabetes) (Emanuel et al. in N Engl J Med. 2020). Overall, Covid-19 is substantially deadlier than seasonal influenza, which has a mortality of roughly 0.1%. Public health efforts depend heavily on predicting how diseases such as those caused by Covid-19 spread across the globe. During the early days of a new outbreak, when reliable data are still scarce, researchers turn to mathematical models that can predict where people who could be infected are going and how likely they are to bring the disease with them. These computational methods use known statistical equations that calculate the probability of individuals transmitting the illness. Modern computational power allows these models to quickly incorporate multiple inputs, such as a given disease's ability to pass from person to person and the movement patterns of potentially infected people traveling by air and land. This process sometimes involves making assumptions about unknown factors, such as an individual's exact travel pattern.
    By plugging in different possible versions of each input, however, researchers can update the models as new information becomes available and compare their results to observed patterns for the illness. In this paper we describe the development of a model of Corona spread using innovative big data analytics techniques and tools. We leveraged our experience from research in modeling Ebola spread (Shaw et al. Modeling Ebola Spread and Using HPCC/KEL System. In: Big Data Technologies and Applications 2016 (pp. 347–385). Springer, Cham) to model Corona spread, obtain new results, and help reduce the number of Corona patients. We closely collaborated with LexisNexis, a leading US data analytics company and a member of our NSF I/UCRC for Advanced Knowledge Enablement. The lack of a comprehensive view and informative analysis of the status of the pandemic can also cause panic and instability within society. Our work proposes the HPCC Systems Covid-19 tracker, which provides a multi-level view of the pandemic with informative virus-spreading indicators in a timely manner. The system embeds a classical epidemiological model known as SIR together with spreading indicators based on a causal model. The data solution of the tracker is built on top of the Big Data processing platform HPCC Systems, from ingesting and tracking various data sources to fast delivery of the data to the public. The HPCC Systems Covid-19 tracker presents the Covid-19 data on a daily, weekly, and cumulative basis, from the global level down to the county level. It also provides statistical analysis for each level, such as new cases per 100,000 population. The primary analyses, such as Contagion Risk and Infection State, are based on a causal model with a seven-day sliding window. Our work has been released as a publicly available website and has attracted a great volume of traffic. The project is open-sourced and available on GitHub. The system was developed on the LexisNexis HPCC Systems platform, which is briefly described in the paper.
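    The tracker described above embeds the classical SIR model; a minimal, self-contained sketch of that model (not the HPCC Systems implementation) follows. The transmission and recovery rates are illustrative, chosen so that R0 = beta/gamma = 2.5, consistent with the early estimate of at least two secondary infections cited in the abstract.

    ```python
    # Minimal SIR model: susceptible, infected, and recovered fractions evolve
    # under transmission rate beta and recovery rate gamma. Parameter values
    # are illustrative only; this is not the HPCC Systems tracker code.
    import numpy as np
    from scipy.integrate import odeint

    def sir(y, t, beta, gamma):
        s, i, r = y
        return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

    t = np.linspace(0, 180, 181)          # days
    beta, gamma = 0.25, 0.1               # R0 = beta/gamma = 2.5 (illustrative)
    s, i, r = odeint(sir, [0.999, 0.001, 0.0], t, args=(beta, gamma)).T
    print(f"peak infected fraction: {i.max():.1%} on day {t[i.argmax()]:.0f}")
    ```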
  3. Abstract

    Modeling and simulation is transforming modern materials science, becoming an important tool for the discovery of new materials and material phenomena, for gaining insight into the processes that govern materials behavior, and, increasingly, for quantitative predictions that can be used as part of a design tool in full partnership with experimental synthesis and characterization. Modeling and simulation is the essential bridge from good science to good engineering, spanning from fundamental understanding of materials behavior to deliberate design of new materials technologies that leverage new properties and processes. This Roadmap presents a broad overview of the extensive impact computational modeling has had on materials science in the past few decades, and offers focused perspectives on where the path forward lies as this rapidly expanding field evolves to meet the challenges of the next few decades. It offers perspectives on advances ranging from phase-field methods for modeling mesoscale behavior and molecular dynamics methods for deducing the fundamental atomic-scale dynamical processes governing materials response, to the challenges of interdisciplinary research that tackles complex materials problems in which the governing phenomena span different scales of materials behavior and require multiscale approaches. The shift from understanding fundamental materials behavior to developing quantitative approaches that explain and predict experimental observations requires advances in simulation methods and practice to ensure reproducibility and reliability, and interaction with a computational ecosystem that integrates new theory development, innovative applications, and an increasingly unified software and computational infrastructure that takes advantage of ever more powerful computational methods and hardware.
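    As a concrete taste of one method the Roadmap surveys, the sketch below integrates the Allen-Cahn phase-field equation in one dimension with an explicit Euler scheme. The grid size, mobility, gradient coefficient, and double-well potential are illustrative choices for demonstration, not parameters from any system discussed in the Roadmap.

    ```python
    # Minimal 1D Allen-Cahn phase-field sketch: d(phi)/dt = M*(kappa*lap(phi) - f'(phi))
    # with double-well free energy f = (phi^2 - 1)^2 / 4, so f'(phi) = phi^3 - phi.
    # All parameters are illustrative; dt*M*kappa/dx^2 = 0.1 keeps the scheme stable.
    import numpy as np

    N, dx, dt, M, kappa = 256, 1.0, 0.1, 1.0, 1.0
    phi = np.random.default_rng(0).uniform(-0.1, 0.1, N)  # noisy initial order parameter

    for _ in range(5000):
        lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2  # periodic Laplacian
        dfdphi = phi**3 - phi                                          # double-well derivative
        phi += dt * M * (kappa * lap - dfdphi)                         # explicit Euler update

    print(f"order parameter range after coarsening: [{phi.min():.2f}, {phi.max():.2f}]")
    ```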

     
  4. Abstract

    Why the new findings matter

    The process of teaching and learning is complex, multifaceted and dynamic. This paper contributes a seminal resource to highlight the digitisation of the educational sciences by demonstrating how new machine learning methods can be effectively and reliably used in research, education and practical application.

    Implications for educational researchers and policy makers

    The progressing digitisation of societies around the globe and the impact of the SARS‐COV‐2 pandemic have highlighted the vulnerabilities and shortcomings of educational systems. These developments have shown the necessity of providing effective educational processes that can support sometimes-overwhelmed teachers in digitally imparting knowledge, as many governments and policy makers now plan. Educational scientists, corporate partners and stakeholders can make use of machine learning techniques to develop advanced, scalable educational processes that account for the individual needs of learners and that can complement and support existing learning infrastructure. The proper use of machine learning methods can contribute essential applications to the educational sciences, such as (semi-)automated assessments, algorithmic grading, personalised feedback and adaptive learning approaches. However, these promises are strongly tied to at least a basic understanding of the concepts of machine learning and a degree of data literacy, which must become the standard in education and the educational sciences.

    Demonstrating both the promises and the challenges inherent in the collection and analysis of large-scale educational data with machine learning, this paper covers the essential topics that their application requires and provides easy-to-follow resources and code to facilitate the process of adoption.
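    To make one of the applications listed above concrete, the toy sketch below trains a (semi-)automated short-answer grader as a simple text classifier. The answers, labels, and model choice are placeholders invented for illustration; they are not the paper's data, code, or recommended pipeline.

    ```python
    # Toy (semi-)automated assessment: a TF-IDF + logistic-regression classifier
    # that flags short answers as acceptable or not. Data and labels are
    # invented placeholders, not from the paper.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    answers = [
        "photosynthesis converts light into chemical energy",
        "light energy is stored as glucose",
        "plants breathe in oxygen at night",
        "the sun heats the plant so it grows",
    ]
    labels = [1, 1, 0, 0]  # 1 = acceptable answer (toy labels)

    grader = make_pipeline(TfidfVectorizer(), LogisticRegression())
    grader.fit(answers, labels)
    print(grader.predict(["chlorophyll stores light energy as sugar"]))
    ```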

     