Accelerating the design and development of new advanced materials is one of the priorities in modern materials science. These efforts are critically dependent on the development of comprehensive materials cyberinfrastructures which enable efficient data storage, management, sharing, and collaboration as well as integration of computational tools that help establish processing–structure–property relationships. In this contribution, we present the implementation of such computational tools in a cloud-based platform called BisQue (Kvilekval et al., Bioinformatics 26(4):554, 2010). We first describe the current state of BisQue as an open-source platform for multidisciplinary research in the cloud and its potential for 3D materials science. We then demonstrate how new computational tools, primarily aimed at processing–structure–property relationships, can be implemented into the system. Specifically, in this work, we develop a module for BisQue that enables microstructure-sensitive predictions of the effective yield strength of two-phase materials. Towards this end, we present an implementation of a computationally efficient data-driven model into the BisQue platform. The new module is made available online (web address:
For over three decades, the materials tetrahedron has captured the essence of materials science and engineering with its interdependent elements of processing, structure, properties, and performance. As modern computational and statistical techniques usher in a new paradigm of data-intensive scientific research and discovery, the rate at which the field of materials science and engineering capitalizes on these advances hinges on collaboration between numerous stakeholders. Here, we provide a contemporary extension to the classic materials tetrahedron with a dual framework—adapted from the concept of a “digital twin”—which offers a nexus joining materials science and information science. We believe this high-level framework, the materials–information twin tetrahedra (MITT), will provide stakeholders with a platform to contextualize, translate, and direct efforts in the pursuit of propelling materials science and technology forward.
This article provides a contemporary reimagination of the classic materials tetrahedron by augmenting it with parallel notions from information science. Since the materials tetrahedron (processing, structure, properties, performance) first appeared, advances in computational and informational tools have transformed the landscape and outlook of materials research and development. Drawing inspiration from the notion of a digital twin, the materials–information twin tetrahedra (MITT) framework captures a holistic perspective of materials science and engineering in the presence of modern digital tools and infrastructures. This high-level framework incorporates sustainability and FAIR data principles (Findable, Accessible, Interoperable, Reusable)—factors that recognize how systems impact and interact with other systems—in addition to the data and information flows that play a pivotal role in knowledge generation. The goal of the MITT framework is to give stakeholders from academia, industry, and government a communication tool for focusing efforts around the design, development, and deployment of materials in the years ahead.
- NSF-PAR ID:
- 10369591
- Publisher / Repository:
- Cambridge University Press (CUP)
- Date Published:
- Journal Name:
- MRS Bulletin
- Volume:
- 47
- Issue:
- 4
- ISSN:
- 0883-7694
- Page Range / eLocation ID:
- p. 379-388
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Abstract https://bisque.ece.ucsb.edu/module_service/Composite_Strength/) and can be used from a web browser without any special software and with minimal computational requirements on the user end. The capabilities of the module for rapid property screening are demonstrated in case studies with two different methodologies based on datasets containing 3D microstructure information from (i) synthetic generation and (ii) sampling large 3D volumes obtained in experiments.
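The abstract above does not spell out the data-driven strength model behind the module; as a rough, hypothetical illustration of a microstructure-sensitive estimate for a two-phase material, the classical Voigt (rule-of-mixtures) and Reuss bounds below bracket the effective yield strength from phase volume fractions alone. The function name and all numeric values are placeholders, not part of the BisQue module.

```python
def voigt_reuss_bounds(vf_hard, sigma_hard, sigma_soft):
    """Upper (Voigt) and lower (Reuss) bounds on effective yield strength.

    vf_hard    : volume fraction of the hard phase (0..1)
    sigma_hard : yield strength of the hard phase
    sigma_soft : yield strength of the soft phase
    """
    vf_soft = 1.0 - vf_hard
    # Voigt bound: volume-fraction-weighted arithmetic mean (iso-strain)
    voigt = vf_hard * sigma_hard + vf_soft * sigma_soft
    # Reuss bound: volume-fraction-weighted harmonic mean (iso-stress)
    reuss = 1.0 / (vf_hard / sigma_hard + vf_soft / sigma_soft)
    return reuss, voigt

if __name__ == "__main__":
    # Hypothetical two-phase material: 30 % hard phase (900 MPa), 70 % soft (300 MPa)
    lo, hi = voigt_reuss_bounds(vf_hard=0.3, sigma_hard=900.0, sigma_soft=300.0)
    print(f"effective yield strength between {lo:.0f} and {hi:.0f} MPa")
```

A data-driven model such as the one in the module would predict a value between such bounds using the full 3D microstructure rather than volume fractions alone.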
Abstract This project is funded by the US National Science Foundation (NSF) through their NSF RAPID program under the title “Modeling Corona Spread Using Big Data Analytics.” The project is a joint effort between the Department of Computer & Electrical Engineering and Computer Science at FAU and a research group from LexisNexis Risk Solutions. The novel coronavirus Covid-19 originated in China in early December 2019 and has rapidly spread to many countries around the globe, with the number of confirmed cases increasing every day. Covid-19 is officially a pandemic. It is a novel infection with serious clinical manifestations, including death, and it has reached at least 124 countries and territories. Although the ultimate course and impact of Covid-19 are uncertain, it is not merely possible but likely that the disease will produce enough severe illness to overwhelm the worldwide health care infrastructure. Emerging viral pandemics can place extraordinary and sustained demands on public health and health systems and on providers of essential community services. Modeling the Covid-19 pandemic spread is challenging, but there are data that can be used to project resource demands. Estimates of the reproductive number (R) of SARS-CoV-2 show that at the beginning of the epidemic, each infected person spreads the virus to at least two others, on average (Emanuel et al. in N Engl J Med. 2020, Livingston and Bucher in JAMA 323(14):1335, 2020). A conservatively low estimate is that 5 % of the population could become infected within 3 months. Preliminary data from China and Italy regarding the distribution of case severity and fatality vary widely (Wu and McGoogan in JAMA 323(13):1239–42, 2020). A recent large-scale analysis from China suggests that 80 % of those infected either are asymptomatic or have mild symptoms, a finding that implies that demand for advanced medical services might apply to only 20 % of the total infected.
Of patients infected with Covid-19, about 15 % have severe illness and 5 % have critical illness (Emanuel et al. in N Engl J Med. 2020). Overall, mortality ranges from 0.25 % to as high as 3.0 % (Emanuel et al. in N Engl J Med. 2020, Wilson et al. in Emerg Infect Dis 26(6):1339, 2020). Case fatality rates are much higher for vulnerable populations, such as persons over the age of 80 years (> 14 %) and those with coexisting conditions (10 % for those with cardiovascular disease and 7 % for those with diabetes) (Emanuel et al. in N Engl J Med. 2020). Overall, Covid-19 is substantially deadlier than seasonal influenza, which has a mortality of roughly 0.1 %. Public health efforts depend heavily on predicting how diseases such as those caused by Covid-19 spread across the globe. During the early days of a new outbreak, when reliable data are still scarce, researchers turn to mathematical models that can predict where people who could be infected are going and how likely they are to bring the disease with them. These computational methods use known statistical equations that calculate the probability of individuals transmitting the illness. Modern computational power allows these models to quickly incorporate multiple inputs, such as a given disease’s ability to pass from person to person and the movement patterns of potentially infected people traveling by air and land. This process sometimes involves making assumptions about unknown factors, such as an individual’s exact travel pattern. By plugging in different possible versions of each input, however, researchers can update the models as new information becomes available and compare their results to observed patterns for the illness.
In this paper, we describe the development of a model of Corona spread using innovative big data analytics techniques and tools. We leveraged our experience from research in modeling Ebola spread (Shaw et al. Modeling Ebola Spread and Using HPCC/KEL System. In: Big Data Technologies and Applications 2016 (pp. 347-385). Springer, Cham) to model Corona spread, obtain new results, and help reduce the number of Corona patients. We closely collaborated with LexisNexis, a leading US data analytics company and a member of our NSF I/UCRC for Advanced Knowledge Enablement. The lack of a comprehensive view and informative analysis of the status of the pandemic can also cause panic and instability within society. Our work proposes the HPCC Systems Covid-19 tracker, which provides a multi-level view of the pandemic with informative virus-spreading indicators in a timely manner. The system embeds a classical epidemiological model known as SIR and spreading indicators based on a causal model. The data solution of the tracker is built on top of the Big Data processing platform HPCC Systems, from the ingestion and tracking of various data sources to fast delivery of the data to the public. The HPCC Systems Covid-19 tracker presents the Covid-19 data on a daily, weekly, and cumulative basis, from the global level down to the county level. It also provides statistical analysis at each level, such as new cases per 100,000 population. The primary analyses, such as Contagion Risk and Infection State, are based on a causal model with a seven-day sliding window. Our work has been released as a publicly available website and has attracted a great volume of traffic. The project is open-sourced and available on GitHub. The system was developed on LexisNexis HPCC Systems, which is briefly described in the paper.
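The tracker's SIR component is not detailed in the abstract; the sketch below shows the classical SIR model it refers to, integrated with a simple Euler scheme. The parameters and initial conditions are illustrative placeholders, chosen so that R0 = beta/gamma = 2, matching the early estimate of the reproductive number cited above; they are not values from the HPCC Systems tracker.

```python
def sir_simulate(beta=0.4, gamma=0.2, i0=1e-4, days=180, dt=0.1):
    """Integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I.

    S, I, R are fractions of the population (susceptible, infected, recovered).
    Returns the final state and the peak infected fraction.
    """
    s, i, r = 1.0 - i0, i0, 0.0
    peak_i = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # new infections this step (S -> I)
        new_rec = gamma * i * dt      # new recoveries this step (I -> R)
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak_i = max(peak_i, i)
    return s, i, r, peak_i

if __name__ == "__main__":
    s, i, r, peak = sir_simulate()
    print(f"R0 = {0.4 / 0.2:.1f}, peak infected fraction = {peak:.3f}")
```

With R0 = 2, the model predicts a substantial fraction of the population infected at the epidemic peak, which is the kind of resource-demand projection the abstract describes.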
-
The biodesign studio: Constructions and reflections of high school youth on making with living media
Abstract Most constructionist efforts have focused on supporting learners with tools for designing with digital and/or physical media. Recent developments in life sciences now allow K‐12 learners to design with living materials, or to biodesign. In this paper, we report on the development of a biodesign studio activity called biocakes wherein teams of high school youth genetically modified a yeast strain to bake a nutrient fortified food product. Using a qualitative deductive analysis of classroom observations, interviews and projects, we examined high school youth experiences and reflections of these activities to answer three research questions: (1) What kind of artefacts did youth make with living media? (2) How did youth engage in biodesign? and (3) What did youth have to say about their biodesign experiences? We discuss how our analyses of biodesign applications highlight the importance of assembly practices, social engagement and imaginative design for constructionist learning. These insights provide compelling examples for designing with living media in K‐12 education.
Practitioner notes What is already known about this topic?
Constructionist perspectives shape important ideas about what we know about science learning, and notably in computer science fields.
One widely taken up example includes making, where learners use crafts to make interactive computational objects.
In life science, constructionism also provides insights about learning using digital media to model biological systems or interact with living materials.
What this paper adds?
This paper extends these perspectives by examining production and engagement when learners construct with living materials—an approach that has only recently become possible with the development of accessible wet lab tools.
We frame learning activities as a studio model—one that emphasised application design, iteration and critique—to better assess the roles assembly, construction and speculative design play in production that uses living materials.
Our findings suggest that assembly is important for creating accessible points of entry to complex biological fabrication processes.
We also find that speculative design provides an opportunity for learners to extend their existing abilities beyond what is otherwise available given their expertise or access to resources, and thus expansively explore related ideas.
Implications for practice and/or policy
Assembly and speculative design have important places in constructionist‐driven production with living materials and could, therefore, be leveraged in practice.
-
Abstract Modeling and simulation is transforming modern materials science, becoming an important tool for the discovery of new materials and material phenomena, for gaining insight into the processes that govern materials behavior, and, increasingly, for quantitative predictions that can be used as part of a design tool in full partnership with experimental synthesis and characterization. Modeling and simulation is the essential bridge from good science to good engineering, spanning from fundamental understanding of materials behavior to deliberate design of new materials technologies leveraging new properties and processes. This Roadmap presents a broad overview of the extensive impact computational modeling has had in materials science in the past few decades, and offers focused perspectives on where the path forward lies as this rapidly expanding field evolves to meet the challenges of the next few decades. The Roadmap offers perspectives on advances ranging from phase field methods for modeling mesoscale behavior and molecular dynamics methods for deducing the fundamental atomic-scale dynamical processes governing materials response, to the challenges of interdisciplinary research on complex materials problems whose governing phenomena span different scales of materials behavior and therefore require multiscale approaches. The shift from understanding fundamental materials behavior to developing quantitative approaches that explain and predict experimental observations requires advances in the methods and practice of simulation, for reproducibility and reliability, and interaction with a computational ecosystem that integrates new theory development, innovative applications, and an increasingly unified software and computational infrastructure that takes advantage of ever more powerful computational methods and computing hardware.
-
Abstract Machine learning (ML) provides a powerful framework for the analysis of high‐dimensional datasets by modelling complex relationships, often encountered in modern data with many variables, cases and potentially non‐linear effects. The impact of ML methods on research and practical applications in the educational sciences is still limited, but continuously grows, as larger and more complex datasets become available through massive open online courses (MOOCs) and large‐scale investigations. The educational sciences are at a crucial pivot point, because of the anticipated impact ML methods hold for the field. To provide educational researchers with an elaborate introduction to the topic, we provide an instructional summary of the opportunities and challenges of ML for the educational sciences, show how a look at related disciplines can help learning from their experiences, and argue for a philosophical shift in model evaluation. We demonstrate how the overall quality of data analysis in educational research can benefit from these methods and show how ML can play a decisive role in the validation of empirical models. Specifically, we (1) provide an overview of the types of data suitable for ML and (2) give practical advice for the application of ML methods. In each section, we provide analytical examples and reproducible R code. Also, we provide an extensive Appendix on ML‐based applications for education. This instructional summary will help educational scientists and practitioners to prepare for the promises and threats that come with the shift towards digitisation and large‐scale assessment in education.
Context and implications Rationale for this study In 2020, the worldwide SARS‐COV‐2 pandemic forced the educational sciences to perform a rapid paradigm shift with classrooms going online around the world—a hardly novel but now strongly catalysed development. In the context of data‐driven education, this paper demonstrates that the widespread adoption of machine learning techniques is central for the educational sciences and shows how these methods will become crucial tools in the collection and analysis of data and in concrete educational applications. Helping to leverage the opportunities and to avoid the common pitfalls of machine learning, this paper provides educators with the theoretical, conceptual and practical essentials.
Why the new findings matter The process of teaching and learning is complex, multifaceted and dynamic. This paper contributes a seminal resource to highlight the digitisation of the educational sciences by demonstrating how new machine learning methods can be effectively and reliably used in research, education and practical application.
Implications for educational researchers and policy makers The progressing digitisation of societies around the globe and the impact of the SARS‐COV‐2 pandemic have highlighted the vulnerabilities and shortcomings of educational systems. These developments have shown the necessity of providing effective educational processes that can support sometimes overwhelmed teachers in digitally imparting knowledge, as many governments and policy makers plan. Educational scientists, corporate partners and stakeholders can make use of machine learning techniques to develop advanced, scalable educational processes that account for individual needs of learners and that can complement and support existing learning infrastructure. The proper use of machine learning methods can contribute essential applications to the educational sciences, such as (semi‐)automated assessments, algorithmic grading, personalised feedback and adaptive learning approaches. However, these promises are strongly tied to at least a basic understanding of the concepts of machine learning and a degree of data literacy, which has to become the standard in education and the educational sciences.
Demonstrating both the promises and the challenges that are inherent to the collection and the analysis of large educational data with machine learning, this paper covers the essential topics that their application requires and provides easy‐to‐follow resources and code to facilitate the process of adoption.
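The paper's own worked examples use R; as a dependency-free illustration of the out-of-sample model evaluation the abstract advocates, the Python sketch below estimates a toy model's predictive error with k-fold cross-validation. The model, data, and function names are placeholders for illustration, not material from the paper.

```python
def kfold_mse(xs, ys, fit, predict, k=5):
    """Mean squared prediction error estimated by k-fold cross-validation."""
    n = len(xs)
    fold_errors = []
    for fold in range(k):
        test_idx = set(range(fold, n, k))  # every k-th point held out
        train = [(x, y) for j, (x, y) in enumerate(zip(xs, ys)) if j not in test_idx]
        model = fit([x for x, _ in train], [y for _, y in train])
        errs = [(predict(model, xs[j]) - ys[j]) ** 2 for j in sorted(test_idx)]
        fold_errors.append(sum(errs) / len(errs))
    return sum(fold_errors) / k

# Toy model: ordinary least squares fit of y = a*x + b.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def predict_line(model, x):
    a, b = model
    return a * x + b

if __name__ == "__main__":
    xs = list(range(20))
    ys = [2.0 * x + 1.0 for x in xs]  # noiseless linear data
    print(f"cross-validated MSE: {kfold_mse(xs, ys, fit_line, predict_line):.6f}")
```

Judging a model by its cross-validated error on held-out data, rather than by in-sample fit alone, is the shift in evaluation practice the abstract argues for.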