
Title: Deep learning predicts path-dependent plasticity
Plasticity theory aims to describe the yield loci and work hardening of a material under general deformation states. Most of its complexity arises from the nontrivial dependence of the yield loci on the complete strain history of a material and its microstructure. This motivated 3 ingenious simplifications that underpinned a century of developments in this field: 1) yield criteria describing the location of the yield loci; 2) associative or nonassociative flow rules defining the direction of plastic flow; and 3) effective stress–strain laws consistent with the plastic work equivalence principle. However, 2 key complications arise from these simplifications. First, finding equations that describe these 3 assumptions for materials with complex microstructures is not trivial. Second, yield surface evolution needs to be traced iteratively, i.e., through a return mapping algorithm. Here, we show that these assumptions are not needed in the context of sequence learning when using recurrent neural networks, circumventing the above-mentioned complications. This work offers an alternative to currently established plasticity formulations by providing the foundations for finding history- and microstructure-dependent constitutive models through deep learning.
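
A minimal sketch of this sequence-learning idea follows, using an assumed PyTorch GRU that maps a strain history to the corresponding stress history. The architecture, hyperparameters, and placeholder tensors are illustrative assumptions, not the authors' published setup.

    import torch
    import torch.nn as nn

    class RNNConstitutiveModel(nn.Module):
        """Maps a strain history (batch, time, 6) to a stress history (batch, time, 6)."""
        def __init__(self, n_strain=6, n_stress=6, hidden=64):
            super().__init__()
            self.rnn = nn.GRU(n_strain, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_stress)

        def forward(self, strain_seq):
            h, _ = self.rnn(strain_seq)  # hidden state accumulates the deformation history
            return self.head(h)

    model = RNNConstitutiveModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    # Placeholder loading paths; in practice these come from simulations or experiments.
    strain_paths = torch.randn(32, 100, 6)
    stress_paths = torch.randn(32, 100, 6)
    for epoch in range(10):
        opt.zero_grad()
        loss = loss_fn(model(strain_paths), stress_paths)
        loss.backward()
        opt.step()

Note that no yield criterion, flow rule, or return mapping appears anywhere in this sketch: the recurrent hidden state plays the role of the internal history variables that those constructs normally encode.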
Award ID(s):
1646592
NSF-PAR ID:
10129616
Journal Name:
Proceedings of the National Academy of Sciences
Volume:
116
Issue:
52
Page Range or eLocation-ID:
26414–26420
ISSN:
0027-8424
Sponsoring Org:
National Science Foundation
More Like this
  1. Conventionally, neural network constitutive laws for path-dependent elasto-plastic solids are trained via supervised learning performed on recurrent neural networks, with the time history of strain as input and the stress as output. However, training a neural network to replicate path-dependent constitutive responses requires significantly more data due to the path dependence. This demand for diverse and abundant accurate data, as well as the lack of interpretability to guide the data generation process, could become major roadblocks for engineering applications. In this work, we attempt to simplify these training processes and improve the interpretability of the trained models by breaking down the training of material models into multiple supervised machine learning programs for elasticity, initial yielding, and hardening laws that can be conducted sequentially. To predict the pressure sensitivity and rate dependence of the plastic responses, we reformulate the Hamilton-Jacobi equation such that the yield function is parametrized in the product space spanned by the principal stresses, the accumulated plastic strain, and time. To test the versatility of the neural network meta-modeling framework, we conduct multiple numerical experiments where neural networks are trained and validated against (1) data generated from known benchmark models, (2) data obtained from physical experiments, and (3) data inferred from homogenizing sub-scale direct numerical simulations of microstructures. The neural network model is also incorporated into an offline FFT-FEM model to improve the efficiency of the multiscale calculations.
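
As one way to picture the yield-function parametrization described in item 1, the sketch below trains a small network as a level set f over (principal stresses, accumulated plastic strain, time), with f < 0 elastic and f > 0 plastic. The network shape, hinge-style loss, and placeholder labels are assumptions for illustration; the Hamilton-Jacobi reformulation itself is not reproduced here.

    import torch
    import torch.nn as nn

    # Hypothetical level-set yield function f(sigma1, sigma2, sigma3, eps_p, t):
    # f < 0 inside the elastic domain, f = 0 on the yield surface.
    yield_net = nn.Sequential(
        nn.Linear(5, 64), nn.Tanh(),
        nn.Linear(64, 64), nn.Tanh(),
        nn.Linear(64, 1),
    )

    # Placeholder labeled states: y = +1 for plastic states, -1 for elastic ones.
    x = torch.randn(256, 5)
    y = torch.sign(torch.randn(256, 1))

    opt = torch.optim.Adam(yield_net.parameters(), lr=1e-3)
    for step in range(200):
        opt.zero_grad()
        # Hinge loss pushes f to the correct sign on each labeled state.
        loss = torch.relu(1.0 - y * yield_net(x)).mean()
        loss.backward()
        opt.step()

Because the yield surface is learned separately from the elasticity and hardening laws, each piece of the trained model remains individually inspectable, which is the interpretability argument the abstract makes.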
  2. This paper is devoted to the development of a continuum theory for materials having granular microstructure and accounting for some dissipative phenomena like damage and plasticity. The continuum description is constructed by means of purely mechanical concepts, assuming expressions of elastic and dissipation energies as well as postulating a hemi-variational principle, without incorporating any additional postulate like flow rules. Granular micromechanics is connected kinematically to the continuum scale through Piola's ansatz. Mechanically meaningful objective kinematic descriptors aimed at accounting for grain-grain relative displacements in finite deformations are proposed. Karush-Kuhn-Tucker (KKT) type conditions, providing evolution equations for damage and plastic variables associated with grain-grain interactions, are derived solely from the fundamental postulates. Numerical experiments have been performed to investigate the applicability of the model. Cyclic loading-unloading histories have been considered to elucidate the material-hysteretic features of the continuum, which emerge from simple grain-grain interactions. We also assess the competition between damage and plasticity, each having an effect on the other. Further, the evolution of the load-free shape is shown not only to assess the plastic behavior, but also to make tangible the point that, in the proposed approach, plastic strain is found to be intrinsically compatible with the existence of a placement function.
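
To make the KKT structure in item 2 tangible, here is a deliberately simplified 1D sketch of a single grain-grain interaction with coupled plasticity and damage, driven through a cyclic displacement history. The stiffness, thresholds, and explicit damage-growth rule are placeholder assumptions; the paper derives its evolution equations from a hemi-variational principle rather than from hard-coded updates like these.

    import numpy as np

    # Illustrative parameters: stiffness, yield force, hardening modulus, damage threshold.
    k, f_y, H, Y_c = 1.0, 0.5, 0.2, 0.3

    def update(u, u_p, d, alpha):
        f_trial = (1.0 - d) * k * (u - u_p)   # elastic trial force
        g = abs(f_trial) - (f_y + H * alpha)  # yield function; KKT: g <= 0, dlam >= 0, dlam*g = 0
        if g > 0.0:                           # plastic step: return to the yield surface
            dlam = g / ((1.0 - d) * k + H)
            u_p += dlam * np.sign(f_trial)
            alpha += dlam
        Y = 0.5 * k * (u - u_p) ** 2          # elastic energy release rate driving damage
        if Y > Y_c:                           # damage condition, also of KKT type
            d = min(1.0, d + 0.1 * (Y - Y_c))  # crude explicit damage growth
        return (1.0 - d) * k * (u - u_p), u_p, d, alpha

    # Cyclic loading-unloading history, as in the paper's numerical experiments.
    u_p, d, alpha = 0.0, 0.0, 0.0
    for u in np.concatenate([np.linspace(0, 1, 20), np.linspace(1, -1, 40)]):
        force, u_p, d, alpha = update(u, u_p, d, alpha)

Even this toy interaction produces hysteresis loops and a shifted load-free shape once u_p is nonzero, which is the qualitative behavior the paper recovers at the continuum scale from assemblies of such interactions.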
  3. The PM4Silt plasticity model for representing low-plasticity silts and clays in geotechnical earthquake engineering applications is presented herein. The PM4Silt model builds on the framework of the stress-ratio controlled, critical state compatible, bounding surface plasticity PM4Sand model (version 3) described in Boulanger and Ziotopoulou (2015) and Ziotopoulou and Boulanger (2016). Modifications to the model were developed and implemented to improve its ability to approximate undrained monotonic and cyclic loading responses of low-plasticity silts and clays, as opposed to those for purely nonplastic silts or sands. Emphasis was given to obtaining reasonable approximations of undrained monotonic shear strengths, undrained cyclic shear strengths, and shear modulus reduction and hysteretic damping responses across a range of initial static shear stress and overburden stress conditions. The model does not include a cap, and therefore is not suited for simulating consolidation settlements or strength evolution with consolidation stress history. The model is cast in terms of the state parameter relative to a linear critical state line in void ratio versus logarithm of mean effective stress. The primary input parameters are the undrained shear strength ratio (or undrained shear strength), the shear modulus coefficient, the contraction rate parameter, and an optional post-strong-shaking shear strength reduction factor. All secondary input parameters are assigned default values based on a generalized calibration. Secondary parameters that are most likely to warrant adjustment based on site-specific laboratory test data include the shear modulus exponent, plastic modulus coefficient (adjusts modulus reduction with shear strain), bounding stress ratio parameters (affect peak friction angles and undrained stress paths), fabric related parameters (affect rate of shear strain accumulation at larger strains and shape of stress-strain hysteresis loops), maximum excess pore pressure ratio, initial void ratio, and compressibility index. The model is coded as a user-defined material in a dynamic link library (DLL) for use with the commercial program FLAC 8.0 (Itasca 2016). The numerical implementation and DLL module are described. The behavior of the model is illustrated by simulations of element loading tests covering a range of conditions, including undrained monotonic and cyclic loading under a range of initial confining and shear stress conditions. The model is shown to provide reasonable approximations of behaviors important to many earthquake engineering applications and to be relatively easy to calibrate.
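
Since item 3 casts the model in terms of the state parameter relative to a linear critical state line in void ratio versus the logarithm of mean effective stress, that quantity is simple to compute. A minimal sketch follows, with all numeric values chosen for illustration rather than taken from PM4Silt's generalized calibration.

    import math

    def state_parameter(e, p, e_ref=0.9, lambda_cs=0.06, p_ref=100.0):
        """Relative state xi = e - e_cs for a linear CSL in e-ln(p') space.
        e: current void ratio; p: mean effective stress (same units as p_ref).
        Parameter values here are illustrative, not PM4Silt defaults."""
        e_cs = e_ref - lambda_cs * math.log(p / p_ref)  # critical-state void ratio at p
        return e - e_cs  # xi > 0: loose of critical (contractive tendency)

    print(state_parameter(e=0.85, p=200.0))  # approx -0.008 with these placeholder values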
  4. This work systematically investigates the texture-property linkages in hexagonal close-packed (hexagonal) materials using a three-dimensional computational crystal plasticity approach. Magnesium and its alloys are considered as a model system. We perform full-field, large-strain, micromechanical simulations using a wide range of surrogate textures that also sample several experimental datasets for a range of Mg alloys. The role of textural variability and the associated sensitivity of deformation mechanisms on the evolution of the macroscopic plastic anisotropy and strength asymmetry is mapped under uniaxial tensile and compressive loading along the material principal and off-axes orientations. To assess the role of crystallographic plastic anisotropy, two distinct material datasets are simulated, which represent pure and alloyed magnesium. The results provide insights into experimental observations reported for magnesium alloys over a range of materials textures. We discuss potential implications on the damage tolerance from the aggregate plastic anisotropy arising from intrinsic crystallographic and textural effects.
  5. Obeid, Iyad; Selesnick, Ivan; Picone, Joseph (Eds.)
    The Temple University Hospital Seizure Detection Corpus (TUSZ) [1] has been in distribution since April 2017. It is a subset of the TUH EEG Corpus (TUEG) [2] and the most frequently requested corpus from our 3,000+ subscribers. It was recently featured as the challenge task in the Neureka 2020 Epilepsy Challenge [3]. A summary of the development of the corpus is shown below in Table 1. The TUSZ Corpus is a fully annotated corpus, which means every seizure event that occurs within its files has been annotated. The data is selected from TUEG using a screening process that identifies files most likely to contain seizures [1]. Approximately 7% of the TUEG data contains a seizure event, so it is important that we triage TUEG for high-yield data. One hour of EEG data requires approximately one hour of human labor to complete annotation using the pipeline described below, so it is important from a financial standpoint that we accurately triage data. A summary of the labels being used to annotate the data is shown in Table 2. Certain standards are put into place to optimize the annotation process while not sacrificing consistency. Due to the nature of EEG recordings, some records start off with a segment of calibration. This portion of the EEG is instantly recognizable and transitions from what resembles lead artifact to a flat line on all the channels. For the sake of seizure annotation, the calibration is ignored, and no time is wasted on it. During the identification of seizure events, a hard “3 second rule” is used to determine whether two events should be combined into a single larger event. This greatly reduces the time that it takes to annotate a file with multiple events occurring in succession. In addition to the required minimum 3 second gap between seizures, part of our standard dictates that no seizure less than 3 seconds be annotated. (A minimal sketch of this merging rule appears after this abstract.) Although there is no universally accepted definition for how long a seizure must be, we find that it is difficult to distinguish with confidence between a seizure and burst suppression or other morphologically similar impressions when the event is only a couple of seconds long. This is due to several reasons, the most notable being the lack of evolution, which is oftentimes crucial for the determination of a seizure. After the EEG files have been triaged, a team of annotators at NEDC is provided with the files to begin data annotation. An example of an annotation is shown in Figure 1. A summary of the workflow for our annotation process is shown in Figure 2. Several passes are performed over the data to ensure the annotations are accurate. Each file undergoes three passes to ensure that no seizures were missed or misidentified. The first pass of TUSZ involves identifying which files contain seizures and annotating them using our annotation tool. The time it takes to fully annotate a file can vary drastically depending on the specific characteristics of each file; however, on average a file containing multiple seizures takes 7 minutes to fully annotate. This includes the time that it takes to read the patient report as well as traverse through the entire file. Once an event has been identified, the start and stop time for the seizure is stored in our annotation tool. This is done on a channel-by-channel basis, resulting in an accurate representation of the seizure spreading across different parts of the brain. Files that do not contain any seizures take approximately 3 minutes to complete.
Even though there is no annotation being made, the file is still carefully examined to make sure that nothing was overlooked. In addition to solely scrolling through a file from start to finish, a file is often examined through different lenses. Depending on the situation, low-pass filters are used, as well as increasing the amplitude of certain channels. These techniques are never used in isolation and are meant to further increase our confidence that nothing was missed. Once each file in a given set has been looked at once, the annotators start the review process. The reviewer checks a file and comments on any changes that they recommend. This takes about 3 minutes per seizure-containing file, which is significantly less time than the first pass. After each file has been commented on, the third pass commences. This step takes about 5 minutes per seizure file and requires the reviewer to accept or reject the changes that the second reviewer suggested. Since tangible changes are made to the annotation using the annotation tool, this step takes a bit longer than the previous one. Assuming 18% of the files contain seizures, a set of 1,000 files takes roughly 127 work hours to annotate. Before an annotator contributes to the data interpretation pipeline, they are trained for several weeks on previous datasets. A new annotator can be trained using data that resembles what they would see under normal circumstances. An additional benefit of using released data to train is that it serves as a means of constantly checking our work. If a trainee stumbles across an event that was not previously annotated, it is promptly added, and the data release is updated. It takes about three months to train an annotator to a point where their annotations can be trusted. Even though we carefully screen potential annotators during the hiring process, only about 25% of the annotators we hire survive more than one year doing this work. To ensure that the annotators are consistent in their annotations, the team conducts an interrater agreement evaluation periodically to ensure that there is a consensus within the team. The annotation standards are discussed in Ochal et al. [4]. An extended discussion of interrater agreement can be found in Shah et al. [5]. The most recent release of TUSZ, v1.5.2, represents our efforts to review the quality of the annotations for two challenges we hosted: an internal deep learning challenge at IBM [6] and the Neureka 2020 Epilepsy Challenge [3]. One of the biggest changes made to the annotations was the imposition of a stricter standard for determining the start and stop time of a seizure. Although evolution is still included in the annotations, the start times were altered to begin when the spike-wave pattern becomes distinct, as opposed to merely when the signal starts to shift from background. This cuts down on background that was mislabeled as a seizure. For seizure end times, all postictal slowing that had been included was removed. The recent release of v1.5.2 did not include any additional data files. Two EEG files were added because they were corrupted in v1.5.1 but were able to be retrieved for the latest release. The progression from v1.5.0 to v1.5.1 and later to v1.5.2 included the re-annotation of all of the EEG files in order to develop a confident dataset regarding seizure identification. Starting with v1.4.0, we have also developed a blind evaluation set that is withheld for use in competitions.
The annotation team is currently working on the next release for TUSZ, v1.6.0, which is expected to occur in August 2020. It will include new data from 2016 to mid-2019. This release will contain 2,296 files from 2016 as well as several thousand files representing the remaining data through mid-2019. In addition to files that were obtained with our standard triaging process, a part of this release consists of EEG files that do not have associated patient reports. Since actual seizure events are in short supply, we are mining a large chunk of data for which we have EEG recordings but no reports. Some of this data contains interesting seizure events collected during long-term EEG sessions or data collected from patients with a history of frequent seizures. It is being mined to increase the number of files in the corpus that have at least one seizure event. We expect v1.6.0 to be released before IEEE SPMB 2020. The TUAR Corpus is an open-source database that is currently available for use by any registered member of our consortium. To register and receive access, please follow the instructions provided at this web page: https://www.isip.piconepress.com/projects/tuh_eeg/html/downloads.shtml. The data is located here: https://www.isip.piconepress.com/projects/tuh_eeg/downloads/tuh_eeg_artifact/v2.0.0/.
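
As referenced in item 5 above, the “3 second rule” used during annotation can be expressed compactly: merge events separated by less than 3 seconds and drop events shorter than 3 seconds. The function name and the (start, stop) event representation below are assumptions for illustration, not part of the NEDC annotation tool.

    def apply_three_second_rule(events, min_gap=3.0, min_len=3.0):
        """Merge seizure events separated by less than min_gap seconds and
        drop events shorter than min_len seconds. events: (start, stop) pairs."""
        merged = []
        for start, stop in sorted(events):
            if merged and start - merged[-1][1] < min_gap:
                # Combine two nearby events into a single larger event.
                merged[-1] = (merged[-1][0], max(merged[-1][1], stop))
            else:
                merged.append((start, stop))
        return [(s, e) for s, e in merged if e - s >= min_len]

    # Two events 2 s apart become one; the 1.5 s event is dropped.
    print(apply_three_second_rule([(10.0, 20.0), (22.0, 30.0), (100.0, 101.5)]))
    # -> [(10.0, 30.0)]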