- Award ID(s):
- 1661412
- Publication Date:
- NSF-PAR ID:
- 10186120
- Journal Name:
- Science Advances
- Volume:
- 6
- Issue:
- 15
- Page Range or eLocation-ID:
- eaaz1708
- ISSN:
- 2375-2548
- Sponsoring Org:
- National Science Foundation
More Like this
-
LEGOs are one of the most popular toys and are known to be useful as instructional tools in STEM education. In this work, we used LEGO structures to demonstrate the energetic size effect on structural strength. The flexural strength of many materials decreases with increasing structural size. We sought to demonstrate this effect in LEGO beams. Fracture experiments were performed using 3-point bend beams built of 2 × 4 LEGO blocks in a periodic staggered arrangement. LEGO wheels were used as rollers at either end of the specimens, which were weight-compensated by adding counterweights. [1] Specimens were loaded by hanging weights at their midspan, and the maximum sustained load was recorded. Specimens with a built-in defect (crack) of half the specimen height were considered. Beam height was varied from two to 32 LEGO blocks while keeping the in-plane aspect ratio constant. The specimen thickness was kept constant at one LEGO block. Slow-motion videos and sound recordings of fractures were captured to determine how the fracture originated and propagated through the specimen. Flexural stress was calculated based on nominal specimen dimensions, and fracture toughness was calculated following the ASTM E-399 standard expressions from Srawley (1976). [2] The results demonstrate that the LEGO beams indeed …
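The flexural stress and fracture toughness calculations described above follow standard closed-form expressions. A minimal sketch in Python, using the ASTM E-399 / Srawley (1976) geometry factor for a single-edge notched bend (SENB) specimen loaded in 3-point bending; the dimensions in the comments are illustrative, not the ones used in the study:

```python
import math

def senb_fracture_toughness(P, S, B, W, a):
    """Stress intensity K for an SENB specimen in 3-point bending.

    P: applied load, S: support span, B: thickness, W: beam height,
    a: crack length. Uses Srawley's (1976) wide-range geometry factor
    for span-to-height ratio S/W = 4, as adopted by ASTM E-399.
    """
    x = a / W  # relative crack depth (e.g. 0.5 for a half-height crack)
    f = (3.0 * math.sqrt(x)
         * (1.99 - x * (1.0 - x) * (2.15 - 3.93 * x + 2.7 * x ** 2))
         / (2.0 * (1.0 + 2.0 * x) * (1.0 - x) ** 1.5))
    return P * S / (B * W ** 1.5) * f

def flexural_stress(P, S, B, W):
    """Nominal 3-point bend flexural stress: sigma = 3*P*S / (2*B*W^2)."""
    return 3.0 * P * S / (2.0 * B * W ** 2)

# Illustrative SI values: 1 kN load, 0.2 m span, 10 mm thick, 50 mm tall,
# crack depth a/W = 0.5 (the half-height defect used in the experiments).
K = senb_fracture_toughness(1000.0, 0.2, 0.01, 0.05, 0.025)   # Pa*sqrt(m)
sigma = flexural_stress(1000.0, 0.2, 0.01, 0.05)              # Pa
```

For a/W = 0.5 the geometry factor evaluates to roughly 2.66, consistent with tabulated values for S/W = 4 bend specimens.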
-
Abstract Uncertainty quantification (UQ) in metal additive manufacturing (AM) has attracted tremendous interest as a route to dramatically improved product reliability. Model-based UQ, which relies on the validity of a computational model, has been widely explored as a potential substitute for UQ based solely on time-consuming and expensive experiments. However, its adoption in practical AM processes requires overcoming two main challenges: (1) inaccurate knowledge of the uncertainty sources and (2) the intrinsic uncertainty associated with the computational model. Here, we propose a data-driven framework that tackles these two challenges by combining high-throughput physical/surrogate model simulations with the AM-Bench experimental data from the National Institute of Standards and Technology (NIST). We first construct a surrogate model, based on high-throughput physical simulations, for predicting the three-dimensional (3D) melt pool geometry and its uncertainty with respect to AM parameters and uncertainty sources. We then employ a sequential Bayesian calibration method to perform experimental parameter calibration and model correction, significantly improving the validity of the 3D melt pool surrogate model. Application of the calibrated melt pool model to UQ of the porosity level, an important quality factor of AM parts, demonstrates its potential use in AM quality control. The …
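The two-step structure described above (surrogate from simulations, then Bayesian correction against experiments) can be sketched with entirely synthetic numbers. This is not the paper's model: the quadratic fit stands in for the melt-pool surrogate, the three "experimental" points stand in for AM-Bench data, and the single additive discrepancy term `delta` is a deliberately simplified form of model correction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: surrogate from "high-throughput simulations" (synthetic here).
# Pretend melt-pool depth grows roughly linearly with laser power, and fit
# a quadratic surrogate to noisy simulation outputs.
power = np.linspace(100.0, 400.0, 30)                             # W, illustrative
sim_depth = 0.2 * power + 5.0 + rng.normal(0.0, 2.0, power.size)  # micrometres
surrogate = np.poly1d(np.polyfit(power, sim_depth, deg=2))

# Step 2: Bayesian correction of an additive discrepancy delta against a few
# "experimental" measurements, via a grid posterior p(delta|y) ~ p(y|delta)p(delta).
exp_power = np.array([150.0, 250.0, 350.0])
exp_depth = surrogate(exp_power) + 8.0 + rng.normal(0.0, 1.0, 3)  # true bias = 8
delta_grid = np.linspace(-20.0, 20.0, 401)
log_prior = -0.5 * (delta_grid / 10.0) ** 2                       # N(0, 10^2) prior
resid = exp_depth[:, None] - (surrogate(exp_power)[:, None] + delta_grid[None, :])
log_like = -0.5 * np.sum(resid ** 2, axis=0)                      # noise sd = 1
log_post = log_prior + log_like
post = np.exp(log_post - log_post.max())
post /= post.sum()
delta_map = delta_grid[np.argmax(post)]   # MAP estimate of the model bias
```

With the correction `delta_map` folded in, the surrogate's predictions at the experimental conditions line up with the measurements, which is the sense in which calibration "improves the validity" of the model.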
-
Obeid, I. ; Selesnik, I. ; Picone, J. (Ed.) The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors, ranging from low-end consumer-grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems, and routinely employ multiple GPUs to accelerate the training process. Reproducible results are essential to machine learning research. Reproducibility in this context means the ability to replicate an existing experiment: performance metrics such as error rates should be identical, and floating-point calculations should match closely. Three examples of ways we typically expect an experiment to be replicable are: (1) the same job run on the same processor should produce the same results each time it is run; (2) a job run on a CPU and GPU should produce …
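Replicability case (1) above, same job on the same processor, reduces in practice to pinning every source of randomness. A minimal NumPy illustration of the idea (a real TensorFlow run would additionally need its own seeding, e.g. `tf.random.set_seed`, plus deterministic-op settings; `train_step` here is a toy stand-in, not the authors' pipeline):

```python
import numpy as np

def train_step(seed):
    """Toy stand-in for one training run: draw 'weights', compute a metric.

    Because the only source of randomness is the explicitly seeded
    generator, the run is bit-for-bit repeatable on the same machine.
    """
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(4, 4))
    return float(np.sum(weights ** 2))

# Same seed, same processor -> identical results on every run.
assert train_step(42) == train_step(42)
# Different seeds generally differ, which is why the seed used for an
# experiment must be recorded alongside its other hyperparameters.
```

Cross-device replicability (CPU vs GPU) is weaker: floating-point reductions are reassociated differently across backends, so there one expects close, not identical, results.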
-
Biocompatible molecules with electronic functionality provide a promising substrate for biocompatible electronic devices and electronic interfacing with biological systems. Synthetic oligopeptides composed of an aromatic π-core flanked by oligopeptide wings are a class of molecules that can self-assemble in aqueous environments into supramolecular nanoaggregates with emergent optical and electronic activity. We present an integrated computational–experimental pipeline employing all-atom molecular dynamics simulations and experimental UV-visible spectroscopy within an active learning workflow using deep representational learning and multi-objective and multi-fidelity Bayesian optimization to design π-conjugated peptides programmed to self-assemble into elongated pseudo-1D nanoaggregates with a high degree of H-type co-facial stacking of the π-cores. We consider as our design space the 694 982 unique π-conjugated peptides comprising a quaterthiophene π-core flanked by symmetric oligopeptide wings up to five amino acids in length. After sampling only 1181 molecules (∼0.17% of the design space) by computation and 28 (∼0.004%) by experiment, we identify and experimentally validate a diversity of previously unknown high-performing molecules and extract interpretable design rules linking peptide sequence to emergent supramolecular structure and properties.
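The active-learning loop described above, where an expensive evaluator is queried only at candidates an acquisition function selects, can be sketched on a toy one-dimensional design space. Everything here is a stand-in: the scalar `objective` replaces the MD-simulation score, the kernel-regression surrogate replaces the deep representational model, and a simple upper-confidence-bound rule replaces the paper's multi-objective, multi-fidelity Bayesian optimization:

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    """Expensive 'simulation' scoring a candidate design (toy function)."""
    return np.sin(3.0 * x) + 0.5 * x

candidates = np.linspace(0.0, 2.0, 200)   # toy stand-in for the design space
X = list(rng.choice(candidates, size=3))  # small random seed set
y = [objective(x) for x in X]

def surrogate(xq, X, y, h=0.15):
    """Nadaraya-Watson kernel regression: cheap mean + crude uncertainty."""
    Xa, ya = np.asarray(X), np.asarray(y)
    w = np.exp(-0.5 * ((xq[:, None] - Xa[None, :]) / h) ** 2)
    mean = (w @ ya) / np.maximum(w.sum(axis=1), 1e-12)
    unc = 1.0 / (1.0 + w.sum(axis=1))      # far from data -> high uncertainty
    return mean, unc

for _ in range(12):                        # active-learning loop
    mean, unc = surrogate(candidates, X, y)
    acq = mean + 2.0 * unc                 # UCB-style acquisition
    x_next = candidates[int(np.argmax(acq))]
    X.append(x_next)
    y.append(objective(x_next))

best = max(y)   # best design found with only 15 expensive evaluations
```

The point of the pattern, as in the paper, is sample efficiency: the surrogate is queried over the whole candidate set cheaply, while the expensive evaluator is called only a handful of times.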
-
We consider the problem of inferring the conditional independence graph (CIG) of a high-dimensional stationary multivariate Gaussian time series. A sparse-group-lasso-based frequency-domain formulation of the problem has been considered in the literature, where the objective is to estimate the sparse inverse power spectral density (PSD) of the data by optimizing a sparse-group lasso penalized log-likelihood cost function formulated in the frequency domain. The CIG is then inferred from the estimated inverse PSD. Optimization in the previous approach was performed using an alternating minimization (AM) approach whose performance depends upon the choice of a penalty parameter. In this paper we investigate an alternating direction method of multipliers (ADMM) approach for the optimization, in order to mitigate the dependence on the penalty parameter. We also investigate selection of the tuning parameters based on the Bayesian information criterion, and illustrate our approach using synthetic and real data. Comparisons with the "usual" i.i.d. modeling of the time series for graph estimation are also provided.
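The ADMM idea motivating the paper can be shown on a much simpler relative of its objective. Below is the classic ADMM splitting for a plain lasso problem, not the paper's sparse-group frequency-domain cost, but the same skeleton: a smooth-fit update, a proximal (soft-threshold) update, and a dual update. The ADMM penalty `rho` affects only convergence speed, not the fixed point, which is the robustness that the AM approach's penalty parameter lacks:

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k*||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1 via the split x = z."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        # x-update: ridge-type solve using the cached Cholesky factor
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: proximal step enforcing sparsity
        z = soft_threshold(x + u, lam / rho)
        # dual update on the scaled multiplier
        u = u + x - z
    return z

# Synthetic sparse recovery: only the first two coefficients are nonzero.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
x_true = np.zeros(10)
x_true[:2] = [3.0, -2.0]
b = A @ x_true + 0.01 * rng.normal(size=50)
x_hat = lasso_admm(A, b, lam=0.5)
```

The sparse-group extension replaces the elementwise soft threshold with a blockwise one over groups of coefficients (here, PSD entries across frequencies), leaving the rest of the loop unchanged.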