Evolution of damage in grade 2 and grade 4 titanium sheets during cyclic bending under tension and simple tension
- PAR ID: 10627299
- Publisher / Repository: Elsevier
- Date Published:
- Journal Name: Materials Characterization
- Volume: 219
- Issue: C
- ISSN: 1044-5803
- Page Range / eLocation ID: 114624
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Abstract: Deep learning requires solving a large nonconvex optimization problem to learn a deep neural network (DNN). The current deep learning model is single-grade: it trains a DNN end-to-end by solving a single nonconvex optimization problem. When the number of layers is large, it is computationally challenging to carry out such a task efficiently, because all weight matrices and bias vectors must be learned from one large nonconvex optimization problem. Inspired by the human education process, which arranges learning in grades, we propose a multi-grade learning model: instead of solving one large optimization problem, we successively solve a number of small optimization problems, organized in grades, to learn a shallow neural network (a network with a few hidden layers) for each grade. Specifically, the current grade learns the leftover from the previous grade. In each grade, we learn a shallow neural network stacked on top of the network learned in the previous grades, whose parameters remain unchanged during training of the current and future grades. By dividing the task of learning a DNN into learning several shallow neural networks, one can alleviate the severity of the nonconvexity of the original large optimization problem. When all grades of learning are completed, the final network learned is a stair-shaped neural network, which is the superposition of the networks learned in all grades. Such a model enables us to learn a DNN much more effectively and efficiently. Moreover, multi-grade learning naturally leads to adaptive learning. We prove, in the context of function approximation, that if the neural network generated by a new grade is nontrivial, the optimal error of the new grade is strictly smaller than the optimal error of the previous grade. Furthermore, we provide numerical examples confirming that the proposed multi-grade model significantly outperforms the standard single-grade model and is much more robust to noise. These examples include three proof-of-concept problems; classification on the benchmark data sets MNIST and Fashion-MNIST with two noise rates, which amounts to finding classifiers that are functions of 784 variables; and numerical solutions of the one-dimensional Helmholtz equation. (A simplified illustrative sketch of the grade-by-grade construction appears after this list.)
- Abstract: One of the most important problems vexing the ΛCDM cosmological model is the Hubble tension. It arises from the fact that measurements of the present value of the Hubble parameter performed with low-redshift quantities, e.g. Type Ia supernovae, tend to yield larger values than measurements based on quantities originating at high redshift, e.g. fits to the cosmic microwave background radiation. It is becoming likely that the discrepancy, currently standing at 5σ, is not due to systematic errors in the measurements. Here we explore whether the self-interaction of gravitational fields in General Relativity, which is traditionally neglected when studying the evolution of the Universe, can contribute to explaining the tension. We find that with field self-interaction accounted for, both low- and high-redshift data are simultaneously well fitted, thereby showing that gravitational self-interaction yields consistent H0 values when inferred from SnIa and cosmic microwave background observations. Crucially, this is achieved without introducing additional parameters.
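The grade-by-grade construction described in the first related abstract can be illustrated with a short script. The sketch below is not the authors' implementation: it is a minimal PyTorch illustration in which the number of grades, layer widths, optimizer settings, the toy 1-D regression target, and the helper names (`train_grade`, `multi_grade_fit`) are all assumptions made here for demonstration. Each grade fits a new shallow network to the leftover of the previous grades, stacked on top of their frozen layers, and the final model is the superposition of the grade outputs; the exact stacking in the paper may differ in detail.

```python
# Minimal sketch of multi-grade learning for function approximation
# (assumed setup: PyTorch, a toy regression target, illustrative widths/epochs).
import torch
import torch.nn as nn

def train_grade(params, forward, x, target, epochs=500, lr=1e-3):
    # Solve the small optimization problem of the current grade: fit the new
    # shallow layers to the leftover (residual) of the previous grades.
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(forward(x), target)
        loss.backward()
        opt.step()

def multi_grade_fit(x, y, grades=3, width=32):
    feature = lambda z: z               # frozen feature map built by earlier grades
    prediction = torch.zeros_like(y)    # running superposition of grade outputs
    grade_nets, in_dim = [], x.shape[1]
    for _ in range(grades):
        hidden = nn.Sequential(nn.Linear(in_dim, width), nn.ReLU())
        head = nn.Linear(width, y.shape[1])
        # This grade's shallow network is stacked on top of the frozen earlier grades.
        fwd = lambda z, h=hidden, o=head, f=feature: o(h(f(z)))
        residual = y - prediction       # the leftover this grade must learn
        train_grade(list(hidden.parameters()) + list(head.parameters()),
                    fwd, x, residual)
        for p in list(hidden.parameters()) + list(head.parameters()):
            p.requires_grad_(False)     # parameters stay fixed in later grades
        grade_nets.append(fwd)
        with torch.no_grad():
            prediction = prediction + fwd(x)
        feature = lambda z, h=hidden, f=feature: h(f(z))  # extend the "stair"
        in_dim = width
    # Final model: superposition (sum) of all stair-shaped grade networks.
    return lambda z: sum(net(z) for net in grade_nets)

# Toy usage: approximate a 1-D function.
x = torch.linspace(-1.0, 1.0, 200).unsqueeze(1)
y = torch.sin(3.0 * x)
model = multi_grade_fit(x, y)
```

In this reading, each grade only optimizes the parameters of its own shallow layers against the current residual, which is the sense in which the large nonconvex problem is replaced by a sequence of small ones; the earlier grades remain frozen, so the returned predictor is simply the sum of the grade outputs.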