

Search for: All records

Creators/Authors contains: "Behjat, Amir"


  1. Opportunistic Physics-mining Transfer Mapping Architecture (OPTMA) is a hybrid architecture that combines fast simplified physics models with neural networks to provide significantly improved generalizability and explainability compared to pure data-driven machine learning (ML) models. However, training OPTMA remains computationally inefficient due to its dependence on gradient-free solvers or on back-propagation with supervised learning over expensively pre-generated labels. This paper presents two extensions of OPTMA that are not only more efficient to train through standard back-propagation but are also readily deployable through the state-of-the-art library PyTorch. The first extension, OPTMA-Net, involves manually reprogramming the simplified physics model, expressing it in Torch-tensor-compatible form and thus naturally enabling PyTorch's built-in automatic differentiation to be used for training. Since manual reprogramming can be tedious for some physics models, a second extension, called OPTMA-Dual, is presented, in which a highly accurate internal neural net is trained a priori on the fast simplified physics model (which can be generously sampled) and then integrated with the transfer model. Both new architectures are tested on analytical test problems and on the problem of predicting the acoustic field of an unmanned aerial vehicle. The interference of the acoustic pressure waves produced by multiple monopoles forms the basis of the simplified physics for this problem. An indoor noise-monitoring setup in a motion-capture environment provided the ground-truth target data. Compared to sequential hybrid and pure ML models, OPTMA-Net and OPTMA-Dual demonstrate severalfold improvements in extrapolation while providing orders-of-magnitude faster training than the original OPTMA.
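The core OPTMA-Net idea in item 1 — rewriting the simplified physics in Torch operations so that PyTorch's autograd can back-propagate through it into the transfer network — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the monopole count, wavenumber, network sizes, and parameterization are all arbitrary assumptions.

```python
import torch
import torch.nn as nn

class MonopolePhysics(nn.Module):
    """Simplified physics: superposed acoustic pressure of N monopoles,
    written entirely in Torch ops so autograd can differentiate through it.
    (Illustrative stand-in; geometry and wavenumber are assumptions.)"""
    def __init__(self, k=1.0):
        super().__init__()
        self.k = k  # acoustic wavenumber, assumed constant

    def forward(self, amps, src_xy, obs_xy):
        # amps: (B, N) source strengths from the transfer net
        # src_xy: (B, N, 2) source positions; obs_xy: (B, 2) observer position
        r = torch.norm(src_xy - obs_xy.unsqueeze(1), dim=-1).clamp(min=1e-6)
        # each monopole contributes a complex pressure (A / r) * exp(-i k r)
        phase = -self.k * r
        real = (amps / r) * torch.cos(phase)
        imag = (amps / r) * torch.sin(phase)
        # interference: sum complex contributions, return the magnitude
        return torch.sqrt(real.sum(-1) ** 2 + imag.sum(-1) ** 2)

class OPTMANet(nn.Module):
    """Transfer net maps raw inputs to physics parameters; the Torch-native
    physics model completes the forward pass, so standard back-propagation
    trains the whole pipeline end to end."""
    def __init__(self, in_dim, n_src=4):
        super().__init__()
        self.n_src = n_src
        self.transfer = nn.Sequential(
            nn.Linear(in_dim, 64), nn.Tanh(),
            nn.Linear(64, n_src * 3),  # per source: amplitude + (x, y)
        )
        self.physics = MonopolePhysics()

    def forward(self, x, obs_xy):
        p = self.transfer(x).view(-1, self.n_src, 3)
        amps = torch.nn.functional.softplus(p[..., 0])  # keep amplitudes positive
        return self.physics(amps, p[..., 1:], obs_xy)
```

Because the physics block is ordinary Torch code, calling `loss.backward()` on any loss over the predicted pressure propagates gradients straight into the transfer layers — the point of the OPTMA-Net extension.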
  2. Topology and weight evolving artificial neural network (TWEANN) algorithms optimize the structure and weights of artificial neural networks (ANNs) simultaneously. The resulting networks are typically used as policy models for solving control and reinforcement learning (RL) type problems. This paper presents a neuroevolution algorithm that aims to address the stagnation and sluggish convergence issues typical of other neuroevolution algorithms. These issues are often caused by inadequacies in population-diversity preservation, exploration/exploitation balance, and search flexibility. The new algorithm, called Adaptive Genomic Evolution of Neural-Network Topologies (AGENT), builds on the neuroevolution of augmenting topologies (NEAT) concept. Novel mechanisms for adapting the selection and mutation operations are proposed to favorably control population diversity and the exploration/exploitation balance. The former is founded on a fundamentally new way of quantifying diversity that takes a graph-theoretic perspective of the population of genomes and inter-genomic differences. Further advancements to the NEAT paradigm come through the incorporation of variable neuronal properties and new mutation operations that uniquely allow both growth and pruning of ANN topologies during evolution. Numerical experiments with benchmark control problems adopted from the OpenAI Gym illustrate the competitive performance of AGENT against standard RL methods and adaptive HyperNEAT, and its superiority over the original NEAT algorithm. Further parametric analysis provides key insights into the impact of the new features in AGENT. This is followed by evaluation on an unmanned-aerial-vehicle collision-avoidance problem, where maneuver-planning models learned by AGENT achieve a 33% reward improvement over 15 generations.
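The graph-theoretic population-diversity measure mentioned in item 2 is not specified in the abstract; one hypothetical way to realize the idea is a mean pairwise distance over the genomes' connection-gene sets. The Jaccard distance below is an illustrative stand-in, not the paper's actual formulation.

```python
from itertools import combinations

def genome_distance(edges_a, edges_b):
    """Jaccard distance between two genomes' connection-gene sets, each
    gene given as a (src_node, dst_node) pair. (Hypothetical proxy for
    the paper's graph-theoretic diversity measure.)"""
    a, b = set(edges_a), set(edges_b)
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0

def population_diversity(population):
    """Mean pairwise genome distance across a population: 0 when all
    genomes share the same topology, approaching 1 as they diverge."""
    pairs = list(combinations(population, 2))
    if not pairs:
        return 0.0
    return sum(genome_distance(a, b) for a, b in pairs) / len(pairs)
```

A scalar like this could then drive the adaptive selection pressure — e.g. relaxing selection when diversity collapses — which is the role the abstract assigns to its diversity metric.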
  3.
    Automated inverse design methods are critical to the development of metamaterial systems that exhibit special user-demanded properties. While machine learning approaches represent an emerging paradigm in the design of metamaterial structures, the ability to retrieve inverse designs on-demand remains lacking. Such an ability can be useful in accelerating optimization-based inverse design processes. This paper develops an inverse design framework that provides this capability through the novel usage of invertible neural networks (INNs). We exploit an INN architecture that can be trained to perform forward prediction over a set of high-fidelity samples and automatically learns the reverse mapping with guaranteed invertibility. We apply this INN for modeling the frequency response of periodic and aperiodic phononic structures, with the performance demonstrated on vibration suppression of drill pipes. Training and testing samples are generated by employing a transfer matrix method. The INN models provide competitive forward and inverse prediction performance compared to typical deep neural networks (DNNs). These INN models are used to retrieve approximate inverse designs for a queried non-resonant frequency range; the inverse designs are then used to initialize a constrained gradient-based optimization process to find a more accurate inverse design that also minimizes mass. The INN-initialized optimizations are found to be generally superior in terms of the queried property and mass compared to randomly initialized and inverse DNN-initialized optimizations. Particle swarm optimization with INN-derived initial points is then found to provide even better solutions, especially for the higher-dimensional aperiodic structures.
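A common way to obtain the guaranteed invertibility described in item 3 is an affine coupling block (RealNVP-style), where the inverse is available in closed form rather than learned. The sketch below assumes that construction with arbitrary layer sizes; it is an illustration of the general INN mechanism, not the paper's specific architecture.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One invertible affine coupling block: the first half of the input
    passes through unchanged and parameterizes an affine map on the second
    half, so the inverse exists algebraically by construction.
    (Dimensions and the inner net are illustrative assumptions.)"""
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, 32), nn.ReLU(),
            nn.Linear(32, 2 * (dim - self.half)),  # scale and shift
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        y2 = x2 * torch.exp(s) + t          # invertible affine transform
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=1)
        x2 = (y2 - t) * torch.exp(-s)       # exact closed-form inverse
        return torch.cat([y1, x2], dim=1)
```

Stacking several such blocks (with the roles of the two halves alternating) yields a network that is trained only in the forward, design-to-response direction yet can be evaluated in reverse to propose approximate inverse designs — the role the INN plays in the abstract's optimization-initialization pipeline.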
  4.