
Title: Finding Better Local Optima in Topology Optimization via Tunneling

Topology optimization problems are typically non-convex, and as such, multiple local minima exist. Depending on the initial design, the type of optimization algorithm, and the optimization parameters, gradient-based optimizers converge to one of those minima. Unfortunately, these minima can be highly suboptimal, particularly when the structural response is highly non-linear or when multiple constraints are present. This issue is more pronounced in the topology optimization of geometric primitives, because the design representation is more compact and restricted than in free-form topology optimization. In this paper, we investigate the use of tunneling in topology optimization to move from a poor local minimum to a better one. The tunneling method used in this work is a deterministic, gradient-based method that sequentially finds minima, each better than the previous one. We demonstrate this approach via numerical examples and show that coupling the tunneling method with topology optimization leads to better designs.
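The paper's exact formulation is not reproduced above, but a common deterministic, gradient-based tunneling formulation (due to Levy and Montalvo) defines, after converging to a local minimum x* with value f*, a tunneling function T(x) = (f(x) - f*) / ||x - x*||^(2λ), whose pole at x* repels the search from the known minimum; any point with T(x) ≤ 0 is at least as good as x* and seeds the next local minimization. Below is a minimal sketch in Python, assuming a generic smooth objective `f`; the pole strength `lam`, the restart count, and the perturbation scale are hypothetical tuning choices, and SciPy's general-purpose minimizers stand in for the structural solvers used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def tunnel(f, x_star, f_star, lam=1.0, n_restarts=20, seed=0):
    """Search for a point x with f(x) <= f_star by minimizing the tunneling
    function T(x) = (f(x) - f_star) / ||x - x_star||**(2*lam), which places
    a pole at the known minimizer x_star."""
    rng = np.random.default_rng(seed)

    def T(x):
        d = np.linalg.norm(x - x_star)
        if d < 1e-12:                       # avoid evaluating at the pole itself
            return np.inf
        return (f(x) - f_star) / d**(2 * lam)

    for _ in range(n_restarts):
        x0 = x_star + rng.normal(scale=0.5, size=x_star.shape)  # perturbed start
        res = minimize(T, x0, method="Nelder-Mead")
        if f(res.x) <= f_star:              # tunneled into an equal-or-better basin
            return res.x
    return None                             # no better basin found

def tunneling_optimize(f, x0, n_tunnels=5):
    """Alternate local minimization and tunneling, keeping the best minimum."""
    res = minimize(f, x0)                   # phase 1: local minimization
    x_star, f_star = res.x, res.fun
    for _ in range(n_tunnels):
        x_new = tunnel(f, x_star, f_star)   # phase 2: tunneling
        if x_new is None:
            break
        res = minimize(f, x_new)            # re-minimize from the tunneled point
        x_star, f_star = res.x, res.fun
    return x_star, f_star
```

For example, on a multimodal objective such as `f = lambda x: np.sin(3 * x[0]) + 0.1 * x[0] ** 2`, a run started in a shallow basin steps through successively lower minima instead of terminating at the first one, which is the behavior the paper exploits for topology optimization.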

Award ID(s): 1751211
NSF-PAR ID: 10197571
Author(s) / Creator(s): ;
Date Published:
Journal Name: ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference
Volume: 2B
Issue: DETC2018-86116
Page Range / eLocation ID: V02BT03A014
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    In recent years, machine learning (ML) techniques have emerged as promising tools for discovering and designing novel materials. However, the lack of robust inverse design approaches that can identify promising candidate materials without exploring the entire design space remains a fundamental bottleneck. A general-purpose inverse design approach is presented using generative inverse design networks. This ML-based approach uses backpropagation to calculate the analytical gradients of an objective function with respect to the design variables. It overcomes local minima traps by exploiting the rapid gradient calculations that backpropagation provides to run millions of optimizations from different initial values (a minimal sketch of this idea appears after this item). Furthermore, an active learning strategy is adopted to improve the performance of candidate materials and reduce the amount of training data needed to do so. Compared to passive learning, the active learning strategy generates better designs and reduces the amount of training data required by at least an order of magnitude in a case study on composite materials. The inverse design approach is compared with conventional gradient-based topology optimization and gradient-free genetic algorithms, and the pros and cons of each method are discussed as applied to materials discovery and design problems.

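    The generative inverse design networks themselves are not reproduced here; the following is a minimal sketch of the core mechanism only, under assumptions: `surrogate` is a hypothetical, already-trained differentiable PyTorch model mapping design variables to a predicted property, designs live in a unit box, and the objective is maximized by backpropagating through the surrogate from many random starts.

```python
import torch

def inverse_design(surrogate, n_starts=1000, n_dims=16, steps=200, lr=0.05):
    """Gradient-based inverse design through a differentiable surrogate:
    maximize the predicted property by backpropagating to the design
    variables, from many random initializations to escape poor local minima."""
    x = torch.rand(n_starts, n_dims, requires_grad=True)  # batch of candidate designs
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -surrogate(x).sum()       # ascend the predicted objective
        loss.backward()                  # analytical gradients via backprop
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)           # project designs back into the feasible box
    with torch.no_grad():
        scores = surrogate(x).squeeze()
    best = scores.argmax()
    return x[best].detach(), scores[best].item()
```

    The batch dimension plays the role of the many restarts described in the abstract: every row is an independent optimization from a different initial value, so poor local minima are escaped statistically rather than individually.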
  2. We consider distributed optimization under communication constraints for training deep learning models. We propose a new algorithm whose parameter updates rely on two forces: a regular gradient step, and a corrective direction dictated by the currently best-performing worker, the leader (a sketch of this update rule follows below). Our method differs from the parameter-averaging scheme EASGD in a number of ways: (i) our objective formulation does not change the location of stationary points compared to the original optimization problem; (ii) we avoid convergence decelerations caused by pulling local workers descending to different local minima toward each other (i.e., toward the average of their parameters); (iii) our update by design breaks the curse of symmetry (the phenomenon of being trapped in poorly generalizing sub-optimal solutions in symmetric non-convex landscapes); and (iv) our approach is more communication efficient, since it broadcasts only the parameters of the leader rather than those of all workers. We provide theoretical analysis of the batch version of the proposed algorithm, which we call Leader Gradient Descent (LGD), and of its stochastic variant (LSGD). Finally, we implement an asynchronous version of our algorithm and extend it to the multi-leader setting, where we form groups of workers, each represented by its own local leader (the best performer in the group), and update each worker with a corrective direction comprised of two attractive forces: one toward the local leader and one toward the global leader (the best performer among all workers). The multi-leader setting is well aligned with current hardware architecture, where local workers forming a group lie within a single computational node and …
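    A minimal sketch of the synchronous update described above, in assumed notation: each worker takes a regular gradient step plus an attractive pull toward the current leader's parameters; the learning rate `lr` and pull strength `lam` are hypothetical hyperparameters.

```python
import numpy as np

def lsgd_step(workers, grads, losses, lr=0.01, lam=0.1):
    """One synchronous (L)SGD round: every worker takes a gradient step plus
    a corrective pull toward the best-performing worker (the leader).
    workers: list of parameter vectors; grads: matching (stochastic) gradients;
    losses: current objective value of each worker."""
    leader = workers[int(np.argmin(losses))].copy()       # best performer
    for i, (x, g) in enumerate(zip(workers, grads)):
        workers[i] = x - lr * g + lam * (leader - x)      # gradient step + pull
    return workers
```

    The multi-leader variant adds a second attractive term of the same form, pulling toward the global leader in addition to the group's local leader.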
  3. Summary

    We present an original method for multimaterial topology optimization with elastic and thermal response considerations. The material distribution is represented parametrically using a formulation in which finite element-style shape functions determine the local material properties within each finite element. We optimize a multifunctional structure that is designed for a combination of structural stiffness and thermal insulation. We conduct parallel uncoupled finite element analyses to simulate the elastic and thermal response of the structure by solving the two-dimensional Poisson problem. We explore multiple optimization problem formulations, including structural design for minimum compliance subject to local temperature constraints, so that the optimized design serves as both a support structure and a thermal insulator. We also derive and implement an original multimaterial aggregation function that allows the designer to simultaneously enforce separate maximum-temperature thresholds based on the melting points of the various design materials (a sketch of the aggregation idea follows this item). The nonlinear programming problem is solved using gradient-based optimization with adjoint sensitivity analysis. We present results for a series of two-dimensional example problems. The results demonstrate that the proposed algorithm consistently converges to feasible multimaterial designs with the desired elastic and thermal performance.

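    The paper's original aggregation function is not reproduced here; the sketch below uses a generic p-norm smooth maximum under assumed, hypothetical inputs (element temperatures, per-element material volume fractions, per-material temperature limits), combined so that a single differentiable value g <= 1 approximately enforces each material's own threshold.

```python
import numpy as np

def aggregated_temperature_constraint(temps, fracs, limits, p=8.0):
    """Smooth aggregate of per-material temperature constraints.
    temps:  (n_elem,) element temperatures
    fracs:  (n_elem, n_mat) material volume fractions per element
    limits: (n_mat,) maximum allowable temperature per material (melting points)
    Returns one differentiable value; g <= 1 approximately enforces that each
    element stays below the limit of whichever material occupies it."""
    local_limit = fracs @ limits        # interpolated per-element threshold
    ratios = temps / local_limit        # normalized temperatures
    # p-norm smooth maximum: approaches max(ratios) as p grows, but stays
    # differentiable, which adjoint sensitivity analysis requires
    return (np.mean(ratios**p))**(1.0 / p)
```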
  4. One of the most common types of models that helps us understand neuron behavior is based on the Hodgkin-Huxley ion channel formulation (HH model). A major challenge in inferring parameters of HH models is non-uniqueness: many different sets of ion channel parameter values produce similar outputs for the same input stimulus. Such phenomena result in an objective function that exhibits multiple modes (i.e., multiple local minima). This non-uniqueness of local optimality poses challenges for parameter estimation with many algorithmic optimization techniques. HH models additionally have severe non-linearities, further complicating algorithmic parameter inference. To address these challenges with a tractable method in high-dimensional parameter spaces, we propose using a particular Markov chain Monte Carlo (MCMC) algorithm, which has the advantage of inferring parameters in a Bayesian framework well suited to multimodal solutions of inverse problems (a minimal sampler sketch follows this item). We introduce and demonstrate the method using a three-channel HH model. We then focus on the inference of nine parameters in an eight-channel HH model, which we analyze in detail. We explore how the MCMC algorithm can uncover complex relationships between inferred parameters using five injected current levels. The MCMC method yields a nine-dimensional posterior distribution, which we analyze visually with solution maps, or landscapes, of the possible parameter sets. The visualized solution maps reveal complex structures in the multimodal posteriors, allow the selection of locally and globally optimal parameter sets, and visually expose parameter sensitivities and regions of higher model robustness. We envision these solution maps, when the MCMC algorithm is applied to experimental data, enabling experimentalists to improve the design of future experiments, increase scientific productivity, and refine model structure and ideation.
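    The specific MCMC algorithm used in the paper is not identified here; the following is a minimal random-walk Metropolis sketch of Bayesian sampling from a possibly multimodal posterior, assuming a user-supplied `log_posterior` (log prior plus log likelihood of the recorded voltage traces) over the HH parameters.

```python
import numpy as np

def metropolis(log_posterior, theta0, n_samples=50000, step=0.05, seed=0):
    """Random-walk Metropolis sampler. Returns posterior samples, which may
    be multimodal: the chain can visit several distinct parameter sets that
    reproduce the data almost equally well."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    samples = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        prop = theta + step * rng.normal(size=theta.size)   # Gaussian proposal
        logp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < logp_prop - logp:        # accept/reject
            theta, logp = prop, logp_prop
        samples[i] = theta
    return samples
```

    Histograms or pairwise scatter plots of the returned samples are one way to build the kind of solution maps the abstract describes.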
  5. Abstract

    Topology optimization, which optimally distributes materials in a given domain, requires non-gradient optimizers to solve its most complicated problems. However, with hundreds of design variables or more involved, solving such problems would require millions of Finite Element Method (FEM) calculations, whose computational cost is prohibitive. Here we report Self-directed Online Learning Optimization (SOLO), which integrates a Deep Neural Network (DNN) with FEM calculations. A DNN learns and substitutes for the objective as a function of the design variables. A small set of training data is generated dynamically based on the DNN's prediction of the optimum. The DNN adapts to the new training data and gives better predictions in the region of interest until convergence (a minimal sketch of this loop follows below). The optimum predicted by the DNN is proved to converge to the true global optimum through iterations. Our algorithm was tested on four types of problems: compliance minimization, fluid-structure optimization, heat transfer enhancement, and truss optimization. It reduced the computational time by two to five orders of magnitude compared with directly using heuristic methods, and it outperformed all state-of-the-art algorithms tested in our experiments. This approach enables the solution of large multi-dimensional optimization problems.

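    A minimal sketch of the self-directed online learning loop described above, under assumptions: `fem_objective` is a hypothetical stand-in for the expensive FEM evaluation, and a small scikit-learn network stands in for the DNN surrogate; sampling near the surrogate's predicted optimum is the "self-directed" data-generation step.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

def solo(fem_objective, n_dims, n_iters=20, batch=16, seed=0):
    """Self-directed online learning optimization (sketch): a surrogate learns
    the objective, new FEM samples are drawn near the surrogate's current
    optimum, and the surrogate is refit until the prediction converges."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(batch, n_dims))            # initial random designs
    y = np.array([fem_objective(x) for x in X])      # expensive FEM evaluations
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
    for _ in range(n_iters):
        model.fit(X, y)
        # Optimize the cheap surrogate instead of the FEM model
        res = minimize(lambda x: model.predict(x.reshape(1, -1))[0],
                       X[np.argmin(y)], bounds=[(0, 1)] * n_dims)
        # Self-directed sampling: new training points near the predicted optimum
        X_new = np.clip(res.x + 0.1 * rng.normal(size=(batch, n_dims)), 0, 1)
        y_new = np.array([fem_objective(x) for x in X_new])
        X, y = np.vstack([X, X_new]), np.concatenate([y, y_new])
    best = np.argmin(y)
    return X[best], y[best]
```

    Because each outer iteration spends only a batch of FEM calls where the surrogate expects the optimum to be, the total number of expensive evaluations stays far below that of direct heuristic search, which is the source of the speedups the abstract reports.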