Sustainability is increasingly recognized as a critical global issue. Multi-objective optimization is an important approach for sustainable decision-making, but problems with four or more objectives are hard to interpret due to their high dimensionality. In our group's previous work, an algorithm was developed that systematically reduces objective dimensionality for (mixed-integer) linear problems. In this work, we extend the algorithm to tackle nonlinear many-objective problems. An outer-approximation-like method is employed to systematically replace nonlinear objectives and constraints. After the original nonlinear problem is converted to a linear one, the previous linear algorithm can be applied to reduce the dimensionality. The benchmark DTLZ5(I, M) problem set is used to evaluate the effectiveness of this approach. Our algorithm identifies appropriate objective groupings on benchmark problems with up to 20 objectives when its hyperparameters are chosen appropriately. We also conduct extensive testing on the hyperparameters to determine their optimal settings. Additionally, we analyze the computation time required for the different components of the algorithm, ensuring efficiency and practical applicability.
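The outer-approximation idea mentioned above can be illustrated with a minimal sketch: a convex nonlinear function is replaced by the pointwise maximum of tangent-line (first-order) cuts, giving a piecewise-linear underestimator that a linear solver can handle. The function `f`, its cut points, and all names here are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch: outer approximation of a convex nonlinear
# objective by tangent-line cuts. f(x) = x^2 is an invented example.

def f(x):           # nonlinear objective
    return x * x

def grad(x):        # its derivative, f'(x) = 2x
    return 2.0 * x

def make_cuts(points):
    # Each cut is the tangent line l(x) = f(p) + f'(p) * (x - p),
    # stored as a (slope, intercept) pair.
    return [(grad(p), f(p) - grad(p) * p) for p in points]

def outer_approx(x, cuts):
    # The pointwise max of the linear cuts underestimates the convex f.
    return max(a * x + b for a, b in cuts)

cuts = make_cuts([-2.0, 0.0, 1.0, 2.0])
for x in [-1.5, -0.3, 0.7, 1.9]:
    assert outer_approx(x, cuts) <= f(x)   # always a valid underestimate
```

Adding more cut points tightens the approximation; at a cut point itself the approximation is exact.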
Dimensionality Reduction in Optimal Process Design with Many Uncertain Sustainability Objectives
The study of sustainable design has gained prominence in response to the growing emphasis on environmental and social impacts of critical infrastructure. Addressing the different dimensions inherent in sustainability issues necessitates the application of many-objective optimization techniques. In this work, an illustrative four-objective design system is formulated, wherein uncertainties lie within two different socially-oriented objectives. A stochastic community detection approach is proposed to identify robust groupings of objectives. The findings reveal that the modularity of the optimal solution surpasses that of the average graph, thus demonstrating the efficacy of the proposed approach. Furthermore, a comprehensive exploration of the Pareto frontiers for both the robust and single-scenario best groupings is undertaken, demonstrating that using the robust grouping results in little to no information loss about tradeoffs.
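The modularity comparison described in this abstract can be sketched in a few lines: Newman modularity of an objective-similarity graph scores how well a proposed grouping matches the graph's community structure. The graph, edge weights, and groupings below are invented for illustration; the paper's stochastic community detection procedure is not reproduced here.

```python
# Hypothetical sketch: Newman modularity Q of a grouping on a weighted
# objective-similarity graph. Edges, weights, and labels are made up.

def modularity(edges, groups):
    # edges: {(i, j): weight} with undirected pairs; groups: {node: label}
    m2 = 2.0 * sum(edges.values())               # 2m, total weight doubled
    deg = {}
    for (i, j), w in edges.items():
        deg[i] = deg.get(i, 0.0) + w
        deg[j] = deg.get(j, 0.0) + w
    nodes = list(deg)
    q = 0.0
    for i in nodes:                              # sum over ordered pairs
        for j in nodes:
            if groups[i] != groups[j]:
                continue
            w = edges.get((i, j), edges.get((j, i), 0.0)) if i != j else 0.0
            q += w - deg[i] * deg[j] / m2        # A_ij minus null model
    return q / m2

# 4 objectives: {0,1} strongly correlated, {2,3} strongly correlated
edges = {(0, 1): 1.0, (2, 3): 1.0, (1, 2): 0.1}
good = modularity(edges, {0: "a", 1: "a", 2: "b", 3: "b"})
bad = modularity(edges, {0: "a", 1: "b", 2: "a", 3: "b"})
assert good > bad   # the grouping aligned with the graph scores higher
```

A robust grouping in the abstract's sense is one whose modularity stays high across the sampled uncertainty scenarios, not just on a single graph.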
- Award ID(s):
- 2237284
- PAR ID:
- 10576046
- Publisher / Repository:
- Living Archive for Process Systems Engineering
- Date Published:
- Page Range / eLocation ID:
- 920 to 926
- Format(s):
- Medium: X
- Location:
- Breckenridge, Colorado, USA
- Sponsoring Org:
- National Science Foundation
More Like this
-
Machine learning system design frequently necessitates balancing multiple objectives, such as prediction error and energy consumption, for deep neural networks (DNNs). Typically, no single design performs well across all objectives; thus, finding Pareto-optimal designs is of interest. Measuring different objectives frequently incurs different costs; for example, measuring the prediction error of a DNN is significantly more expensive than measuring the energy consumption of a pre-trained DNN because it requires re-training the DNN. Current state-of-the-art methods do not account for this difference in objective evaluation cost, potentially wasting costly evaluations of objective functions for little information gain. To address this issue, we propose a novel cost-aware decoupled approach that weights the improvement of the hypervolume of the Pareto region by the measurement cost of each objective. To evaluate our approach, we perform experiments on several machine learning systems deployed in energy-constrained environments.
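The core quantity in this abstract — hypervolume improvement per unit measurement cost — can be sketched for a 2-D minimization front. The points, reference point, and cost value are illustrative assumptions; a real method would drive this with a surrogate model rather than exact evaluations.

```python
# Hypothetical sketch: hypervolume improvement of a candidate design,
# divided by the cost of the objective measurement. All numbers invented.

def hypervolume_2d(points, ref):
    # Area dominated by the front relative to the reference point,
    # for minimization of both objectives (staircase sweep).
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(points):        # ascending in objective 1
        if f2 < prev_f2:                 # only nondominated steps add area
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
ref = (5.0, 5.0)
base = hypervolume_2d(front, ref)
candidate = (1.5, 1.5)
gain = hypervolume_2d(front + [candidate], ref) - base
cost = 3.0                               # e.g. re-training a DNN is costly
score = gain / cost                      # improvement per unit cost
```

A cost-aware acquisition would prefer the candidate/objective pair with the highest such score, so cheap measurements are favored when they yield comparable information.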
-
Machine learning system design frequently necessitates balancing multiple objectives, such as prediction error and energy consumption, for deep neural networks (DNNs). Typically, no single design performs well across all objectives; thus, finding Pareto-optimal designs is of interest. Measuring different objectives frequently incurs different costs; for example, measuring the prediction error of a DNN is significantly more expensive than measuring the energy consumption of a pre-trained DNN because it requires re-training the DNN. Current state-of-the-art methods do not account for this difference in objective evaluation cost, potentially wasting costly evaluations of objective functions for little information gain. To address this issue, we propose a novel cost-aware decoupled approach that weights the improvement of the hypervolume of the Pareto region by the measurement cost of each objective. We perform experiments on a range of DNN applications for a comprehensive evaluation of our approach.
-
Linear discriminant analysis (LDA) is widely used for dimensionality reduction in supervised learning settings. The traditional LDA objective minimizes a ratio of squared Euclidean distances, which may not perform optimally on noisy data sets. Multiple robust LDA objectives have been proposed to address this problem, but their implementations have two major limitations. One is that their mean calculations center the data using the squared l2-norm distance, which is not valid when the objective does not use the Euclidean distance. The other is that there is no generalized optimization algorithm for solving the different robust LDA objectives. In addition, most existing algorithms can only guarantee a locally optimal solution rather than a globally optimal one. In this paper, we review multiple robust loss functions and propose a new, generalized robust objective for LDA. Moreover, to better remove the mean from the data, our objective centers the data in an optimal way through learning. As one important algorithmic contribution, we derive an efficient iterative algorithm to optimize the resulting non-smooth and non-convex objective function. We theoretically prove that our algorithm guarantees that both the objective and the solution sequences converge to globally optimal solutions at a sub-linear convergence rate. The experimental results demonstrate the effectiveness of our new method, which achieves significant improvements over competing methods.
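For context, the classical squared-Euclidean LDA objective that the robust variants above improve on can be sketched for two classes in 2-D: the projection direction is w = Sw⁻¹(m₁ − m₂), with Sw the within-class scatter. This is the textbook Fisher formulation, not the paper's robust objective; the data points are invented.

```python
# Hypothetical sketch: classical two-class Fisher LDA in 2-D, the
# squared-Euclidean baseline the robust objectives address. Data invented.

def mean(pts):
    n = len(pts)
    return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]

def scatter(pts, m):
    # 2x2 within-class scatter: sum of (x - m)(x - m)^T
    s = [[0.0, 0.0], [0.0, 0.0]]
    for x, y in pts:
        dx, dy = x - m[0], y - m[1]
        s[0][0] += dx * dx; s[0][1] += dx * dy
        s[1][0] += dy * dx; s[1][1] += dy * dy
    return s

def fisher_direction(a, b):
    ma, mb = mean(a), mean(b)
    sa, sb = scatter(a, ma), scatter(b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    d = [ma[0] - mb[0], ma[1] - mb[1]]
    # w = Sw^{-1} (ma - mb), with the 2x2 inverse written out explicitly
    return [(sw[1][1] * d[0] - sw[0][1] * d[1]) / det,
            (-sw[1][0] * d[0] + sw[0][0] * d[1]) / det]

class_a = [(1.0, 2.0), (1.5, 1.8), (1.2, 2.3)]
class_b = [(4.0, 4.2), (4.5, 4.0), (3.8, 4.5)]
w = fisher_direction(class_a, class_b)
proj = lambda p: w[0] * p[0] + w[1] * p[1]
# the 1-D projections of the two classes separate cleanly on this data
```

The robust objectives in the abstract replace the squared distances in this criterion with robust losses, which is also why the squared-l2 mean (the `mean` step above) becomes the wrong centering choice.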
-
The design of alloys for use in gas turbine engine blades is a complex task that involves balancing multiple objectives and constraints. Candidate alloys must be ductile at room temperature and retain their yield strength at high temperatures, and they must possess low density, high thermal conductivity, a narrow solidification range, a high solidus temperature, and a small linear thermal expansion coefficient. Traditional Integrated Computational Materials Engineering (ICME) methods are not sufficient for exploring combinatorially vast alloy design spaces, optimizing for multiple objectives, or ensuring that multiple constraints are met. In this work, we propose an approach for solving a constrained multi-objective materials design problem over a large composition space, specifically focusing on the Mo-Nb-Ti-V-W system as a representative Multi-Principal Element Alloy (MPEA) for potential use in next-generation gas turbine blades. Our approach learns and adapts to unknown constraints in the design space, making decisions about the best course of action at each stage of the process. As a result, we identify 21 Pareto-optimal alloys that satisfy all constraints. Our proposed framework is significantly more efficient and faster than a brute-force approach.
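The final filtering step implied by this abstract — keep only designs that satisfy all constraints and are nondominated — can be sketched directly. The candidate "alloys", property values, and the constraint are invented placeholders, not the paper's actual Mo-Nb-Ti-V-W data.

```python
# Hypothetical sketch: constrained Pareto filtering (minimization of all
# objectives). Candidate designs and the constraint are made up.

def dominates(a, b):
    # a dominates b if it is no worse everywhere and strictly better somewhere
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_feasible(designs, feasible):
    ok = [d for d in designs if feasible(d)]
    return [d for d in ok if not any(dominates(o, d) for o in ok if o is not d)]

# objectives: (density, -strength), both minimized; constraint: density < 9
designs = [(8.0, -1200.0), (7.5, -1100.0), (9.5, -1400.0), (8.0, -1000.0)]
front = pareto_feasible(designs, lambda d: d[0] < 9.0)
```

Note that the high-strength design (9.5, -1400.0) is excluded by the constraint even though it would be nondominated, which mirrors how feasibility prunes the alloy space before Pareto optimality is assessed.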