Evaluating Heuristics in Engineering Design: A Reinforcement Learning Approach

Abstract: Heuristics are essential for addressing the complexities of engineering design processes. The goodness of heuristics is context-dependent: appropriately tailored heuristics can enable designers to find good solutions efficiently, while inappropriate heuristics can result in cognitive biases and inferior design outcomes. Although there have been several efforts to understand which heuristics designers use, there is a lack of normative understanding of when different heuristics are suitable. To address this gap, this paper presents a reinforcement learning-based approach to evaluate the goodness of heuristics for three sub-problems commonly faced by designers carrying out design under resource constraints: (1) learning the mapping between the design space and the performance space, (2) acquiring information sequentially, and (3) deciding when to stop the information acquisition process. Using a multi-armed bandit formulation and simulation studies, we learn the heuristics suitable for these individual sub-problems under different resource constraints and problem complexities. Additionally, we learn the optimal heuristics for the combined problem (i.e., the one composing all three sub-problems) and compare them to those learned at the sub-problem level. The results of our simulation study indicate that the proposed reinforcement learning-based approach can be effective for determining the quality of heuristics for different problems, and for showing how their effectiveness changes as a function of the designer's preference (e.g., performance versus cost), the complexity of the problem, and the resources available.
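The core idea of the abstract's multi-armed bandit formulation — treating each candidate heuristic as an arm and learning which one yields the best net payoff — can be sketched as follows. This is an illustrative toy, not the paper's implementation: the heuristic names, the reward model (performance minus acquisition cost, observed with noise), and the epsilon-greedy learner are all assumptions for demonstration.

```python
import random

def run_bandit(heuristics, reward_fn, n_rounds=1000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: estimate the value of each heuristic (arm)."""
    rng = random.Random(seed)
    counts = {h: 0 for h in heuristics}
    values = {h: 0.0 for h in heuristics}
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            h = rng.choice(heuristics)            # explore a random heuristic
        else:
            h = max(heuristics, key=values.get)   # exploit the current best
        r = reward_fn(h, rng)
        counts[h] += 1
        values[h] += (r - values[h]) / counts[h]  # incremental mean update
    return values

# Hypothetical reward model: net payoff (performance minus information-
# acquisition cost) of applying each heuristic, observed with Gaussian noise.
true_means = {"greedy-sampling": 0.4, "space-filling": 0.6, "stop-early": 0.3}
reward = lambda h, rng: rng.gauss(true_means[h], 0.1)

est = run_bandit(list(true_means), reward)
best = max(est, key=est.get)
```

Changing the reward model's performance-versus-cost weighting, or the number of rounds, changes which arm wins — mirroring the paper's point that heuristic goodness depends on designer preference and available resources.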
- Award ID(s): 1728165
- PAR ID: 10282918
- Journal Name: ASME IDETC
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract: Systems design involves decomposing a system into interconnected subsystems and allocating resources to the teams responsible for designing each subsystem. The outcomes of the process depend on how well limited resources are allocated across teams and on the strategy each team uses to design its subsystem. This article presents an approach based on hierarchical reinforcement learning (RL) to generate heuristics for solving complex design problems under resource constraints. The approach consists of formulating systems design problems as hierarchical multi-armed bandit (MAB) problems, where decisions are made at both the system level (allocating budget across subsystems) and the subsystem level (selecting heuristics for sequential information acquisition). The approach is demonstrated using an illustrative example of race car optimization in The Open Racing Car Simulator (TORCS) environment. The results indicate that the RL agent can learn to allocate resources strategically, prioritize the subsystems with the greatest influence on overall performance, and identify effective information acquisition heuristics for each subsystem. For example, the RL agent learned to allocate a larger portion of the budget to the gearbox subsystem, which has a higher-dimensional design space than the other subsystems. The results also indicate that the extracted heuristics converge to high-performing car configurations more efficiently than Bayesian optimization.
-
Abstract: Engineering design relies heavily on heuristics, yet there is a lack of systematic methods for identifying and validating design heuristics. This paper introduces a computational approach to representing engineering design problems that involve decomposition and assignment decisions, facilitating systematic extraction of generalizable heuristics. We model design processes using a Markov Decision Process (MDP) framework, characterizing problems through attributes of the problem space, solver capabilities, and trade-offs embedded within preference functions. Reinforcement learning methods are employed to learn optimal policies, from which we extract inclusionary and exclusionary heuristics using Gaussian Mixture Models. The effectiveness of the approach is demonstrated through two case studies: solver-aware system architecting (SASA) for a robotic arm design and sequential information acquisition in parametric design optimization. The results highlight the context-dependent nature of learned heuristics, demonstrating how problem complexity, designer preferences, and solver characteristics influence their selection.
-
Abstract: Design heuristics are traditionally used as qualitative principles to guide the design process, but they have also been used to improve the efficiency of design optimization. Using design heuristics as soft constraints or search operators has been shown, for some problems, to reduce the number of function evaluations needed to achieve a certain level of convergence. In other cases, however, enforcing heuristics can reduce diversity and slow down convergence. This paper studies when and how a given set of design heuristics, represented in different forms (soft constraints, repair operators, and biased sampling), can be utilized in an automated way to improve efficiency for a given design problem. An approach is presented for identifying promising heuristics for a given problem by estimating the overall impact of each heuristic from an exploratory screening study. Two impact indices are formulated: a weighted influence index and a hypervolume difference index. Using this approach, the promising heuristics for four design problems are identified, and selectively enforcing only these promising heuristics is benchmarked against enforcing all available heuristics and enforcing none. In all problems, enforcing only the promising heuristics as repair operators enables finding good designs faster than either alternative. Enforcing heuristics as soft constraints or biased sampling functions improves efficiency on some of the problems. Based on these results, guidelines for designers to leverage heuristics effectively in design optimization are presented.
-
Abstract: Reinforcement learning (RL) algorithms have had tremendous success in simulated domains. These algorithms, however, often cannot be directly applied to physical systems, especially when there are constraints to satisfy (e.g., to ensure safety or limit resource consumption). In standard RL, the agent is incentivized to explore any policy with the sole goal of maximizing reward; in the real world, however, ensuring satisfaction of certain constraints along the way is also necessary and essential. In this article, we review existing approaches to handling constraints in model-free reinforcement learning. We model the problem of learning with constraints as a Constrained Markov Decision Process and consider two main types of constraints: cumulative and instantaneous. We summarize existing approaches and discuss their pros and cons. To evaluate policy performance under constraints, we introduce a set of standard benchmarks and metrics. We also summarize the limitations of current methods and present open questions for future research.
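The Lagrangian-relaxation idea that underlies many approaches to cumulative constraints can be sketched on a toy two-action problem. This is a minimal illustration, not taken from the article: the actions, their reward/cost values, and the cost limit are all hypothetical, and a bare greedy rule stands in for a learned policy.

```python
actions = {"risky": (1.0, 1.0), "safe": (0.5, 0.0)}  # action -> (reward, cost)
cost_limit = 0.2   # allowed expected cost per step (hypothetical)
lam = 0.0          # Lagrange multiplier on the cost constraint
lr = 0.05          # dual-ascent step size

for _ in range(500):
    # Greedy "policy": pick the action maximizing the penalized reward r - lam * c.
    a = max(actions, key=lambda k: actions[k][0] - lam * actions[k][1])
    _, c = actions[a]
    # Dual ascent: raise lam when incurred cost exceeds the limit, lower it otherwise.
    lam = max(0.0, lam + lr * (c - cost_limit))
```

Near equilibrium, the multiplier settles where the penalized values of the two actions balance, so the agent mixes the risky and safe actions in a ratio that keeps average cost close to the limit.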