Title: Rectangle Search: An Anytime Beam Search
Anytime heuristic search algorithms try to find a (potentially suboptimal) solution as quickly as possible and then work to find better and better solutions until an optimal solution is obtained or time is exhausted. The most widely-known anytime search algorithms are based on best-first search. In this paper, we propose a new algorithm, rectangle search, that is instead based on beam search, a variant of breadth-first search. It repeatedly explores alternatives at all depth levels and is thus best-suited to problems featuring deep local minima. Experiments using a variety of popular search benchmarks suggest that rectangle search is competitive with fixed-width beam search and often performs better than the previous best anytime search algorithms.
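For context, the fixed-width beam search that rectangle search is compared against can be sketched roughly as follows. This is a generic Python illustration, not the paper's rectangle search; `successors`, `heuristic`, and `is_goal` stand for problem-specific callables, and states are assumed hashable.

    import heapq

    def beam_search(start, successors, heuristic, is_goal, width=100):
        """Generic fixed-width beam search sketch (not the paper's rectangle
        search). Expands one depth layer at a time and keeps only the `width`
        most promising children according to the heuristic."""
        layer = [start]
        seen = {start}
        while layer:
            children = []
            for state in layer:
                if is_goal(state):
                    return state
                for child in successors(state):
                    if child not in seen:
                        seen.add(child)
                        children.append(child)
            # Prune the next layer down to the beam width.
            layer = heapq.nsmallest(width, children, key=heuristic)
        return None  # beam exhausted without reaching a goal

As the abstract notes, rectangle search instead repeatedly explores alternatives at all depth levels, which is what gives it its anytime character.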
Award ID(s):
2008594
PAR ID:
10546024
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
AAAI Press
Date Published:
Journal Name:
Proceedings of the AAAI Conference on Artificial Intelligence
Volume:
38
Issue:
18
ISSN:
2159-5399
Page Range / eLocation ID:
20751 to 20758
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The Pareto-optimal frontier for a bi-objective search problem instance consists of all solutions that are not worse than any other solution in both objectives. The size of the Pareto-optimal frontier can be exponential in the size of the input graph, and hence finding it can be hard. Some existing works leverage a user-specified approximation factor ε to compute an approximate Pareto-optimal frontier that can be significantly smaller than the Pareto-optimal frontier. In this paper, we propose an anytime approximate bi-objective search algorithm, called Anytime Bi-Objective A*-ε (A-BOA*ε). A-BOA*ε is useful when deliberation time is limited. It first finds an approximate Pareto-optimal frontier quickly, iteratively improves it while time allows, and eventually finds the Pareto-optimal frontier. It efficiently reuses the search effort from previous iterations and makes use of a novel pruning technique. Our experimental results show that A-BOA*ε substantially outperforms baseline algorithms that do not reuse previous search effort, both in terms of runtime and number of node expansions. In fact, the most advanced variant of A-BOA*ε even slightly outperforms BOA*, a state-of-the-art bi-objective search algorithm, for finding the Pareto-optimal frontier. Moreover, given only a limited amount of deliberation time, A-BOA*ε finds solutions that collectively approximate the Pareto-optimal frontier much better than the solutions found by BOA*. 
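  As a quick illustration of the dominance and ε-approximation notions used above (a sketch of the standard bi-objective definitions, not of A-BOA*ε itself), where each cost is a pair (c[0], c[1]):

    def dominates(c1, c2):
        """Bi-objective dominance: c1 is no worse than c2 in both objectives
        and strictly better in at least one."""
        return (c1[0] <= c2[0] and c1[1] <= c2[1]
                and (c1[0] < c2[0] or c1[1] < c2[1]))

    def eps_dominates(c1, c2, eps):
        """Approximate dominance with factor eps: c1 is within a (1 + eps)
        multiplicative factor of c2 in both objectives; this relaxation is
        what lets an approximate frontier be much smaller than the exact one."""
        return c1[0] <= (1 + eps) * c2[0] and c1[1] <= (1 + eps) * c2[1]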
  2. Finding optimal solutions to state-space search problems often takes too long, even when using A* with a heuristic function. Instead, practitioners often use a tunable approach, such as weighted A*, that allows them to adjust a trade-off between search time and solution cost until the search is sufficiently fast for the intended application. In this paper, we study algorithms for this problem setting, which we call 'tunable suboptimal search'. We introduce a simple baseline, called Speed*, that uses distance-to-go information to speed up search. Experimental results on standard search benchmarks suggest that 1) bounded-suboptimal searches suffer overhead due to enforcing a suboptimality bound, 2) beam searches can perform well, but fare poorly in domains with dead-ends, and 3) Speed* provides robust overall performance.
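  For reference, the weighted A* baseline mentioned above can be sketched as follows (a minimal illustration; Speed* itself orders nodes by distance-to-go, which is not reproduced here). `successors` is assumed to yield (child, edge_cost) pairs and `heuristic` to estimate remaining cost.

    import heapq
    from itertools import count

    def weighted_a_star(start, successors, heuristic, is_goal, weight=2.0):
        """Weighted A* sketch: ordering the open list by g + weight * h trades
        solution cost for search speed as `weight` grows above 1."""
        tie = count()  # tiebreaker so the heap never compares states directly
        open_list = [(weight * heuristic(start), next(tie), 0.0, start, [start])]
        best_g = {start: 0.0}
        while open_list:
            _, _, g, state, path = heapq.heappop(open_list)
            if is_goal(state):
                return path, g
            for child, edge_cost in successors(state):
                child_g = g + edge_cost
                if child_g < best_g.get(child, float("inf")):
                    best_g[child] = child_g
                    f = child_g + weight * heuristic(child)
                    heapq.heappush(open_list,
                                   (f, next(tie), child_g, child, path + [child]))
        return None, float("inf")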
  3. In domains where measures of utility for automatically-designed artefacts (or agents performing subjective tasks) are difficult or impossible to mathematically describe (such as ‘be interesting’), human interactive search algorithms are an attractive alternative. However, despite notable achievements, they are still designed around a specific search method, resulting in a lack of problem generality: applying a new search algorithm requires so much redesign that an altogether new interactive method is formed in the process. This leads to missed opportunities for human interactive methods to utilize the power of state-of-the-art optimization algorithms. Here, we introduce for the first time a framework for human interactive optimization that is agnostic to both the search method and the application problem. Using 13 different search methods on 24 fitness functions commonly found in evolutionary algorithm benchmarks, we show that our approach works on the majority of tested applications: many of the search methods, provided with access to the fitness functions, performed no better than our framework, which employs surrogate human participants who act as less informed and erroneous representations of the fitness function. In this way, our framework for interactive optimization provides a scalable solution by facilitating the integration of numerous types of current state-of-the-art or future search algorithms. Future work will involve generalizing this method to admit multi-objective optimization methods and validation with human participants.
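  A toy rendering of the surrogate-participant idea described above (purely illustrative; the pairwise-comparison interface, the error model, and the minimization assumption are assumptions for illustration, not taken from the paper):

    import random

    def make_surrogate(fitness, error_rate=0.15, rng=random.Random(0)):
        """Returns a pairwise judge standing in for a human rater: it consults
        the true fitness (assumed to be minimized) but reverses its preference
        with probability `error_rate`, acting as a less informed, error-prone
        representation of the fitness function."""
        def prefers(a, b):
            truthful = fitness(a) <= fitness(b)
            return truthful if rng.random() >= error_rate else not truthful
        return prefers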
  4. In multi-objective search, edges are annotated with cost vectors consisting of multiple cost components. A path dominates another path with the same start and goal vertices iff the component-wise sum of the cost vectors of the edges of the former path is 'less than' the component-wise sum of the cost vectors of the edges of the latter path. The Pareto-optimal solution set is the set of all undominated paths from a given start vertex to a given goal vertex. Its size can be exponential in the size of the graph being searched, which makes multi-objective search time-consuming. In this paper, we therefore study how to find an approximate Pareto-optimal solution set for a user-provided vector of approximation factors. The size of such a solution set can be significantly smaller than the size of the Pareto-optimal solution set, which enables the design of approximate multi-objective search algorithms that are efficient and produce small solution sets. We present such an algorithm in this paper, called A*pex. A*pex builds on PPA*, a state-of-the-art approximate bi-objective search algorithm (where there are only two cost components) but (1) makes PPA* more efficient for bi-objective search and (2) generalizes it to multi-objective search for any number of cost components. We first analyze the correctness of A*pex and then experimentally demonstrate its efficiency advantage over existing approximate algorithms for bi- and tri-objective search. 
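  The dominance relation defined above, and its relaxation by a user-provided vector of approximation factors, can be written down directly (a sketch of the definitions, not of A*pex; costs and eps are sequences of equal length):

    def dominates(cost_a, cost_b):
        """cost_a dominates cost_b iff it is no worse in every cost component
        and strictly better in at least one (any number of components)."""
        return (all(a <= b for a, b in zip(cost_a, cost_b))
                and any(a < b for a, b in zip(cost_a, cost_b)))

    def approx_dominates(cost_a, cost_b, eps):
        """cost_a approximately dominates cost_b for a per-component vector of
        approximation factors eps iff every component is within its allowed
        (1 + eps_i) multiplicative slack."""
        return all(a <= (1 + e) * b for a, b, e in zip(cost_a, cost_b, eps))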
  5. There are many settings that extend the basic shortest-path search problem. In Bounded-Cost Search, we are given a constant bound, and the task is to find a solution within the bound. In Bi-Objective Search, each edge is associated with two costs (objectives), and the task is to minimize both objectives. In this paper, we combine both settings into a new setting of Bounded-Cost Bi-Objective Search. We are given two bounds, one for each objective, and the task is to find a solution within these bounds. We provide a scheme for normalizing the two objectives, introduce several algorithms for this new setting and compare them experimentally. 
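  A minimal way to state the acceptance and pruning conditions for this combined setting (a sketch under the assumption of admissible per-objective heuristics; the paper's normalization scheme and algorithms are not reproduced here):

    def within_bounds(cost, bounds):
        """A solution is acceptable iff each of its two objective values meets
        the corresponding user-given bound."""
        return all(c <= b for c, b in zip(cost, bounds))

    def prunable(g, h, bounds):
        """A search node can be discarded iff its optimistic total g_i + h_i
        already exceeds the bound for some objective (assumes each h_i never
        overestimates the remaining cost in that objective)."""
        return any(g_i + h_i > b_i for g_i, h_i, b_i in zip(g, h, bounds))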