Self-optimizing the efficiency of vapor compression cycles (VCCs) involves adjusting multiple decision variables simultaneously to minimize power consumption while maintaining safe operating conditions. Due to the modeling complexity of cycle dynamics (and of other smart-building energy systems), online self-optimization requires algorithms that can safely and efficiently explore the search space in a derivative-free, model-agnostic manner, which makes Bayesian optimization (BO) a strong candidate. Unfortunately, classical BO algorithms ignore the relationship between consecutive optimizer candidates, resulting in jumps in the search space that can trigger fail-safe mechanisms or induce undesired transient dynamics that violate operational constraints. To this end, we propose safe local search region (LSR)-BO, a global optimization methodology that builds on the BO framework while enforcing two types of safety constraints: black-box constraints on the outputs and LSR constraints on the inputs. We provide theoretical guarantees that, under standard assumptions on the performance and constraint functions, LSR-BO satisfies the constraints at all iterations with high probability. Furthermore, in the presence of only input LSR constraints, we show the method converges to the true (unknown) globally optimal solution. We demonstrate the potential of LSR-BO on a high-fidelity simulation model of a commercial vapor compression system with LSR constraints on expansion valve positions and fan speeds, in addition to safety constraints on discharge and evaporator temperatures.
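The core LSR mechanism described above can be illustrated with a minimal sketch. All names here are hypothetical and this is not the paper's implementation: the next BO query is restricted to a box of half-width `radius` around the current operating point, intersected with the global bounds, so consecutive setpoints never jump far apart.

```python
import random

def lsr_constrained_argmax(acq, current, radius, bounds, n_samples=2000, seed=0):
    """Pick an acquisition maximizer restricted to a local search region (LSR):
    a box of half-width `radius` around the current operating point, clipped to
    the global bounds. Hypothetical sketch, not the paper's code."""
    rng = random.Random(seed)
    best_x, best_a = None, float("-inf")
    for _ in range(n_samples):
        # sample uniformly inside the LSR box, clipped to the global bounds
        x = tuple(
            min(hi, max(lo, c + rng.uniform(-radius, radius)))
            for (lo, hi), c in zip(bounds, current)
        )
        a = acq(x)
        if a > best_a:
            best_x, best_a = x, a
    return best_x

# toy acquisition that prefers points near (0.7, 0.3), e.g. a predicted low-power setpoint
acq = lambda x: -((x[0] - 0.7) ** 2 + (x[1] - 0.3) ** 2)
nxt = lsr_constrained_argmax(acq, current=(0.2, 0.2), radius=0.1,
                             bounds=[(0.0, 1.0), (0.0, 1.0)])
```

With the LSR constraint active, the optimizer inches toward the distant acquisition peak rather than jumping there in one step, which is exactly the behavior that protects actuators such as valves and fans from large transients.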
This content will become publicly available on July 24, 2026

Finding Interior Optimum of Black-box Constrained Objective with Bayesian Optimization
Optimizing objectives under constraints, where both the objectives and the constraints are black-box functions, is a common scenario in real-world applications such as the design of medical therapies, industrial process optimization, and hyperparameter optimization. One popular approach to handling these complex scenarios is Bayesian optimization (BO). However, when it comes to the theoretical understanding of constrained Bayesian optimization (CBO), existing frameworks often rely on heuristics, approximations, or relaxations of the objective and therefore lack the level of theoretical guarantees available for canonical BO. In this paper, we exclude boundary candidates that could be compromised by noise perturbation and aim to find the interior optimum of the black-box-constrained objective. We rely on the insight that optimizing the objective and learning the constraints can both help identify the high-confidence regions of interest (ROIs) that potentially contain the interior optimum. We propose an efficient CBO framework that intersects the ROIs identified from each aspect on a discretized search space to determine the general ROI. On this ROI, we then optimize acquisition functions that balance learning the constraints against optimizing the objective. We showcase the efficiency and robustness of the proposed CBO framework through high-probability regret bounds for the algorithm and extensive empirical evidence.
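The ROI-intersection idea admits a compact illustration. This is a hedged sketch under simplifying assumptions (scalar inputs, known confidence-bound functions, feasibility meaning c(x) ≤ 0); the function names are invented for the example, not taken from the paper:

```python
def intersect_roi(grid, f_lcb, f_ucb, c_lcb, threshold=0.0):
    """On a discretized search space, keep points that (a) could still be
    optimal according to the objective's confidence bounds and (b) could still
    be feasible according to the constraint's lower confidence bound
    (convention: c(x) <= threshold is feasible). Hypothetical sketch."""
    best_lcb = max(f_lcb(x) for x in grid)                 # value provably achievable
    opt_roi = {x for x in grid if f_ucb(x) >= best_lcb}    # possibly optimal points
    feas_roi = {x for x in grid if c_lcb(x) <= threshold}  # possibly feasible points
    return opt_roi & feas_roi                              # general ROI = intersection

grid = [i / 10 for i in range(11)]
f = lambda x: -(x - 0.6) ** 2     # toy stand-in for the unknown objective
c = lambda x: x - 0.85            # toy stand-in for the unknown constraint
roi = intersect_roi(grid,
                    f_lcb=lambda x: f(x) - 0.05, f_ucb=lambda x: f(x) + 0.05,
                    c_lcb=lambda x: c(x) - 0.05)
```

Points that the objective bounds already rule out, and points that the constraint bound already marks infeasible, drop out of the intersection, so subsequent acquisition optimization runs only over candidates that could still be the interior optimum.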
- PAR ID: 10618146
- Publisher / Repository: The 41st Conference on Uncertainty in Artificial Intelligence
- Date Published:
- Format(s): Medium: X
- Location: Rio de Janeiro, Brazil
- Sponsoring Org: National Science Foundation
More Like this
- 
            Bayesian optimization (BO) is a powerful paradigm for optimizing expensive black-box functions. Traditional BO methods typically rely on separate hand-crafted acquisition functions and surrogate models for the underlying function, and often operate in a myopic manner. In this paper, we propose a novel direct regret optimization approach that jointly learns the optimal model and non-myopic acquisition by distilling from a set of candidate models and acquisitions, and explicitly targets minimizing the multi-step regret. Our framework leverages an ensemble of Gaussian Processes (GPs) with varying hyperparameters to generate simulated BO trajectories, each guided by an acquisition function chosen from a pool of conventional choices, until a Bayesian early stop criterion is met. These simulated trajectories, capturing multi-step exploration strategies, are used to train an end-to-end decision transformer that directly learns to select next query points aimed at improving the ultimate objective. We further adopt a dense training–sparse learning paradigm: The decision transformer is trained offline with abundant simulated data sampled from ensemble GPs and acquisitions, while a limited number of real evaluations refine the GPs online. Experimental results on synthetic and real-world benchmarks suggest that our method consistently outperforms BO baselines, achieving lower simple regret and demonstrating more robust exploration in high-dimensional or noisy settings.
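The offline data-generation step this abstract describes, rolling out acquisitions from a pool on simulated objectives to produce (history, next-query) training pairs, can be sketched as follows. All names are hypothetical and the GP function draws are replaced by simple closures for brevity:

```python
import random

def simulate_trajectories(acq_pool, sample_objective, grid,
                          horizon=5, n_traj=4, seed=0):
    """Roll out acquisition rules drawn from a pool on simulated objectives,
    recording (history, next query) pairs that a decision transformer could
    later be trained on. Hedged sketch, not the paper's pipeline."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_traj):
        f = sample_objective(rng)     # one simulated objective (stand-in for a GP draw)
        acq = rng.choice(acq_pool)    # acquisition chosen from the conventional pool
        history = []
        for _ in range(horizon):
            x = acq(history, grid, rng)
            history.append((x, f(x)))
            data.append((tuple(history), x))  # supervised pair for the transformer
    return data

# toy pool: pure random search, and greedy re-query of the best point seen so far
rand_acq = lambda hist, grid, rng: rng.choice(grid)
greedy_acq = lambda hist, grid, rng: (
    max(hist, key=lambda p: p[1])[0] if hist else rng.choice(grid))
data = simulate_trajectories(
    [rand_acq, greedy_acq],
    sample_objective=lambda rng: (lambda x, t=rng.random(): -(x - t) ** 2),
    grid=[i / 10 for i in range(11)],
)
```

Because the trajectories are cheap to simulate, abundant offline data of this form is what makes the dense-training side of the dense training–sparse learning paradigm feasible.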
- 
            Optimizing a black-box function that is expensive to evaluate arises in a gamut of machine learning and artificial intelligence applications, including drug discovery, policy optimization in robotics, and hyperparameter tuning of learning models, to list a few. Bayesian optimization (BO) provides a principled framework to find the global optimum of such functions using a limited number of function evaluations. BO relies on a statistical surrogate model, typically a Gaussian process (GP), to actively select new query points. Unlike most existing approaches, which hinge on a single GP surrogate model with a pre-selected kernel function that may confine the expressiveness of the sought function, especially under a limited evaluation budget, the present work puts forth a weighted ensemble of GPs as the surrogate model. Building on the advocated Gaussian mixture (GM) posterior, the EGP framework adapts to the best-fitted surrogate model as data arrive on the fly, offering a richer function space. For acquisition of the next evaluation point, the EGP-based posterior is coupled with an adaptive expected improvement (EI) criterion to balance exploration and exploitation of the search space. Numerical tests on a set of benchmark synthetic functions and two robotic tasks demonstrate the impressive benefits of the proposed approach.
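The on-the-fly adaptation of ensemble weights can be sketched with a standard Bayesian-model-averaging update. This is an illustrative assumption about the mechanism (names invented), not the paper's exact update rule:

```python
import math

def update_weights(weights, preds, y, noise=0.1):
    """Each surrogate m predicts (mean_m, std_m) at the new query; its weight
    is multiplied by the Gaussian likelihood of the observed y, then the
    weights are renormalized, so the best-fitting surrogate dominates as data
    arrive on the fly. Hypothetical sketch of the reweighting idea."""
    new = []
    for w, (mu, sd) in zip(weights, preds):
        var = sd * sd + noise * noise  # predictive variance plus observation noise
        ll = math.exp(-(y - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
        new.append(w * ll)
    z = sum(new)
    return [w / z for w in new]

# two surrogates disagree; the observation y = 1.1 strongly favors the first
w = update_weights([0.5, 0.5], preds=[(1.0, 0.2), (3.0, 0.2)], y=1.1)
```

After one informative observation the mixture mass shifts almost entirely to the surrogate whose kernel explains the data, which is what lets the ensemble behave like an adaptively chosen single model.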
- 
            Bayesian optimization (BO) has well-documented merits for optimizing black-box functions with an expensive evaluation cost. Such functions emerge in applications as diverse as hyperparameter tuning, drug discovery, and robotics. BO hinges on a Bayesian surrogate model to sequentially select query points so as to balance exploration with exploitation of the search space. Most existing works rely on a single Gaussian process (GP) based surrogate model, where the kernel function form is typically preselected using domain knowledge. To bypass such a design process, this paper leverages an ensemble (E) of GPs to adaptively select the surrogate model fit on-the-fly, yielding a GP mixture posterior with enhanced expressiveness for the sought function. Acquisition of the next evaluation input using this EGP-based function posterior is then enabled by Thompson sampling (TS) that requires no additional design parameters. To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model. The novel EGP-TS readily accommodates parallel operation. To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret for both sequential and parallel settings. Tests on synthetic functions and real-world applications showcase the merits of the proposed method.more » « less
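A single EGP-TS acquisition step has a naturally hierarchical form: draw one GP from the ensemble, then draw a function sample from it and query its argmax. The sketch below makes simplifying assumptions (discrete grid, independent per-point draws standing in for a proper correlated GP sample, invented names):

```python
import random

def egp_thompson_step(grid, models, weights, seed=0):
    """Hedged sketch of EGP-based Thompson sampling: first draw one surrogate
    from the ensemble according to its weight, then draw a function sample
    from that surrogate on the grid and return its argmax as the next query.
    Each model maps a point to a (mean, std) posterior."""
    rng = random.Random(seed)
    m = rng.choices(range(len(models)), weights=weights)[0]  # sample a GP model
    sample = {x: rng.gauss(*models[m](x)) for x in grid}     # sample a function draw
    return max(grid, key=lambda x: sample[x])                # query its maximizer

grid = [i / 10 for i in range(11)]
# one fully confident surrogate that believes the peak is at x = 0.5
peaked = lambda x: ((1.0 if x == 0.5 else 0.0), 0.0)
x_next = egp_thompson_step(grid, models=[peaked], weights=[1.0])
```

Because each parallel worker can run this two-stage draw independently, the scheme extends to batch/parallel acquisition without extra design parameters, consistent with the abstract's claim.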
- 
            The design of materials and identification of optimal processing parameters constitute a complex and challenging task, necessitating efficient utilization of available data. Bayesian optimization (BO) has gained popularity in materials design due to its ability to work with minimal data. However, many BO-based frameworks predominantly rely on statistical information, in the form of input-output data, and assume black-box objective functions. In practice, designers often possess knowledge of the underlying physical laws governing a material system, rendering the objective function not entirely black-box, as some information is partially observable. In this study, we propose a physics-informed BO approach that integrates physics-infused kernels to effectively leverage both statistical and physical information in the decision-making process. We demonstrate that this method significantly improves decision-making efficiency and enables more data-efficient BO. The applicability of this approach is showcased through the design of NiTi shape memory alloys, where the optimal processing parameters are identified to maximize the transformation temperature.
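One common way to infuse physics into a kernel, offered here only as a plausible sketch (the functional form and names are assumptions, not the paper's exact kernel), is to blend a standard RBF on the raw processing parameter with an RBF on a physics-derived feature:

```python
import math

def physics_infused_kernel(x1, x2, phi, gamma=1.0, alpha=0.5):
    """Blend a purely statistical RBF kernel on raw inputs with an RBF on a
    physics-derived feature phi(x) (e.g. an analytical model relating a
    processing parameter to transformation temperature), so correlations
    follow the physical law where data are sparse. Hypothetical sketch."""
    k_data = math.exp(-gamma * (x1 - x2) ** 2)            # data-driven similarity
    k_phys = math.exp(-gamma * (phi(x1) - phi(x2)) ** 2)  # physics-informed similarity
    return alpha * k_data + (1.0 - alpha) * k_phys

phi = lambda x: x ** 2  # hypothetical physical law linking input to response
k_ab = physics_infused_kernel(0.2, 0.8, phi)
k_ba = physics_infused_kernel(0.8, 0.2, phi)
```

A convex combination of two valid kernels is itself a valid (positive semidefinite) kernel, so the GP machinery is unchanged while prior correlations now reflect the partially observable physics.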