Title: Opening the Black Box: Automated Software Analysis for Algorithm Selection
Impressive performance improvements have been achieved in many areas of AI by meta-algorithmic techniques, such as automated algorithm selection and configuration. However, existing techniques treat the target algorithms they are applied to as black boxes: nothing is known about their inner workings. This keeps meta-algorithmic techniques broadly applicable, but leaves untapped the potential performance improvements that a deeper analysis of the target algorithms could enable. In this paper, we open the black box without sacrificing the universal applicability of meta-algorithmic techniques by analyzing algorithms automatically. We show how to use this information to perform algorithm selection, and demonstrate improved performance compared to previous approaches that treat algorithms as black boxes.
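As a rough, hypothetical illustration of per-instance algorithm selection in general (not the paper's pipeline), the sketch below trains a runtime model on instance features concatenated with algorithm-derived features and picks the algorithm with the lowest predicted runtime; all features, data, and the choice of model are placeholders.

    # Hedged sketch: per-instance algorithm selection via runtime regression.
    # Instance features, algorithm-derived features, and runtimes are synthetic
    # placeholders; the model choice is illustrative only.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n_train, n_inst_feats, n_algo_feats = 200, 5, 3
    algo_feats = rng.normal(size=(4, n_algo_feats))        # one row per algorithm

    # Training data: (instance features, algorithm features) -> observed runtime.
    X_inst = rng.normal(size=(n_train, n_inst_feats))
    algo_idx = rng.integers(0, 4, size=n_train)
    X = np.hstack([X_inst, algo_feats[algo_idx]])
    y = rng.exponential(scale=10.0, size=n_train)           # placeholder runtimes

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    def select_algorithm(instance_features):
        """Return the index of the algorithm with the lowest predicted runtime."""
        rows = np.hstack([np.tile(instance_features, (len(algo_feats), 1)), algo_feats])
        return int(np.argmin(model.predict(rows)))

    print(select_algorithm(rng.normal(size=n_inst_feats)))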
Award ID(s):
1813537
NSF-PAR ID:
10470674
Author(s) / Creator(s):
Publisher / Repository:
Proceedings of the First International Conference on Automated Machine Learning, PMLR 188:6
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Algorithms for hard combinatorial problems often exhibit complementary performance, i.e., where one algorithm fails, another shines. Algorithm portfolios and algorithm selection take advantage of this by running all algorithms in parallel or by choosing the single best one to run on a problem instance. In this paper, we show that neither of these approaches gives the best possible performance and propose the happy medium of running a subset of all algorithms in parallel. We propose a method to choose this subset automatically for each problem instance, and demonstrate empirical improvements of up to 19% in terms of runtime, 81% in terms of misclassification penalty, and 26% in terms of penalized average runtime on scenarios from the ASlib benchmark library. Unlike other algorithm selection and scheduling approaches in the literature, our performance measures are based on the actual performance of algorithms running in parallel rather than assuming overhead-free parallelization based on sequential performance. Our approach is easy to apply in practice and, unlike other techniques in the literature, does not require solving hard problems to obtain a schedule, while still delivering superior performance.
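    A minimal, hypothetical sketch of the core idea above (per-instance selection of a parallel subset): rank algorithms by predicted runtime, run the top k in parallel, and count the portfolio as finished when its fastest member finishes. The predictor, the idealized overhead-free cost model, and all numbers are placeholders; the paper instead measures actual parallel performance.

        # Hedged sketch: choose a per-instance parallel subset from predicted
        # runtimes; the min() cost model is the idealized, overhead-free view.
        import numpy as np

        def choose_parallel_subset(predicted_runtimes, k):
            """Indices of the k algorithms with the lowest predicted runtime."""
            return list(np.argsort(predicted_runtimes)[:k])

        def idealized_portfolio_runtime(true_runtimes, subset):
            """Idealized parallel runtime: the fastest member of the subset."""
            return min(true_runtimes[i] for i in subset)

        predicted = np.array([12.0, 3.5, 40.0, 7.2])   # placeholder predictions
        true_rt   = np.array([15.0, 9.0, 2.0, 8.0])    # placeholder ground truth
        subset = choose_parallel_subset(predicted, k=2)
        print(subset, idealized_portfolio_runtime(true_rt, subset))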
  2. Abstract

    Ultrathin meta-optics offer unmatched, multifunctional control of light. Next-generation optical technologies, however, demand unprecedented performance. This will likely require design algorithms surpassing the capability of human intuition. For the adjoint method, this requires explicitly deriving gradients, which is sometimes challenging for certain photonics problems. Existing techniques also comprise a patchwork of application-specific algorithms, each focused in scope and scatterer type. Here, we leverage algorithmic differentiation as used in artificial neural networks, treating photonic design parameters as trainable weights, optical sources as inputs, and encapsulating device performance in the loss function. By solving a complex, degenerate eigenproblem and formulating rigorous coupled-wave analysis as a computational graph, we support both arbitrary, parameterized scatterers and topology optimization. With iteration times below the cost of two forward simulations typical of adjoint methods, we generate multilayer, multifunctional, and aperiodic meta-optics. As an open-source platform adaptable to other algorithms and problems, we enable fast and flexible meta-optical design.

     
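    As a toy illustration of the design loop described above, design parameters are treated as trainable weights and device performance is wrapped in a scalar loss; a smooth stand-in "simulation" replaces a real RCWA solver, and the target response, surrogate function, and optimizer settings are hypothetical.

        # Hedged sketch of gradient-based design via algorithmic differentiation.
        # simulate() is a toy stand-in for a differentiable electromagnetic solver
        # (e.g. RCWA expressed as a computational graph).
        import torch

        params = torch.nn.Parameter(torch.rand(64))   # hypothetical per-pixel design knobs
        target = torch.full((64,), 0.5)               # hypothetical target response

        def simulate(p):
            return torch.sigmoid(torch.cumsum(p, dim=0) / 8.0)

        opt = torch.optim.Adam([params], lr=0.05)
        for step in range(200):
            opt.zero_grad()
            loss = torch.mean((simulate(params) - target) ** 2)
            loss.backward()                           # gradients via autodiff
            opt.step()
        print(float(loss))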
  3. Meta-learning has enabled learning statistical models that can be quickly adapted to new prediction tasks. Motivated by use cases in personalized federated learning, we study an often overlooked aspect of modern meta-learning algorithms: their data efficiency. To shed more light on which methods are more efficient, we use techniques from algorithmic stability to derive bounds on the transfer risk that have important practical implications, indicating how much supervision is needed and how it must be allocated for each method to attain the desired level of generalization. Further, we introduce a new, simple framework for evaluating meta-learning methods under a limit on the available supervision, conduct an empirical study of MAML, Reptile, and Protonets, and demonstrate the differences in the behavior of these methods on few-shot and federated learning benchmarks. Finally, we propose active meta-learning, which incorporates active data selection into learning-to-learn, leading to better performance of all methods in the limited-supervision regime.
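    A hedged toy sketch of adaptation under a fixed supervision budget, with an optional active-selection step that labels the points the current model is least certain about; the task, model, and budget below are hypothetical and far simpler than the paper's few-shot and federated setups.

        # Hedged sketch: passive vs. active label selection under a budget.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 2))
        y = (X @ rng.normal(size=2) > 0).astype(int)     # toy binary task

        def adapt(X, y, budget, active):
            # Seed with one labeled example per class so the classifier is defined.
            idx = [int(np.where(y == 0)[0][0]), int(np.where(y == 1)[0][0])]
            clf = LogisticRegression().fit(X[idx], y[idx])
            while len(idx) < budget:
                pool = [i for i in range(len(X)) if i not in idx]
                if active:   # label the pool point closest to the decision boundary
                    idx.append(pool[int(np.argmin(np.abs(clf.decision_function(X[pool]))))])
                else:        # label a random pool point
                    idx.append(int(rng.choice(pool)))
                clf = LogisticRegression().fit(X[idx], y[idx])
            return clf

        for active in (False, True):
            print(active, adapt(X, y, budget=20, active=active).score(X, y))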
  4. Anytime algorithms enable intelligent systems to trade computation time for solution quality. To exploit this crucial ability in real-time decision-making, the system must decide when to interrupt the anytime algorithm and act on the current solution. Existing meta-level control techniques, however, address this problem by relying on significant offline work that diminishes their practical utility and accuracy. We formally introduce an online performance prediction framework that enables meta-level control to adapt to each instance of a problem without any preprocessing. Using this framework, we then present a meta-level control technique and two stopping conditions. Finally, we show that our approach outperforms existing techniques that require substantial offline work. The result is efficient nonmyopic meta-level control that reduces the overhead and increases the benefits of using anytime algorithms in intelligent systems.

     
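    A simplified, hypothetical sketch of a myopic stopping rule for an anytime algorithm: observe the quality curve online, estimate the next improvement from recent observations, and stop once the predicted gain no longer outweighs the cost of the time it takes. The quality curve, cost constant, and predictor below are placeholders; the paper's nonmyopic controller is more involved.

        # Hedged sketch: online, myopic stopping of an anytime algorithm.
        import random

        def anytime_step(quality):
            # Placeholder anytime algorithm: diminishing, noisy improvements.
            return quality + random.uniform(0.0, 0.2) * (1.0 - quality)

        TIME_COST = 0.01                  # hypothetical utility lost per time step
        quality, history = 0.0, []
        for t in range(1, 1000):
            quality = anytime_step(quality)
            history.append(quality)
            if len(history) >= 3:
                predicted_gain = (history[-1] - history[-3]) / 2.0   # crude online estimate
                if predicted_gain < TIME_COST:
                    break                 # expected gain no longer worth the time
        print(t, round(quality, 3))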
  5. The comprehensive properties of high-entropy alloys (HEAs) are highly dependent on their phases. Although a large number of machine learning (ML) algorithms have been successfully applied to phase prediction of HEAs, the accuracies of different ML algorithms on the same dataset vary significantly. Selecting an efficient ML algorithm would therefore significantly reduce the number and cost of experiments. In this work, phase prediction of HEAs (PPH) is proposed by integrating a criterion with a machine learning recommendation method (MLRM). First, a meta-knowledge table based on the characteristics of HEAs and the performance of candidate algorithms is established, and meta-learning based on this table is adopted to recommend an algorithm with desirable accuracy. Second, an MLRM based on improved meta-learning is engineered to recommend a more desirable algorithm for phase prediction. Finally, considering the poor interpretability and generalization of single ML algorithms, a PPH combining the advantages of the MLRM and the criterion is proposed to improve the accuracy of phase prediction. The PPH is validated on 902 samples from 12 datasets, including 405 quinary HEAs, 359 senary HEAs, and 138 septenary HEAs. The experimental results show that PPH achieves better performance than the traditional meta-learning method. The average prediction accuracy of PPH on all, quinary, senary, and septenary HEAs is 91.6%, 94.3%, 93.1%, and 95.8%, respectively.
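    A rough, hypothetical sketch of the recommendation step: a meta-knowledge table maps dataset meta-features to the historically best candidate algorithm, and a nearest-neighbour meta-model recommends an algorithm for a new dataset. The meta-features, candidate list, and table entries below are made up for illustration.

        # Hedged sketch: meta-learning-based algorithm recommendation.
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        # Meta-knowledge table: rows = past datasets, columns = hypothetical
        # meta-features (e.g. sample count, number of elements, entropy statistic).
        meta_features = np.array([
            [400, 5, 1.61],
            [350, 6, 1.79],
            [140, 7, 1.95],
            [250, 5, 1.55],
        ])
        best_algorithm = ["random_forest", "svm", "gradient_boosting", "random_forest"]

        meta_model = KNeighborsClassifier(n_neighbors=1).fit(meta_features, best_algorithm)

        def recommend(new_meta_features):
            """Recommend a candidate algorithm for an unseen dataset."""
            return meta_model.predict([new_meta_features])[0]

        print(recommend([300, 6, 1.70]))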