April 16, 2018

Benchmark of Surrogate-Based Optimization Algorithms for Computationally Expensive Problems

Introduction

In many practical optimization cases, evaluating the objective function is extremely time-consuming. Optimization problems of this kind arise in almost all engineering and scientific applications. A common practice is to use a surrogate model to reduce the computational effort by minimizing the total number of direct calls to the costly objective function. A surrogate model of the objective function helps to identify prospective areas of the design space, so that actual objective values are calculated only for the most promising designs. This framework is called Surrogate-Based Optimization (SBO). It is the basis of numerous modern optimization strategies, the most prominent example being the well-known Efficient Global Optimization (EGO) algorithm[1]. The generic loop is sketched below.
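For concreteness, the skeleton below shows the iteration structure shared by most SBO methods: fit a surrogate on all points evaluated so far, rank cheap candidate points with some acquisition criterion, and spend an expensive evaluation only on the winner. This is a minimal illustrative sketch, not the implementation of any particular tool discussed below; the function and argument names are ours.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def sbo_minimize(objective, lb, ub, budget, n_init, acquisition):
    """Generic SBO loop for minimization (illustrative sketch only)."""
    rng = np.random.default_rng(0)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    # Initial design: spend part of the budget on random points
    X = rng.uniform(lb, ub, size=(n_init, len(lb)))
    y = np.array([objective(x) for x in X])               # expensive calls
    for _ in range(budget - n_init):
        model = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        # Rank cheap random candidates with the acquisition criterion
        cand = rng.uniform(lb, ub, size=(2000, len(lb)))
        mu, sigma = model.predict(cand, return_std=True)
        x_next = cand[np.argmin(acquisition(mu, sigma, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))               # one expensive call
    return X[np.argmin(y)], y.min()
```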

pSeven provides advanced tools for effective design optimization, making the best possible use of limited computational resources and handling optimization problems with computationally expensive objective functions. The primary goal of this benchmark is to assess the efficiency of pSeven’s algorithms on a wide variety of test optimization problems. The set of test problems was chosen to reflect the typical features of real-world optimization problems. Third-party open-source SBO algorithms were also included to make the results more informative and to provide a relative estimate of our algorithms’ efficiency. These comparative algorithms represent well-known traditional approaches to solving highly time-consuming optimization problems.

Optimization Algorithms

1. pSeven

pSeven provides global optimization algorithms aimed at solving expensive optimization problems. Surrogate-based optimization in pSeven is built on the Gaussian process (GP) modeling technique[2, 3]. However, it implements so many tweaks and changes with respect to conventional formulations, including adaptive configuration with SmartSelection, that it is hardly possible to describe the pSeven SBO stack as a traditional GP-based methodology. The general SBO framework deals with the exploration-exploitation trade-off by optimizing a criterion that takes both the value and the uncertainty of the surrogate model into account. However, exploration may not always be necessary or affordable in engineering problems. For such situations, we provide the Direct SBO (DSBO) approach[4], which relies purely on exploitation: it takes into account only the value of the surrogate model. The benchmark results were obtained for both General and Direct SBO. To activate DSBO, the Global Phase Intensity option was set to 0; default values were used for the rest of the options. The contrast between the two kinds of criteria is sketched below.
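To illustrate the distinction, here are two toy acquisition criteria that could be plugged into the generic loop shown in the introduction: a lower confidence bound that mixes the surrogate’s value and uncertainty, and a pure-exploitation criterion in the spirit of DSBO. Both are textbook sketches of the idea, not pSeven’s actual internal criteria.

```python
def general_criterion(mu, sigma, f_best, kappa=2.0):
    # Lower confidence bound: trades predicted value (exploitation)
    # against model uncertainty (exploration). A common textbook choice,
    # not pSeven's internal criterion.
    return mu - kappa * sigma

def direct_criterion(mu, sigma, f_best):
    # Pure exploitation: only the surrogate's predicted value matters,
    # mirroring the idea behind DSBO (illustrative only).
    return mu
```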

2. Bayesian Optimization

Bayesian Optimization is a simple, straightforward implementation of the SBO approach built upon Bayesian inference and Gaussian processes[5]. The algorithm uses the scikit-learn module to build a Gaussian Process Regressor on randomly generated design points. At each iteration, the point that maximizes a so-called acquisition function over the current model is chosen to be probed next with the costly objective function. Three acquisition functions are available: Probability of Improvement, Expected Improvement and Upper Confidence Bound. The algorithm was run with the Expected Improvement acquisition function to imitate the original EGO algorithm, which uses it. All other settings were left at their default values. The termination criterion is simply reaching the maximum number of iterations, i.e., exhausting the optimization budget. In keeping with the implementation’s overall simplicity, the number of points used to build the initial model must be selected manually; we used 20% of the computational budget for that purpose. A minimal reconstruction of the Expected Improvement step is sketched below.
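The Expected Improvement step described above can be reconstructed in a few lines with scikit-learn and SciPy. The sketch below is our reading of the workflow, not the module’s actual source; the toy objective, kernel choice and candidate sampling are assumptions made for the example.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(X_cand, gp, f_best, xi=0.01):
    """EI of candidate points under the current GP model (minimization)."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)      # guard against zero variance
    imp = f_best - mu - xi                # expected gain over the incumbent
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

# Initial model on random design points (20% of the budget in our runs)
rng = np.random.default_rng(0)
X_train = rng.uniform(-2.0, 2.0, size=(10, 2))
y_train = np.sum(X_train**2, axis=1)      # toy objective: sphere function
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_train, y_train)

# The point maximizing EI is probed next with the costly objective
X_cand = rng.uniform(-2.0, 2.0, size=(1000, 2))
x_next = X_cand[np.argmax(expected_improvement(X_cand, gp, y_train.min()))]
```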

3. RBFOpt Library

In contrast to the trivial workflow of the previous module, RBFOpt[6] implements advanced techniques for costly engineering optimization, including adaptive selection of the surrogate model type and automatic management of the optimization budget. The algorithm is based on Radial Basis Functions, which can take several forms: linear, cubic, multiquadric, Gaussian and thin plate spline. At the initial step, the algorithm generates a training sample of points by a Latin Hypercube experimental design, maximizing the minimum distance between the points. At each iteration, either an exploration or an exploitation strategy is chosen using the so-called bumpiness measure; both produce a design point to be evaluated next. The exploration stage aims to refine the model in largely unexplored areas of the design space, while at the exploitation stage the algorithm intensifies the search to improve the best objective value reached so far, according to the current state of the model. A genetic algorithm is used to perform a global search on the surrogate model. For the benchmark, we used the default values for all of the algorithm’s settings; a small usage example follows.
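For reference, a minimal RBFOpt run looks roughly as follows. The sketch is based on the library’s documented Python interface, which may differ between versions; note also that RBFOpt depends on external solvers (e.g., Bonmin) being installed. The test function and bounds are assumptions made for the example.

```python
import numpy as np
import rbfopt

def rosenbrock(x):
    # Cheap analytic test function standing in for an expensive objective
    return sum(100.0 * (x[i + 1] - x[i]**2)**2 + (1.0 - x[i])**2
               for i in range(len(x) - 1))

dim = 4
bb = rbfopt.RbfoptUserBlackBox(dim,
                               np.full(dim, -2.0),     # lower bounds
                               np.full(dim, 2.0),      # upper bounds
                               np.array(['R'] * dim),  # continuous variables
                               rosenbrock)
settings = rbfopt.RbfoptSettings(max_evaluations=10 * dim)  # a 10N budget
val, x, itercount, evalcount, fast_evalcount = \
    rbfopt.RbfoptAlgorithm(settings, bb).optimize()
print(val, x)
```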

Performance Metric

Let’s call the maximum allowed number of objective function evaluations the optimization budget \(b\). The best objective value reached within the given optimization budget is denoted \(f^*(b)\) and is used as a performance metric. The set of benchmark problems is composed of 30 test problems[7, 8] of various dimensionality \(N\), listed in Table 1 in the attached PDF file. To get a reasonable estimate of each algorithm’s efficiency, several runs with different budgets were performed for each algorithm on each test problem. The results can be represented, for each problem, as a graph of the best-reached value versus the computational budget (shown in Figure 1).

Figure 1. Optimization results for the Rosenbrock function solved by two algorithms within different computational budgets

To summarize the results over all problems and to provide a concise summary of the algorithms’ efficiency, we build performance profiles[9]. The \(f^*(b)\) metric can’t be used directly for that purpose, since the problems have different scales of objective function values. To have a baseline for comparison, we define the performance rate of an optimization algorithm as the normalized gap to the best-known solution \(f^{opt}\):

\(f_{norm}^*(b) = \frac{|f^*(b) - f^{opt}|}{\max(|f^{opt}|,\ 1.0)}.\)

The performance profile of an optimization algorithm illustrates the fraction of problems it is able to solve at each particular performance rate value. The algorithm is deemed successful, i.e., considered to have reached the optimal solution, if \(f_{norm}^*(b) < 10^{-6}\). In other words, \(f^*(b)\) values close enough to \(f^{opt}\) are treated as equal to it. A small sketch of computing these quantities follows.
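In code, the performance rate and a single point of a performance profile reduce to a few lines. The sketch below simply restates the definitions above; the function and variable names are ours.

```python
import numpy as np

def normalized_gap(f_star, f_opt):
    """Performance rate: normalized gap to the best-known solution."""
    return abs(f_star - f_opt) / max(abs(f_opt), 1.0)

def profile_value(f_stars, f_opts, rate):
    """Fraction of problems solved to within a given performance rate."""
    gaps = np.array([normalized_gap(fs, fo)
                     for fs, fo in zip(f_stars, f_opts)])
    return np.mean(gaps <= rate)

# Success under the criterion above corresponds to rate = 1e-6, e.g.:
# profile_value(best_values, known_optima, 1e-6)
```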

Results & Conclusion

Each test problem was solved three times with optimization budgets typical of real problems with expensive objective functions: \(5N\), \(10N\), \(15N\). Performance profiles (shown in Figure 2) were built for each budget value. As can be seen, pSeven is well-suited for expensive optimization problems when computational resources are restricted. According to the performance profiles for the \(5N\) budget, the DSBO strategy can be quite useful when a viable solution must be obtained as quickly as possible. The benchmark results are summarized in Table 2 in the attached PDF file.

Benchmark results (attached file)

Figure 2. Performance profiles of the considered algorithms for the three computational budgets: 5N, 10N and 15N, respectively. The closer the performance profile is to the top left corner, the higher the probability of the corresponding algorithm's success and thus the more effective it is.

References

  1. Jones, Donald R., Matthias Schonlau, and William J. Welch. "Efficient global optimization of expensive black-box functions." Journal of Global Optimization 13.4, 1998.
  2. pSeven Core 6.12 SP 2 User Manual - GT Opt. DATADVANCE, 2018.
  3. Gubarev, F.V., and A.I. Pospelov. "Towards Large-Scale Surrogate-Based Optimization." 4th International Conference on Engineering Optimization, 2014.
  4. Pospelov, A.I. New Direct SBO Technique for Design Optimization in pSeven. DATADVANCE, 2017.
  5. Snoek, Jasper, Hugo Larochelle, and Ryan P. Adams. "Practical Bayesian optimization of machine learning algorithms." Advances in Neural Information Processing Systems, 2012.
  6. Costa, Alberto, and Giacomo Nannicini. "RBFOpt: an open-source library for black-box optimization with costly function evaluations." Optimization Online 4538, 2014.
  7. Jamil, Momin, and Xin-She Yang. "A literature survey of benchmark functions for global optimisation problems." International Journal of Mathematical Modelling and Numerical Optimisation 4.2, 2013.
  8. Adorio, Ernesto P., and U. Diliman. "MVF - Multivariate test functions library in C for unconstrained global optimization." Quezon City, Metro Manila, Philippines, 2005.
  9. Dolan, Elizabeth D., and Jorge J. Moré. "Benchmarking optimization software with performance profiles." Mathematical Programming 91.2, 2002.

 
