Improved scatter search for the global optimization of computationally expensive dynamic models
JOSE A. EGEA, Process Engineering Group, Instituto de Investigaciones Marinas (C.S.I.C.), Eduardo Cabello 6, 36208 Vigo, Spain. jegea@iim.csic.es
EMMANUEL VAZQUEZ, Department of Signal and Electronic Systems, Supélec, Plateau de Moulon, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette, France. emmanuel.vazquez@supelec.fr
JULIO R. BANGA, Process Engineering Group, Instituto de Investigaciones Marinas (C.S.I.C.), Eduardo Cabello 6, 36208 Vigo, Spain. julio@iim.csic.es
RAFAEL MARTÍ, Departamento de Estadística e Investigación Operativa, Universitat de València, Dr. Moliner 50, 46100 Burjassot (Valencia), Spain. rafael.marti@uv.es
Abstract
A new algorithm for global optimization of costly nonlinear continuous problems is presented in this paper. The algorithm is based on the scatter search metaheuristic, which has recently proved to be efficient for solving combinatorial and nonlinear optimization problems. A kriging-based prediction method has been coupled to the main optimization routine in order to discard the evaluation of solutions that are not likely to provide high quality function values. This makes the algorithm suitable for the optimization of computationally costly problems, as is illustrated in its application to two benchmark problems and its comparison with other algorithms.
Keywords
Global Optimization, Expensive Functions, Scatter Search, Kriging.
1. Introduction
Many industrial and engineering problems can be formulated as optimization problems (Biegler and Grossmann 2004). These problems are often nonlinear and present dynamic behaviour due to their operating policies (e.g. batch or semi-batch operation) or to their inherent nonlinear dynamic nature (e.g. in biotechnological processes, as reviewed by Banga et al. 2003). Further, in most real cases some specifications and/or constraints (which may also be nonlinear and/or dynamic in nature) must be ensured. All these characteristics frequently result in non-convex problems, so the use of global optimization methods becomes mandatory (Floudas et al. 2005). Another relevant feature of this kind of problem, which has been the subject of recent research, is the significant computation time required by each function evaluation. Indeed, due to the complexity of the mathematical models representing real processes, the simulation of a complex system can take from
minutes to hours on a standard workstation. Therefore, the use of some kind of surrogate model, which substitutes the original one with enough accuracy, may help to alleviate this problem. Surrogate models are cheaper to evaluate, so their use results in reductions of the total computation time, making the optimization affordable from an industrial point of view. The taxonomy of global optimization methods based on response surfaces by Jones (2001a) states the problem and presents different methodologies to solve it. The most promising techniques to date seem to be kriging (whose most popular implementation is the EGO algorithm of Jones et al. 1998) and interpolation by radial basis functions (RBFs; Gutmann 2001). In this contribution, we present a methodology for the global optimization of (possibly dynamic, non-smooth) nonlinear problems with expensive evaluation. This methodology, and the associated software tool, SSKm (Scatter Search with Kriging for Matlab), is able to manage this class of problems by linking a scatter search method with a kriging interpolation. The metaheuristic known as scatter search (Laguna and Martí 2003) is an evolutionary method founded on the premise that systematic designs and methods for creating new solutions afford significant benefits beyond those derived from recourse to randomization. This methodology has been successfully applied to a wide array of hard optimization problems. Our new procedure is an extension of a recent advanced design of this methodology (Egea et al. 2007) and treats the objective function as a black box, making the search algorithm context-independent. The kriging predictor implemented in SSKm avoids the evaluation of solution vectors that are likely to provide low-quality function values, thus efficiently reducing the number of simulations needed to find the vicinity of the global solution. The paper is organised as follows: Sections 2 and 3 present brief overviews of the general scatter search and kriging methodologies, respectively. Section 4 presents our algorithm SSKm, explaining its features in detail. Section 5 presents illustrative examples of the algorithm's application, one of them being a real operational design problem for a waste water treatment plant (WWTP) benchmark. The final section contains the conclusions of this study.
2. Scatter Search
Scatter search (SS) was first introduced in Glover (1977) as a heuristic for integer programming. SS consists of five elements that can be implemented with different degrees of sophistication. The basic SS design is based on the "five-method template" (Laguna and Martí 2003):

A Diversification Generation Method to generate a collection of diverse trial solutions within the search space.
An Improvement Method to transform a trial solution into one or more enhanced trial solutions.
A Reference Set Update Method to build and maintain a reference set consisting of the b "best" solutions found (where the value of b is typically small, e.g. no more than 20). Solutions gain membership to the reference set according to their quality or their diversity.
A Subset Generation Method to operate on the reference set, producing several subsets of its solutions as a basis for creating combined solutions.
A Solution Combination Method to transform a given subset of solutions produced by the Subset Generation Method into one or more combined solution vectors.

Figure 1 illustrates the main steps of the SS algorithm. The circles represent solutions and the darker circles represent improved solutions resulting from the application of the Improvement Method. The algorithm starts (SS Initialization) with the creation of an initial set of solutions P generated with the Diversification Generation Method, from which the reference set (Refset) is extracted. The initial reference set is built according to the Reference Set Update Method, which takes the b/2 best solutions (as regards their quality for the problem at hand) and the b/2 distinct and maximally diverse solutions from P to compose the Refset. Once the Refset has been built, its solutions are ordered according to quality. In this step, the Subset Generation Method creates sets of solutions in the Refset to be combined. In its simplest form, the Subset Generation Method generates all pairs of reference solutions. The sets of solutions in the Refset are selected one at a time and the Solution Combination Method is applied to generate some trial solutions from each of those sets. These trial solutions are subjected to the Improvement Method. The Reference Set Update Method is applied once again to update the new Refset with the best solutions from the current Refset and the set of trial (possibly improved) solutions.
The SS Main Loop terminates after all the generated subsets have been subjected to the Combination Method and none of the improved trial solutions is admitted to the Refset under the rules of the Reference Set Update Method. However, in advanced SS designs such as the one shown in Figure 1, a Refset rebuilding is applied at this point, keeping the best b/2 solutions in the Refset and selecting the other b/2 from P.
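As a rough illustration of the template above, the following minimal sketch (our own, not the SSKm code) shows how the methods interact on a continuous box-constrained problem. The random diversification, the segment-based combination, and the purely quality-based Refset update are simplifying assumptions, and no Improvement Method is applied:

```python
import random

def scatter_search(f, lb, ub, b=10, psize=40, max_iter=40, seed=0):
    """Minimal scatter-search sketch (illustrative, not the SSKm operators)."""
    rng = random.Random(seed)
    dim = len(lb)

    def rand_sol():
        return [rng.uniform(lb[i], ub[i]) for i in range(dim)]

    def dist(x, y):
        return sum((a - c) ** 2 for a, c in zip(x, y)) ** 0.5

    # Diversification Generation Method: build the initial set P
    P = sorted((rand_sol() for _ in range(psize)), key=f)
    # Reference Set Update Method: b/2 best + b/2 maximally diverse from P
    refset, rest = P[: b // 2], P[b // 2:]
    while len(refset) < b:
        x = max(rest, key=lambda s: min(dist(s, r) for r in refset))
        rest.remove(x)
        refset.append(x)

    for _ in range(max_iter):
        refset.sort(key=f)
        trials = []
        # Subset Generation Method: all pairs of reference solutions
        for i in range(len(refset)):
            for j in range(i + 1, len(refset)):
                x, y = refset[i], refset[j]
                # Solution Combination Method: random point on the
                # (extended) segment joining the pair, clipped to bounds
                t = rng.uniform(-0.5, 1.5)
                trials.append([min(max(a + t * (c - a), lb[k]), ub[k])
                               for k, (a, c) in enumerate(zip(x, y))])
        # Reference Set Update Method (quality-based)
        refset = sorted(refset + trials, key=f)[:b]
    return min(refset, key=f)
```

For instance, `scatter_search(lambda x: sum(v * v for v in x), [-5.0, -5.0], [5.0, 5.0])` drives the reference set toward the sphere function's minimum at the origin.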
Figure 1: Schematic representation of the SS design, where the shaded circles represent solutions that have been subjected to the Improvement Method.
Of the five methods in the SS methodology, only four are strictly required. The Improvement Method is usually needed if high-quality outcomes are desired, but an SS procedure can be implemented without it, as occurs in problems where the Improvement Method cannot provide high-quality solutions due to the problem's nature, or when the computation budget is limited to a small number of function evaluations. An advanced design of the SS methodology has recently been presented in Egea et al. (2007). It implements several strategies to surmount the difficulties arising in optimization problems from the biotechnological industry, showing the flexibility of SS to be adapted to the problems being solved. The algorithm presented in this paper is an extension of that method, incorporating a kriging-based prediction mechanism. All these features are detailed in Section 4.
3. Kriging
The term kriging originates from geostatistics, and the method was named and formalized by a French mathematician (Matheron 1963). Kriging can be defined as a probabilistic interpolation method that creates cheap-to-evaluate surrogate models from scattered observations by minimizing the expected squared prediction error subject to being unbiased and linear in the observations (Jones 2001a). Many examples of kriging implementations illustrating its superiority over other interpolation methods can be found in the literature (see, for example, Cox and John 1997, Jones et al. 1998, Sasena et al. 2002).
Consider a real function $f$ to be interpolated. Assume that $f$ is a sample path of a second-order Gaussian random process denoted by $F$. Kriging computes the best linear unbiased predictor of $F(x)$ using the observations of $F$ on a set of points $S = \{x_1, \ldots, x_n\}$. Denote by $F_S$ the vector of observations $(F(x_1), \ldots, F(x_n))^T$. The kriging predictor is a linear combination of the observations, which may be written as

$$\hat{F}(x) = \lambda(x)^T F_S \qquad (1)$$

with $\lambda(x)$ a vector of coefficients $\lambda_1, \ldots, \lambda_n$. These coefficients are chosen to obtain the smallest variance of prediction error among all unbiased predictors. This leads to a constrained minimization problem, which can be solved by a Lagrangian formulation (Matheron 1969). The vector $\lambda(x)$ can be computed as the solution of the system of linear equations
$$\begin{pmatrix} K & A \\ A^T & 0 \end{pmatrix} \begin{pmatrix} \lambda(x) \\ \mu(x) \end{pmatrix} = \begin{pmatrix} k(x) \\ a(x) \end{pmatrix} \qquad (2)$$

where
$K$ is the covariance matrix of the random vector $F_S$, $A$ is a matrix of known functions $a_1, \ldots, a_q$ (usually polynomials of low degree) evaluated at the points of $S$, $k(x)$ is the covariance vector between $F(x)$ and $F_S$, $a(x)$ is the vector of $a_1, \ldots, a_q$ evaluated at $x$, $\mu(x)$ is a vector of Lagrange multipliers, and $0$ is a matrix of zeros. Knowing the kriging coefficients, the predicted value of $f$ given $f_S = (f(x_1), \ldots, f(x_n))^T$ can be written as
$$\hat{f}(x) = \lambda(x)^T f_S \qquad (3)$$

The selection of a suitable covariance function is crucial for the success and accuracy of the kriging prediction. For this purpose, it is usual to choose a parameterized covariance model and to estimate its parameters based on the observations. The use of a stationary, isotropic covariance model with one parameter to adjust regularity makes it possible to model a large class of functions (Vazquez 2005). Here we use the Matérn covariance, with the following parameterization (Yaglom 1986, Stein 1999)
$$k(h) = \sigma^2 \, \frac{2^{1-\nu}}{\Gamma(\nu)} \left( \frac{2\nu^{1/2} h}{\rho} \right)^{\nu} K_\nu\!\left( \frac{2\nu^{1/2} h}{\rho} \right) \qquad (4)$$

where
$h$ is the Euclidean distance between two points, $K_\nu$ is the modified Bessel function of the second kind, $\nu$ controls the regularity, $\sigma^2$ is the variance and $\rho$ represents the range of the covariance. One of the advantages of kriging is that the variance of the prediction error at $x$ can be computed without any evaluation of $f$. This is one of the strongest points of this method compared to others: kriging provides a statistical framework that gives an idea of the uncertainty associated with each prediction. This also indicates which points are worth evaluating in different applications of the method (for example, in global optimization). Figure 2 shows the kriging prediction of the sine function in the interval [-10, 10]. The solid line is the real function, whereas the dotted line is the kriging prediction based on the observations (dark circles). For a point $x_i$, kriging provides a normal distribution function (dashed line), whose mean is the kriging prediction and whose variance is also supplied by the calculation process. With this distribution we know not only the prediction at every point, given some observations, but also the uncertainty associated with that prediction, and thus the probability of finding a value lower than a given threshold when evaluating the real function.
Figure 2: Kriging prediction for the function $y = \sin(x)$ from a set of sampling points. The Gaussian distribution for point $x_i$ provided by kriging is shown.
In Figure 3, a 2-dimensional function, the six-hump camel-back function, is presented. The real function $f(x_1, x_2) = 4x_1^2 - 2.1x_1^4 + x_1^6/3 + x_1 x_2 - 4x_2^2 + 4x_2^4$ within the interval [-5, 5] is plotted in Figure 3a. Figures 3b, 3c and 3d plot the kriging prediction of the function in the same interval using $n_0$ = 20, 50 and 100 observations (i.e. real function evaluations), respectively, uniformly distributed in the same interval. It can be observed that the larger the number of observations, the higher the accuracy of the prediction.
Figure 3a: Six-hump camel-back function.
Figure 3b: Kriging prediction for $n_0 = 20$.
Figure 3c: Kriging prediction for $n_0 = 50$.
Figure 3d: Kriging prediction for $n_0 = 100$.
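The benchmark function of Figure 3 is cheap to code directly. The following sketch (the function name is ours) evaluates it, for instance near its two known global minima at approximately (0.0898, -0.7126) and (-0.0898, 0.7126), where $f \approx -1.0316$:

```python
def six_hump_camel(x1, x2):
    """Six-hump camel-back function:
    f = 4*x1^2 - 2.1*x1^4 + x1^6/3 + x1*x2 - 4*x2^2 + 4*x2^4."""
    return (4.0 * x1**2 - 2.1 * x1**4 + x1**6 / 3.0
            + x1 * x2 - 4.0 * x2**2 + 4.0 * x2**4)
```

Note the symmetry $f(-x_1, -x_2) = f(x_1, x_2)$, which is why the two global minima come as a mirrored pair.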
