The hybridization of EAs is popular, partly due to its better performance in handling noise, uncertainty, vagueness, and imprecision [4, 5]. For simplicity here, instead of using HM, we prefer to use the general term hybrid algorithms to refer to this notion.
There are in fact two prominent issues of EAs in solving global, highly nonconvex optimization problems:
(i) Premature convergence: The search converges too early, resulting in a lack of accuracy of the final solution. The final solution is then a feasible solution close to the global optimum, often regarded as a satisfactory or close-to-optimal solution.
(ii) Slow convergence: The solution quality does not improve sufficiently quickly, showing stagnation or an almost flat curve on the convergence graph (for either a single run or the average of multiple runs).
These two issues are often partly related to the solution diversity that an algorithm can produce during the search process. In nature, diversity is maintained by the variety (quality) and abundance (quantity) of organisms at a given place and time [6], and this principle is applicable to EAs. At the beginning of a search process, diversity is usually high, and it decreases as the population moves towards the global optimum. High diversity may provide a better guarantee of finding the optimal solution with better accuracy, but this usually leads to slow convergence, and thus there are some tradeoffs between convergence and accuracy. On the other hand, low diversity may lead to fast convergence while sacrificing the guarantee of finding the global optimum, giving poor solution accuracy. This scenario is illustrated in Fig. 1. Hereby, we call the intersection between the convergence-rate and accuracy curves the tradeoff point. Obviously, this is an idealized picture; the real world is much grayer, and things are not so clearly cut.
It is worth pointing out that diversity is one factor related to the more general concepts of exploration and exploitation. High diversity encourages exploration, but low diversity does not necessarily mean exploitation, because exploitation requires the use of landscape information and the information extracted from the population during the search process. Even with sufficient diversity, there is no guarantee of solving the convergence problem, because convergence is a much more complicated issue. Simply balancing exploration and exploitation may make an algorithm work to its best capability, but this does not necessarily mean its convergence rate is high. Good convergence requires clever exploitation at the right time and at the right place, which is still an open problem.
In addition, it is widely known that one prominent factor in premature convergence is the lack of diversity. Therefore, in order to escape and jump out of local optima, proper diversity should be maintained during the search process, even at the expense of slower convergence. To enable this, hybridization has been a widely accepted strategy to promote diversity along the search for the global optimum.
72 T.O. Ting et al.
x.yang@mdx.ac.uk

In the rest of this chapter, we will briefly review the past, present, and future of the hybridization strategies used to promote solution diversity when combining algorithms. Obviously, it is not possible to cover everything in a single chapter, and thus we will focus on some key points concerning the latest developments.
2 Hybrid Algorithms
Hybrid algorithms are very diverse, and they form a long list of algorithms and variants. Here, we only briefly touch the tip of the algorithm iceberg.
2.1 The Past
Evolutionary algorithms (EAs) [7, 8] are stochastic global optimizers that mimic the metaphor of biological evolution. They are almost always population-based algorithms that learn from past searches by using a group of individuals or agents. These algorithms often possess behaviour inspired by social or biological behaviour in the natural world. Loosely speaking, there are three categories of EAs, which are:
(i) Evolutionary Programming (EP) [9]
(ii) Evolutionary Strategies (ES) [10], and
(iii) Genetic Algorithms (GAs) [11]
These algorithms were among the first to offer great advantages in terms of locating the global optimum in vast and complex search spaces, especially when
gradient information is unavailable.

Fig. 1 Compromising accuracy and convergence rate

Hybrid Metaheuristic Algorithms: Past, Present, and Future 73

The implementation of these algorithms is
relatively straightforward, based upon simple and easy-to-understand concepts. They are also reasonably flexible, as parameters can be changed easily for better performance.
There have been many hybrid algorithms and variants based on various evolutionary algorithms. However, key issues such as slow convergence or premature convergence still exist. In addition, these algorithms can be computationally expensive and require a large number of iterations for highly complex problems.
2.2 The Present
The present developments tend to provide improvements based on the extensive developments of the last few decades, and researchers are still actively trying to design new hybrid algorithms. For example, in [1], Rodriguez et al. developed a hybrid metaheuristic by integrating an EA with Simulated Annealing (SA). In their review, they found about 312 publications indexed by the ISI Web of Science that utilized both EA and SA algorithms. In comparison, there were only 123 publications that hybridized EAs with other metaheuristics such as greedy search, iterated local search, gradient descent, and tabu search. However, Rodriguez et al.'s survey was limited to EAs and SA methods.
In the current literature, hybrid algorithms are widely developed. Using Particle Swarm Optimization (PSO) as an example [12], the combination of PSO with other auxiliary search techniques seems very effective in improving its performance [13]. Genetic algorithm hybrids (or the use of genetic operators with other methods) are by far the most widely studied. Genetic operators such as selection, crossover, and mutation have been integrated into PSO to produce better candidates [14]. Differential evolution [15], ant colony optimization [16], and conventional local search techniques have also been combined with PSO [17].
In addition, to avoid revisiting previously detected solutions, techniques such as deflection, stretching, repulsion [18], self-organizing, and dissipative methods [19] have also been used in hybrid PSO algorithms. Some biology-inspired operators such as niching [20] and speciation techniques [21] have been introduced into PSO to prevent the swarm from crowding too closely and to locate as many good solutions as possible. A cellular particle swarm optimization, in which a cellular automata mechanism is integrated into the velocity update to modify the trajectories of particles, was proposed in [22].
PSO is just one example; other hybrid algorithms concerning differential evolution and simulated annealing are also widely studied. The current trend seems to be the hybridization of conventional, well-established algorithms with new algorithms. For example, the new eagle strategy has been hybridized with differential evolution, and improved performance has been achieved [23].

2.3 The Future
Many new algorithms have been developed in recent years. Examples include bio-inspired algorithms such as the Artificial Bee Colony Algorithm (ABC), Bat Algorithm (BA), Cuckoo Search (CS), Firefly Algorithm (FA), Flower Pollination Algorithm (FPA), Glowworm Swarm Algorithm (GlowSA), Hunting Search Algorithm (HSA), Eagle Strategy (ES), Roach Infestation Optimization (RIO), Gravitational Search Algorithm (GravSA), Artificial Fish School Algorithm (AFS), Bacterial Evolutionary Algorithm (BEA), Artificial Plant Optimization Algorithm (APO), Krill Herd Algorithm (KHA), and others [24].
The list is expanding rapidly. These algorithms may possess novel characteristics for hybridization that remain to be discovered in the near future. However, it is worth pointing out that simple, random hybridization should not be encouraged. In a keynote talk at the 15th EU workshop on metaheuristics and engineering (15th EU/ME) in Istanbul, Turkey, in 2014, Xin-She Yang
warned about the danger of random hybridization. Suppose there are n algorithms; if one chooses k of them (2 ≤ k ≤ n) to produce a hybrid, there will be

$$C_n^k = \binom{n}{k} = \frac{n!}{k!\,(n-k)!}$$

possible combinations. For n = 20 and k = 2, there will be 190 hybrids, while for n = 20 and k = 10, there will be 184,756 possible (and almost useless) random hybrids. Considering all other combinations for k = 2, 3, …, 19, there will be about 2^20 = 1,048,576 hybrids in total. This estimate comes from the sum of the binomial coefficients

$$\sum_{k=0}^{n} \binom{n}{k} = 2^n.$$
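The scale of this combinatorial explosion is easy to verify; the following quick sketch (plain Python, standard library only) reproduces the numbers above:

```python
from math import comb

n = 20

# Number of 2-algorithm hybrids from a pool of n = 20 algorithms
pairs = comb(n, 2)

# Number of 10-algorithm hybrids
tens = comb(n, 10)

# Total number of possible subsets of any size:
# the sum of the binomial coefficients equals 2^n
total = sum(comb(n, k) for k in range(n + 1))

print(pairs, tens, total)  # 190 184756 1048576
```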
As a further joke here: given any four algorithms such as PSO, Grass Colony Optimization, Donkey Algorithm, and Blue Sky Algorithm (purely random names), no sensible researcher should produce a PSO-Donkey-BlueSky Algorithm, a Grass-PSO-Donkey Algorithm, or a BlueSky-Grass-Donkey Algorithm! No one should ever do it (except a future robot hitchhiker from another galaxy).
Research efforts should focus on truly novel, insightful, and efficient approaches. Novel techniques should be based on mathematical theory and insightful analysis of the main mechanisms in algorithms, so that new hybrid algorithms can truly provide more effective ways of solving large-scale, real-world problems.
As it is really challenging to produce a good hybrid, we will try to summarize
some observations and developments concerning hybrid algorithms in a very
informal manner in the rest of this chapter.

3 Motivations for Hybridization
In a hybrid algorithm, two or more algorithms collectively and cooperatively solve a predefined problem. In some hybrids, one algorithm may be incorporated as a sub-algorithm to locate the optimal parameters for another algorithm, while in other cases, components of algorithms such as mutation and crossover are used to improve another algorithm in the hybrid structure. Based on this, hybrid algorithms can loosely be divided into two categories:
(i) Unified-purpose hybrids. Under this category, all sub-algorithms are utilized to solve the same problem directly, and different sub-algorithms are used in different search stages. A hybrid metaheuristic algorithm with local search is a typical example: the global search explores the search space, while the local search is utilized to refine the areas that may contain the global optimum.
(ii) Multiple-purpose hybrids. One primary algorithm is utilized to solve the problem, while a sub-algorithm is applied to tune the parameters of the primary algorithm. For example, PSO can be applied to find the optimal value of the mutation rate in a GA. Hereby, PSO is not solving the problem itself, but assisting by searching for the optimal parameter for better performance. Hyper-heuristic algorithms can be regarded as a kind of hybrid method; in hyper-heuristic methods, parameters are selected by a sub-algorithm or via a learning mechanism [25].
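As an illustration of a multiple-purpose hybrid, the sketch below tunes the mutation rate of a toy GA with an outer search layer. All names, parameter ranges, and the sphere test function are our own illustrative choices, and plain random search stands in for PSO in the outer layer, since any global optimizer can play the tuning role:

```python
import random

def inner_ga(mutation_rate, generations=60, pop_size=20, dim=8):
    """A deliberately minimal GA minimizing the sphere function; the only
    tunable parameter exposed here is the mutation rate."""
    fitness = lambda x: sum(v * v for v in x)
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim)         # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(dim):                   # per-gene Gaussian mutation
                if random.random() < mutation_rate:
                    child[i] += random.gauss(0, 0.5)
            children.append(child)
        pop = parents + children
    return min(map(fitness, pop))

# Outer tuning layer: plain random search over the mutation rate.
random.seed(42)
best_rate, best_score = None, float("inf")
for _ in range(10):
    rate = random.uniform(0.01, 0.5)
    score = inner_ga(rate)
    if score < best_score:
        best_rate, best_score = rate, score
print(f"best mutation rate ~ {best_rate:.3f}, objective {best_score:.4f}")
```

Note that the outer layer never touches the optimization problem directly; it only evaluates how well the inner GA performs for a given parameter value.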
4 Taxonomy of Hybrid Algorithms
Generally speaking, hybrid algorithms can be grouped into two categories, which
are described in the following subsections.
4.1 Collaborative Hybrids
This involves the combination of two or more algorithms running either sequentially or in parallel. The contributing weight of each participating algorithm can be regarded as half and half in the simplest case. The possible frameworks of hybrid algorithms under this category are illustrated in Fig. 2. Three structures are depicted in this figure:
(i) Multi-stage. There are two stages involved in this case. The first algorithm acts as the global optimizer, whereas the second algorithm performs local search. This category fits well into the framework of the eagle strategy described by Yang in [26]. The first algorithm is capable of exploring the search space

globally to locate a promising region of convergence. Then the second algorithm performs intensive local search, such as hill-climbing or the downhill simplex method [27]. A challenging issue in such an implementation is knowing when to switch to the second algorithm. Measures such as diversity should be incorporated to assist in the switch criteria. Previous works in [28, 29] utilized a Genetic Algorithm as the global optimizer (first algorithm), with Particle Swarm Optimization (PSO) as the local searcher (second algorithm).
(ii) Sequential. In this structure, both algorithms are run alternately until one of the convergence criteria is met. For simplicity, each algorithm can be run for a similar number of iterations before handing over to the other algorithm.
(iii) Parallel. Two algorithms are run simultaneously, manipulating the same population. For example, each algorithm may be executed on a pre-specified percentage of the population.
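A minimal sketch of the multi-stage structure above, under simplifying assumptions: a sphere test function, crude elitist resampling standing in for the global optimizer, hill-climbing as the local searcher, and a diversity-based switch criterion as suggested in the text. All names, thresholds, and parameter values are illustrative:

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def multi_stage(dim=5, pop_size=30, bound=5.0, diversity_threshold=1.0):
    """Stage 1: crude global exploration (elitist resampling around the best).
    The switch criterion is population diversity falling below a threshold.
    Stage 2: simple hill-climbing refinement of the best point found."""
    random.seed(7)                                 # reproducible sketch
    pop = [[random.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(pop_size)]

    def diversity(points):
        # mean Euclidean distance of the points from their centroid
        c = [sum(p[i] for p in points) / len(points) for i in range(dim)]
        return sum(sum((p[i] - c[i]) ** 2 for i in range(dim)) ** 0.5
                   for p in points) / len(points)

    # Stage 1: keep the better half, resample the rest around the incumbent
    for _ in range(200):
        pop.sort(key=sphere)
        if diversity(pop) < diversity_threshold:
            break                                  # switch criterion met
        best = pop[0]
        pop = pop[:pop_size // 2] + [
            [g + random.gauss(0, 1.0) for g in best]
            for _ in range(pop_size - pop_size // 2)]

    # Stage 2: hill-climbing with a shrinking step size
    x, step = min(pop, key=sphere), 0.5
    for _ in range(500):
        cand = [g + random.gauss(0, step) for g in x]
        if sphere(cand) < sphere(x):
            x = cand
        else:
            step *= 0.97                           # shrink the step on failure
    return sphere(x)

print(multi_stage())   # a small value close to 0 for the sphere function
```

The key design point is the explicit switch test on diversity: the global stage ends as soon as the population has contracted, rather than after a fixed iteration budget.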
4.2 Integrative Hybrids
In this category, one algorithm is regarded as a subordinate, embedded in a master metaheuristic. The contributing weight of the secondary algorithm is approximately 10-20%. This involves incorporating a manipulating operator from a secondary algorithm into a primary algorithm. For example, many algorithms have incorporated the mutation operator from GAs into PSO, resulting in the so-called Genetic PSO or Mutated PSO. Others may incorporate gradient-based techniques such as hill-climbing, steepest descent, and Newton-Raphson into the primary algorithm. Examples include the works published in [15, 19, 30].
Under this category, Fig. 3 illustrates the two possible approaches:
(i) Full manipulation. The entire population is manipulated at every iteration. Such an operation can be integrated inline with the existing source code, usually as a subroutine/subfunction.
(ii) Partial manipulation. Only a portion of the population is accelerated using local search methods such as gradient methods. Choosing the right portion and the right candidates to be accelerated poses a great challenge in assuring the success of this hybrid structure.
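A compact sketch of an integrative hybrid with partial manipulation: a standard global-best PSO in which a GA-style Gaussian mutation is applied, per iteration, to roughly 10-20% of particle updates. The sphere test function and all parameter values are our own illustrative choices:

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def mutated_pso(dim=5, swarm=20, iters=300, mutate_prob=0.15):
    """Global-best PSO with one integrative ingredient: a GA-style Gaussian
    mutation applied to roughly 10-20% of particle updates per iteration
    (the 'partial manipulation' variant)."""
    random.seed(3)                                  # reproducible sketch
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sphere)

    w, c1, c2 = 0.7, 1.5, 1.5                       # common PSO coefficients
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if random.random() < mutate_prob:       # integrative mutation step
                d = random.randrange(dim)
                pos[i][d] += random.gauss(0, 1.0)
            if sphere(pos[i]) < sphere(pbest[i]):   # personal best update
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=sphere)              # global best update
    return sphere(gbest)

print(mutated_pso())   # a small value close to 0 on the sphere function
```

Removing the single `mutate_prob` branch recovers plain PSO, which is exactly what makes this an integrative rather than collaborative hybrid: the secondary operator is embedded inside the master loop.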
Fig. 2 Collaborative framework of hybrid algorithm, depicting multi-stage, sequential, and parallel structures

5 Disadvantages and Challenges of Hybrid Algorithms
Although hybrid algorithms offer the great advantage of increasing the diversity of a population and hence enhancing the search capability of the developed hybrid algorithm, some drawbacks do exist, and these are discussed in the following subsections.
5.1 Naming Convention
The inclusion of another algorithm usually leads to a naming issue. Some researchers adopt very different names for their hybrid algorithms. For instance, the GAAPI algorithm [6] is an acronym for Hybrid Ant Colony-Genetic Algorithm, which is a bit confusing to other researchers. A hybrid name such as HPSO-BFGS [31] is a tedious abbreviation that is harder to read. Of the two architectures mentioned in Sect. 4, the collaborative type of hybrid algorithm seems to create the more sophisticated names. For example, it is interesting to compare the name Hybrid GA-PSO (collaborative) with Mutated PSO (integrative), though both hybrids combine GA with PSO.
5.2 Complexity of Hybrid Algorithm
In terms of algorithm architecture, the hybridization process usually creates extra components in the overall architecture of the hybrid algorithm.

Fig. 3 Integrative structure of a hybrid algorithm, with full and partial manipulations

This increases the

complexity of the hybrid algorithm. Due to their more complicated structure, hybrid algorithms meet some resistance to acceptance by researchers. In the literature, two popular hybrid algorithms are the Hybrid Taguchi-Genetic Algorithm [32] and the Hybrid Genetic Algorithm [33], with 305 and 294 citations respectively (from Scopus [34]), both published in 2004. Judging from the citations, they seem to be well received. It is interesting to note that both algorithms fall into the integrative type of hybrid algorithm, which has a simpler architecture.
5.3 Computational Speed
In many works, hybrid algorithms seem to improve results in terms of overall convergence speed and accuracy. However, the convergence graphs are often plotted with respect to the number of iterations. This means that faster convergence per iteration does not reflect the true convergence rate, because a hybrid usually performs a higher number of (internal or implicit) iterations. For example, for a collaborative (sequential-type) hybrid algorithm such as GA-PSO, one cycle or iteration comprises both GA and PSO; for a fair comparison, this should be counted as two cycles instead of one in the convergence graph. To avoid this issue, the final run time should be used as a metric when comparing a hybrid algorithm with non-hybrid algorithms. Besides, due to the more complicated architecture of hybrid algorithms, overhead arises alongside this complexity and is often unavoidable. This affects the overall performance and thereby reduces robustness. The time consumed by overheads should be taken into account for a fair comparison. Again, this is possible by recording the true number of iterations (or function evaluations) taken to reach a pre-specified target, though time complexity should be compared as well.
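The evaluation-counting idea above can be implemented with a thin wrapper around the objective function, so that every stage of a hybrid draws from the same, explicitly counted budget. A sketch (the class name is our own):

```python
class CountedObjective:
    """Wraps an objective function and counts every evaluation, so that a
    hybrid and a non-hybrid can be compared on the number of true function
    evaluations (or wall time) rather than on 'iterations', which can hide
    different amounts of internal work."""
    def __init__(self, fn):
        self.fn = fn
        self.evals = 0

    def __call__(self, x):
        self.evals += 1
        return self.fn(x)

# Usage: every stage of a hybrid draws from the same explicit budget.
f = CountedObjective(lambda x: sum(v * v for v in x))
f([1.0, 2.0])      # one evaluation
f([0.0, 0.0])      # another
print(f.evals)     # 2
```

Passing the same wrapped objective to both the global and the local stage makes the total cost of the hybrid directly comparable with that of any single algorithm given the same evaluation budget.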
There are other issues concerning hybrid algorithms. For example, most hybrid algorithms increase the number of parameters in the algorithm, thus making parameter tuning harder. In addition, the complicated structure of a hybrid usually makes it harder to analyze, and thus little insight is gained into the reasons why such hybrids work. Furthermore, hybrid algorithms are slightly harder to implement, and thus more prone to errors. Thus, care should be taken when interpreting results from hybrid algorithms.
6 Examples of Hybrid Algorithms
In this section, we focus on some recent hybrid algorithms that have been tested on a wide range of numerical benchmark problems for global optimization. These hybrid algorithms are grouped under either the collaborative (Table 1) or the integrative (Table 2) category. Both categories have been elaborated in Sect. 4.

As discussed in Sect. 5.1, collaborative algorithms (Table 1) tend to have longer names than their integrative counterparts, listed in Table 2. Also, from our analysis above, the number of papers published for the latter category is almost twice that of the first category. This suggests that integrative hybrids are more widely accepted by researchers due to the simplicity of their development process.
In reality, hybrid algorithms have proven successful in solving a wide range of applications. These span a myriad of different areas, such as neural networks [45], cluster analysis [33, 46], telecommunications [47, 48], scheduling [49], protein structure analysis [50], image processing [51, 52], power systems [28, 29, 53, 54], and many others.
7 Recommendations for Future Developments
From the above analysis and observations, we can highlight some insights that should be useful to future developments in this area:
(i) Simpler algorithms are preferred over more complex ones. Einstein once said: "Everything should be made as simple as possible, but not simpler." In the same sense, algorithms should be made as simple as possible. In reality,
Table 1 Recent collaborative hybrid algorithms, published after 2010

Abbreviation   Full name
GAAPI          Hybrid ant colony-genetic algorithm (GAAPI) [6]
HPSO-BFGS      Hybrid PSO-Broyden-Fletcher-Goldfarb-Shanno [31]
PSO-ACO        Particle swarm optimization-ant colony optimization [16]
DE-BBO         Hybrid differential evolution-biogeography-based optimization [35]
HABCDE         Hybrid artificial bee colony-differential evolution [36]
Table 2 Recent integrative hybrid algorithms, published after 2010

Abbreviation   Full name
CPSO           Cellular particle swarm optimization [22]
TGA            Taguchi genetic algorithm [32]
OGA            Orthogonal genetic algorithm [37]
HABC           Hybrid artificial bee colony [38]
HMA            Hybrid memetic algorithm [39]
HCS            Hybrid cuckoo search [40]
TC-ABC         Taguchi chaos-artificial bee colony [41]
CLA-DE         Cellular learning automata-differential evolution [42]
HPSO           Hybrid particle swarm optimization [43]
HDE            Hybrid differential evolution [44]