2.3 Particle Swarm Optimisation
Srinivasan and Seow (2003) [111] created a hybrid algorithm, called PS-EA, combining concepts of PSO and an evolutionary algorithm. The main mechanism of this method is the Self-Updating Mechanism (SUM), which uses a Probability Inheritance Tree (PIT). The SUM is inspired by PSO concepts and works as an emulator of the different positions that a particle could adopt when using the PSO equations to update its current position. The mechanism updates the current position of a particle according to the result that lies at the bottom of the probability inheritance tree. The results lying at the bottom of the branches, which are used to update the particles' positions, were obtained by analyzing the original PSO formula. The branch to choose within the tree depends on the probabilities given by a dynamic inheritance probabilistic adjuster, introduced in the same study. Although the study focuses on comparing a genetic algorithm with PSO schemes in multi-modal single-objective optimisation, it also presents a multi-objective case, which shows the applicability of this heuristic to solving MOPs.

The reciprocal altruism characteristic has further been incorporated into MABSA to strengthen the colony's search for the best solution. This reciprocal altruism behaviour runs widely through a colony of bats, as reported by many researchers in bat ecology [16, 24-25]. By inserting this behaviour into the algorithm, a member of the colony disseminates and shares the location of the best fitness found so far with the other bats. As a result, all bats fly to the best prey ever found when the search process comes to an end. The adoption of this real prey-hunting behaviour of a colony of bats into the algorithm is symbolised by two levels of arithmetic mean.

modelling simulation problems at the initial phase of the modelling effort.
Valluri & Croson (2005) use agent-based modelling to study the performance
of a supplier selection model.
An interesting application of a multi-agent system in supply chains is highlighted in the work of Pan et al. (2009) on reorder decision-making in the apparel supply chain. In their model they make use of an inventory manager agent, who is responsible for controlling inventory and making decisions about reorder strategies and price setting. A client agent collects sales information from the market, forecasts future customer demand and provides feedback to the inventory manager. The authors apply fuzzy knowledge to determine reorder points by taking into consideration market changes and fashion trends. A genetic algorithm is applied to forecast the reorder volume with the aim of minimising the total cost in the supply chain. The model considers fashion trends, seasonal distribution, sales records, and point-of-sale data to adapt to the changing market. An important contribution of their work is showing how information sharing between entities in the supply chain can be used to optimise reorder strategies.

Abido (Abido 2003a, b, c, 2006, 2007, 2009) pioneered research into applying intelligent multi-objective search techniques to the EEDP. He optimised the standard IEEE 30-bus system using the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) (Ah King et al. 2005, 2006), the Strength Pareto Evolutionary Algorithm (SPEA) (Abido 2003a), and Multi-Objective Particle Swarm Optimisation (MOPSO) (Abido 2007, 2009). All of the above metaheuristic techniques successfully address the limitations of classical approaches because they allow concurrent exploration of different points of the Pareto front, and also generate multiple solutions in a single run. Their main drawback is performance degradation as the number of objectives increases, since there are no computationally efficient methods to perform Pareto ranking. Furthermore, additional parameters (such as the 'sharing factor' and the number of Pareto samples) need to be introduced and tailored to suit (Lee and El-Sharkawi 2008).

Abstract This chapter examines the performance characteristics of both asynchronous and synchronous parallel particle swarm optimisation algorithms in heterogeneous, fault-prone environments. The chapter starts with a simple parallelisation paradigm, the Master-Slave model, using Multi-Objective Particle Swarm Optimisation (MOPSO) in a heterogeneous environment. Extending the investigation to general, distributed environments, algorithm convergence is measured as a function of both iterations completed and time elapsed. Asynchronous particle updates are shown to perform comparably to synchronous updates in fault-free environments. When faults are introduced, the synchronous update method is shown to suffer significant performance drops, suggesting that at least partly asynchronous algorithms should be used in real-world environments. Finally, the issue of how to utilise newly available nodes, as well as the loss of existing nodes, is considered, and two methods of generating new particles during algorithm execution are investigated.
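The synchronous/asynchronous distinction the chapter investigates can be sketched with a toy master-slave evaluation loop. Everything here is illustrative: the fitness function, the swarm, and the thread pool stand in for the chapter's MOPSO, objective model, and distributed nodes.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def evaluate(particle):
    # Stand-in fitness function; a real MOPSO would run the objective model here.
    return sum(x * x for x in particle)

def synchronous_iteration(swarm, pool):
    # Synchronous update: wait for *every* evaluation before any particle moves,
    # so one slow or failed node stalls the whole iteration.
    return list(pool.map(evaluate, swarm))

def asynchronous_iteration(swarm, pool):
    # Asynchronous update: handle each evaluation as soon as it finishes,
    # so slow or lost nodes no longer gate the rest of the swarm.
    futures = {pool.submit(evaluate, p): i for i, p in enumerate(swarm)}
    results = [None] * len(swarm)
    for fut in as_completed(futures):
        results[futures[fut]] = fut.result()
    return results

swarm = [[1.0, 2.0], [3.0, 4.0], [0.5, -1.0]]
with ThreadPoolExecutor(max_workers=3) as pool:
    sync_fitness = synchronous_iteration(swarm, pool)
    async_fitness = asynchronous_iteration(swarm, pool)
```

In a fault-free run both loops produce the same fitness values; the difference the chapter measures only appears once workers are heterogeneous or can fail mid-iteration.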

When MDO is applied to the external geometry of an aircraft, several analysis tools are required to accurately model the environment and response. The main tools used are Computational Fluid Dynamics (CFD) to model the airflow about the vehicle, Finite Element Analysis (FEA) to model the structural response, and the optimisation tool itself. As modern computing power has increased, so the fidelity of, and user confidence in, the different software packages have increased to a point where these tools can now be used accurately in conjunction with MDO, as in the work by Mason [3], Argarwal [4] and Thomas [5]. An industrial-fidelity solution still requires too much computational power to be incorporated effectively in an MDO framework. A full three-dimensional Navier-Stokes flow solution about a high-performance wing may take many hours to solve. If the optimiser were to perform many hundreds of such solutions, the total time taken to optimise the wing could extend to weeks. Many methods have been proposed to minimise this computational expense, such as Design of Experiments (DOE) by Giunta [6], or approximation and variable-fidelity models by Coello [7], Deb [8] and Kim [9].

(REP-T) [4]; naïve Bayes classifier (NBC) [5]; and support vector machine (SVM) [6]. We chose the state-of-the-art implementations of these algorithms from the Weka tool [21]. Each of the chosen algorithms differs in nature, and we therefore expect them to perform differently over a diverse range of datasets. Hence, we look at the average performance of MONT over a set of datasets in comparison with the average performance of the chosen algorithms, as well as at the comparisons between three different versions of the neural tree (MONT) training processes and settings. These versions are: single-objective GP-based training, MONT 1 ,

This study investigated the application of three state-of-the-art multi-objective evolutionary algorithms, namely NSGA-II, SPEA2 and IBEA, for solving a multi-objective wind farm layout optimisation problem. The objectives include maximisation of the annual energy production, minimisation of the cable length, and minimisation of the land area. We then tried to solve this computationally difficult problem more effectively by mixing these three multi-objective evolutionary algorithms using nine different selection hyper-heuristics. The empirical results indicated that selection hyper-heuristics could indeed exploit the strengths of multiple multi-objective metaheuristics, and thus achieve statistically significantly better performance than each constituent multi-objective metaheuristic used on its own with respect to different performance indicators, including hypervolume and uniform distribution. This phenomenon was verified by further analysing the convergence and diversification properties of the selection hyper-heuristics and the three underlying metaheuristics based on their resultant Pareto fronts.

force towards the global Pareto-optimal front once most of the solutions of the population share the same non-dominated level, and (2) the proposed symbolic algorithms use a relaxation of the Karush-Kuhn-Tucker conditions, which guarantee Pareto-optimal solutions. These optimality conditions help to identify the relationship between the design variables for solutions on the Pareto-optimal front, and therefore a complete curve/surface of the corresponding Pareto-optimal front can be formed. It is also observed that in most test problems the symbolic algorithms provide a mathematical formula for the Pareto-optimal front. With this mathematical formula, a better distribution of solutions can be obtained than with NSGA-II. As reported in the literature, NSGA-II, like other stochastic optimisation algorithms, still faces difficulties in providing a uniform distribution of solutions on the Pareto-optimal front. The reason is that the crowded-comparison operators used in these algorithms to attain solution diversity rely on external means without addressing the inherent characteristics that lead to diversity problems. The proposed symbolic algorithms address the core of this problem by determining the relationship(s) among the design variables of the Pareto-optimal solutions, and then using these relationships to generate well-distributed solutions over the Pareto-optimal front. These algorithms are able to identify variable interactions, and hence converge to the exact Pareto-optimal front. Furthermore, since these symbolic algorithms use first- and second-order optimality conditions as their optimisation engine, they have all the characteristics needed to deal effectively with inseparable function interactions in multi-objective optimisation problems.
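For reference, the first-order Karush-Kuhn-Tucker conditions for Pareto optimality that such symbolic algorithms relax can be stated in standard form (the symbols here are generic, with objectives $f_i$ and inequality constraints $g_j \le 0$, not notation taken from the cited study):

```latex
\exists\, \lambda_i \ge 0,\; \mu_j \ge 0,\; \sum_{i=1}^{m} \lambda_i = 1 :\qquad
\sum_{i=1}^{m} \lambda_i \nabla f_i(x^*) \;+\; \sum_{j=1}^{J} \mu_j \nabla g_j(x^*) \;=\; 0,
\qquad \mu_j\, g_j(x^*) = 0 \;\; \forall j .
```

Eliminating the multipliers $\lambda_i$, $\mu_j$ from this system is what yields relationships purely among the design variables of Pareto-optimal solutions.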

are always subject to dynamic environments, where the problem's objective function, decision variables, and/or constraints may change over time. For these DOPs, an optimization algorithm is required to track the changing optimum in the solution space as closely as possible, which is difficult to achieve for traditional EAs due to the convergence problem: once a population has converged, it cannot re-locate new optimal solutions quickly when an environmental change occurs. It is also notable that DOPs are often multi-modal, i.e., there exist multiple peaks in the fitness landscape. Obviously, these dynamic multi-modal optimization problems (DMMOPs) present more serious challenges to EAs, since the goal is to track as many moving optima as possible simultaneously, rather than only a single optimum. In order to solve DMMOPs efficiently, researchers have incorporated a number of approaches into EAs, such as niching techniques, speciation methods, and multi-population strategies. These approaches mostly focus on how to distribute the individuals of the population into different search areas of the solution space so that the algorithm can locate multiple optima; they seldom address how to reach these optima with greater speed and accuracy. However, memetic algorithms (MAs) have a distinct advantage: they can obtain very high-quality solutions quickly thanks to their individual-level learning mechanism. It therefore becomes a very interesting research topic to investigate dedicated memetic computing approaches for solving DMMOPs.

In this thesis the problem of interest is the optimisation of a linear function over the non-dominated set of an MOP. In Chapter 4 we present two new algorithms for maximising a linear function over the non-dominated set of an MOLP, a problem we name (P). A primal method is developed based on a revised version of Benson's outer approximation algorithm for computing the non-dominated set of an MOLP. Benson's algorithm enumerates all of the non-dominated vertices of the objective polyhedron (the image of the feasible set in objective space). We first show that an optimal solution to (P) exists at a vertex of the objective polyhedron. Therefore, a naïve two-phase algorithm can be proposed: first enumerate all non-dominated vertices through Benson's algorithm, and then evaluate the objective function at those vertices. The primal method we propose integrates the two phases: the vertex evaluation step is embedded in the vertex enumeration step. When a vertex is generated during the iterations of Benson's algorithm, the objective function is evaluated at that vertex. If the vertex is a non-dominated point of (MOLP), we use a hyperplane serving as a “cut” to remove the part of the search region where no better point exists.
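In symbols (the notation $C$, $X$, $d$, $Y_N$ is generic, assumed for illustration rather than taken from the thesis), the pair of problems reads:

```latex
\text{(MOLP)}: \quad \min \{\, Cx \;:\; x \in X \,\},
\qquad
\text{(P)}: \quad \max \{\, d^{\top} y \;:\; y \in Y_N \,\},
```

where $Y_N$ is the non-dominated subset of the objective polyhedron $Y = \{Cx : x \in X\}$, and the vertex result stated above says that the maximum of $d^{\top}y$ over $Y_N$ is attained at a vertex of $Y$.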

to the decision maker's needs. In the latter approach the multi-objective formulation of the problem is maintained and the aim is to find all or some of the Pareto optimal solutions. The main drawbacks of the a priori methods are the difficulty of finding proper weights to satisfy the decision makers' preferences, and their inability to solve certain classes of problems. In contrast, maintaining the multi-objective formulation of the problem allows the exploration of its behaviour across a range of design parameters and operating conditions. Despite this substantial advantage, a posteriori methods require specific operators and algorithms to handle conflicting objectives, which are generally computationally expensive.
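The a priori weighting approach criticised here is typically the weighted-sum scalarisation (generic notation, with objectives $f_i$ over feasible set $X$):

```latex
\min_{x \in X} \; \sum_{i=1}^{m} w_i f_i(x), \qquad w_i \ge 0, \quad \sum_{i=1}^{m} w_i = 1 .
```

Its best-known limitation, one instance of the "certain classes of problems" mentioned above, is that no choice of weights can reach Pareto-optimal solutions lying on non-convex regions of the front.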

A topic studied in the thesis was the effectiveness of the optimiser and possible improvements to the loop. The engines natively present in Matlab R2015b are general-purpose tools, not designed specifically for engineering problems, though widely used for this purpose. In the single-objective optimisations solved, convergence was more difficult when employing the CFD model. The global search capability of the algorithm was evident, since it led towards the same point in all of the several runs performed, but local refinement of the solution required considerable effort. A test with the Particle Swarm algorithm in combination with the Spalart-Allmaras model revealed faster convergence, and the use of another evolutionary paradigm would be interesting to study. In the multi-objective optimisations, NSGA-II showed some difficulty in identifying the dominant solutions, particularly for the case study of the multi-element high-lift device optimisation. For this purpose too, more advanced tools are available, specifically intended to attain the best compromise between exploration and exploitation in a controlled way, and the application of another engine to the problem solved deserves further examination. The use of meta-modelling as a possible improvement to the framework was also investigated. For approximating the aerodynamic performance computed by XFOIL and the Transition SST model, an artificial neural network proved suitable and led to a solution close to that originally found without any surrogate model. In order to promote an easier refinement capability of the loop, a surrogate-model-assisted optimisation was performed, in which a gradient-based solver exploited the surrogate model for a local refinement of each individual at each GA generation.
This procedure was able to generate a solution close to that originally found with an exact fitness evaluation, but at an overall lower computational effort.

A population (swarm) of individual solutions is maintained in PSO, whose representation is typically a vector of floating-point decision parameters, which are used in a solution's (particle's) evaluation. During the optimisation process of PSO (following initialisation), members of this population are flown (have their parameters adjusted) according to their previous 'flying experience'. This flying experience is both in terms of the particle as an individual, and as a member of a wider group (the entire swarm, or a subset of it). The general PSO model implements this by adjusting an individual's decision parameters to make them 'closer' to the decision parameters of two other solutions: a neighbourhood guide (which may be global or local), and the best evaluated position found previously by that individual. A particle's position also includes some temporal adjustment via a velocity vector, which tracks the movement the particle made in the previous iteration of the optimiser, and uses this to adjust the particle's position in the current iteration. A decade ago (circa 2002), researchers began publishing multi-objective (MO) variants of PSO [2, 11, 13, 18] (although an unpublished paper on the area exists from 1999 [15]), typically referred to as MOPSO algorithms. Since these works there has been a large growth in the number and range of MOPSO algorithms published in the literature, which has largely tracked the growth of, and range of, general multi-objective evolutionary algorithms (MOEAs), with comparison/selection/variation operators popularised in the MOEA field rapidly being converted into aspects of MOPSOs when direct analogies could be drawn (e.g. the use of dominance, the hypervolume indicator, clustering, archive maintenance, mutation/turbulence operators, etc.).
As the number of distinct MOPSOs has grown, a number of papers have provided overviews of the range of approaches that can be taken, along with some empirical comparisons (e.g. [10, 16, 17, 19]). However, there has been relatively little work thus far examining many-objective PSO performance (i.e., on problems with four or more objectives) [21, 5].
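The update rule described in the first paragraph can be sketched as follows. This is the canonical single-objective PSO step, not any specific MOPSO from the cited works, and the coefficient defaults are illustrative:

```python
import random

def pso_step(position, velocity, personal_best, neighbourhood_best,
             w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update: the new velocity blends the particle's
    previous movement (inertia weight w) with attraction towards its own
    best position (cognitive term c1) and towards the neighbourhood guide
    (social term c2); the position then moves by the new velocity."""
    new_velocity = [
        w * v
        + c1 * random.random() * (pb - x)
        + c2 * random.random() * (nb - x)
        for x, v, pb, nb in zip(position, velocity,
                                personal_best, neighbourhood_best)
    ]
    new_position = [x + v for x, v in zip(position, new_velocity)]
    return new_position, new_velocity
```

A MOPSO keeps this same step but replaces the scalar-fitness choice of `personal_best` and `neighbourhood_best` with dominance-based (often archive-backed) selection, which is precisely where the MOEA operators mentioned above enter.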

A novel set-based multi-objective simulated annealer (SAMOSA), which stores a non-dominated set of solutions as its state, was introduced. While simulated annealing requires a current state, which ultimately approximates the desired optimal solution, it is not clear whether this state should represent a single solution (as with other multi-objective simulated annealing techniques, including MOSA) or a set of solutions; as the desired result of the optimisation is a set of solutions, maintaining a set of solutions as the state is appealing (and this approach is used in many other optimisation techniques, such as genetic algorithms and many evolution strategies). Comparisons between SAMOSA and MOSA showed two traits: the set-based state slowed convergence to the true Pareto front, as measured by the distance of solutions from the true front; but, for problems with highly non-linear mappings from the region representing the true front in decision-variable space to the front in objective space, such as DTLZ4, methods such as Uniselect can be used with set-based annealers, resulting in a considerably more uniform distribution of solutions across the true front. The slower convergence can be explained as being inherent to set-based methods of this nature, although profoundly different set-based approaches may not have this characteristic.
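The set-based state amounts to maintaining a non-dominated archive. A minimal sketch, assuming minimisation and leaving out the annealing acceptance schedule (Uniselect and SAMOSA's temperature handling are not reproduced here):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Set-based state update: reject a candidate dominated by the archive;
    otherwise insert it and evict any archive members it dominates."""
    if any(dominates(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dominates(candidate, a)] + [candidate]
```

Because dominated members are evicted on every insertion, the state is always a mutually non-dominated set, which is what lets the annealer return the whole trade-off front rather than a single solution.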

This paper deals with the first of the failure modes, namely parametric rolling (PR). As a ship vulnerable to PR has to be redesigned, there will be constraints and objectives, such as minimizing the impact on resistance and, if possible, even reducing it. Genetic algorithms have been widely used in multi-objective optimisation due to their population-based nature, which allows the generation of several elements of the Pareto optimal set in a single run (Coello 2002). Here, a multi-objective evolutionary algorithm is proposed as a solution to the multi-objective optimisation problem in the integrated parametric rolling and resistance (IPRR) design process. The objectives of the IPRR process are optimised by the multi-objective genetic algorithm NSGA-II (Non-dominated Sorting Genetic Algorithm II).

As the goal in multi-objective optimisation based on a metaheuristic is to obtain a set of trade-off solutions at the end of the search process for the decision makers, population-based search techniques (which use multiple solutions during the search), in particular multi-objective evolutionary algorithms (MOEAs), are naturally preferred. A variety of MOEAs with differing algorithmic components, such as diversity maintenance and replacement, have been previously proposed; more can be found in Zitzler & Thiele (1999); Zitzler et al. (2000); Konak et al. (2006). This work considers three MOEAs: the Non-dominated Sorting Genetic Algorithm II (NSGA-II) (Deb et al., 2002), the Strength Pareto Evolutionary Algorithm 2 (SPEA2) (Zitzler et al., 2002), and the Indicator-Based Evolutionary Algorithm (IBEA) (Zitzler & Künzli, 2004).

In 2003 Farmani et al. [86] compared four MOEAs for WDSDO and concluded that NSGA-II [61] was the best. In 2002 Wu and Simpson [263] investigated a self-adaptive penalty function to drive the optimisation search towards the region at the boundary of feasibility, where optimal solutions are typically located, thereby boosting performance. This self-adaptive technique was later incorporated into a multi-objective version of the fmGA by Wu and Walski [266] in 2004, and applied to the optimisation of the Hanoi network. In 2004 Nicolini [182] compared three MOEAs (ENGA, NSGA-II, and the controlled elitist NSGA-II [60]) on the design of the two-loop network introduced by Alperovits and Shamir [9], finding that the latter two outperformed the ENGA. Prasad and Park [194] employed the NSGA in 2004 for MOO using objectives of cost and a novel surrogate measure of reliability called Network Resilience, designed to reward reliable loops in the network explicitly. They found that this produced more robust designs than previous methods. In 2005 Farmani et al. [88] compared the NSGA-II to the Strength Pareto Evolutionary Algorithm 2 (SPEA-II) with respect to three WDS benchmarks, and found that the latter produced improved solution quality (at the cost of increased running time). They applied these algorithms to the large Exeter WDS benchmark [84] with three objectives, and concluded that while both algorithms were somewhat successful, further research was needed into locating better Pareto-optimal sets, particularly in high-dimensional objective spaces. In another study [87], they applied the NSGA-II to the multi-objective design of the Anytown WDS [246], which includes the design and placement of tanks, using the Resilience Index as an objective. In 2005 Kapelan et al. [141] implemented an adapted robust version of the NSGA-II algorithm (RNSGA-II), which uses reduced-sampling fitness evaluation (requiring fewer hydraulic simulations) to solve the stochastic WDS design problem with the objectives of minimizing cost and maximizing probabilistic hydraulic reliability. They employed this approach to solve the famous NYTUN problem [212] in a multi-objective fashion.

be advantageous when linked to existing optimisation methods (Forrester & Keane, 2009). In our case, the surrogate model is built from design points (input-output data points) calculated from the VPSA simulator (Friedrich et al., 2013). Kriging-based algorithms have been widely used for single-objective optimisation, but there are other issues involved in the multi-objective setting (Sóbester et al., 2014; Voutchkov & Keane, 2010). We propose a novel transformation of the purity and recovery outputs in the surrogate model training

The two EAs were each executed 30 times on each test problem, and the resultant non-dominated solutions were saved at the end of each run. In the case of E-SPEA these were simply the individuals residing in F at the end of the run, whereas an off-line store of the non-dominated solutions discovered by SPEA was kept. For both algorithms, |Bt| = 80. In SPEA, Ft was limited to 20 individuals (the same number as used
