Within the last two decades, optimization algorithms based on mathematical programming have proved effective in solving large, complex optimization problems. Recently, swarm intelligence techniques have gained popularity because of their capacity to locate near-optimal solutions for combinatorial optimization problems [1, 2]. These techniques have been applied in various areas, such as economics, engineering, bioinformatics, and industry. Such problems are well suited to swarm intelligence techniques because they are usually very hard to solve exactly, as no efficient exact algorithm is known for them [1, 2]. Swarm intelligence algorithms mainly rely on updating a population of individuals by applying operators according to fitness information obtained from the environment. Through these updates, the individuals in the population are expected to move towards an optimal solution.

In this paper, two types of discrete particle swarm optimization (DPSO) algorithms are presented to solve the Permutation Flow Shop Scheduling Problem (PFSP). The criteria used are minimization of total earliness and total tardiness. The main contribution of this study is a new position update method developed for the discrete domain, since PFSP solutions are represented as discrete job permutations. In addition, a simple case study verifies that both proposed algorithms can solve the problem well within a short computational time. The Hybrid Discrete Particle Swarm Optimization (HDPSO) performs better than the Modified Particle Swarm Optimization (MPSO): HDPSO produced the optimal solution, although with a slightly longer computation time. Furthermore, the population size and the maximum number of iterations affect the quality of the solutions produced by both the HDPSO and MPSO algorithms.

In this paper, two population-based meta-heuristic approaches are developed. Both approaches are introduced, followed by the important factors that must be selected properly for high-quality solutions and faster convergence. These parameters matter because they can increase or decrease the computational time, with or without affecting the quality of the solution. The paper is a comparative study of the two algorithms with these important parameters taken into consideration. It is clear from the above discussion that PSO has limited applicability because of trapping in local minima, which can be avoided by using other techniques in combination with PSO. Firefly does not face any problem regarding local minima because of its sufficient randomness. Both PSO and FA are applied to continuous variables: Firefly is well developed for continuous variables and still in progress for discrete variables.

The general PSO algorithm does not offer many ways, besides the fitness function, to adjust its clustering characteristics. However, one way to add diversity without adding more particles is to alter the constants c1 and c2 as the PSO runs. In the early stages of PSO, the c1 constant could be set larger than c2; as the PSO progresses, c2 could become increasingly larger. This process would in effect mimic the properties of the simulated annealing algorithm, which replicates the heating and controlled cooling used in metallurgy to reduce defects in metal. The cognitive component of the particle would therefore have more freedom to explore around its personal best solution, which should help prevent premature convergence. Adding this feature to PSO Method 1 could help it perform better with the fuzzy clustering algorithms that did not usually segment the images well themselves.
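A linear time-varying schedule for c1 and c2 along these lines can be sketched as follows; the function name and the boundary values 2.5 and 0.5 are illustrative assumptions, not values taken from the text:

```python
def acceleration_coefficients(t, t_max, c_start=2.5, c_end=0.5):
    """Linearly shift emphasis between the cognitive (c1) and social (c2)
    terms over the run: c1 starts large and shrinks, c2 grows, so the
    swarm explores personal bests early and converges socially late."""
    frac = t / t_max
    c1 = c_start - (c_start - c_end) * frac
    c2 = c_end + (c_start - c_end) * frac
    return c1, c2
```

Note that c1 + c2 stays constant under this schedule, so only the balance between the two terms changes, not the overall pull on the particle.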

In the present study an attempt is made to review one main algorithm, the well-known meta-heuristic Particle Swarm Optimization (PSO). PSO, in its present form, has been in existence for roughly a decade, a relatively short time compared with some other natural computing paradigms such as artificial neural networks and evolutionary computation. However, in that short period, PSO has gained widespread appeal amongst researchers and has been shown to offer good performance in a variety of application domains, with potential for hybridization and specialization, and has demonstrated some interesting emergent behavior. In PSO, all particles are first initialized. Two variables, pbest_i and gbest, are maintained: pbest_i is the best fitness value achieved by particle i so far, and gbest is the best fitness value achieved by any particle in the swarm.
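A minimal initialization sketch consistent with this description is given below; the function name, the search bounds, and the choice of zero initial velocities are assumptions for illustration:

```python
import random

def init_swarm(n_particles, dim, lo=-5.0, hi=5.0):
    """Randomly place each particle within [lo, hi] per dimension;
    velocities start at zero, and each particle's personal best (pbest)
    starts as a copy of its initial position."""
    positions = [[random.uniform(lo, hi) for _ in range(dim)]
                 for _ in range(n_particles)]
    velocities = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in positions]  # independent copy per particle
    return positions, velocities, pbest
```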

Over the past decade, the PSO algorithm has gained widespread popularity in the research community, mainly because of its simple implementation, reported success on benchmark test problems, and acceptable performance on various application problems. The PSO algorithm has been successfully used in multiobjective optimization [2] and constrained optimization [3], among others. Unfortunately, when solving complex multimodal tasks, the conventional PSO algorithm can easily fly into local optima and lacks the ability to jump out of them. These shortcomings have imposed restrictions on the wider application of PSO to real-world optimization problems. Therefore, achieving good convergence speed while avoiding local optima are two important and appealing goals in the study of PSO. A number of PSO variants have been developed to achieve these two goals; see, for example, [3-24].

The four states are the Exploration, Exploitation, Convergence, and Jumping-out states. Some limitations of APSO have been highlighted, and modifications have been made to develop an enhanced version, named Switching PSO (SPSO). Here the first state is considered Convergence, the second Exploration, the third Exploitation, and the fourth Jumping-out. A new mode-dependent switching parameter is introduced, and a suitable value for the acceleration coefficients is assigned according to the current state of the particle: a smaller value in the convergence state and a larger value in the jumping-out state. The inertia weight ω also depends on the evolutionary factor and the current state of the particle, and an adaptive value of ω is assigned in each iteration accordingly. SPSO has been tested on several mathematical benchmark functions and has also been applied to genetic regulatory networks (GRN) for parameter identification. Recently, in [78], SPSO was combined with a Wavelet Neural Network (WNN) and applied to parameter optimization in face direction recognition. In [136], SPSO was modified into a hybrid by introducing differential evolution (DE), and the new algorithm was applied to lateral flow immunoassay analysis. In [9], SPSO was used with a hybrid EKF for parameter estimation in lateral flow immunoassay. An extensive search for applications of SPSO turned up only a limited number, although the idea of Markovian jumping and switching mechanisms has been widely used in other techniques. After an extensive search and study of the literature, we have noticed that SPSO has never been applied to power systems.

Shi and Eberhart (1998) proposed a parameter called the inertia weight ω to balance the exploration and exploitation capabilities of the PSO swarm. Various strategies have since been developed to tune this parameter. In their earlier work, Shi and Eberhart (1998) suggested that a fixed value of ω between 0.8 and 1.2 achieves good convergence behavior of the swarm. Later, a time-varying scheme in which ω decreases linearly with the iteration number was introduced by Shi and Eberhart (1999). Accordingly, ω is initially set to a larger value (i.e., ω = 0.9) to allow the particles to explore the search space in the early stage of optimization. Once the optimal region is located, ω is gradually decreased to 0.4 to refine the search in that region in the later stage of optimization. Chatterjee and Siarry (2006) and Cai et al. (2008), on the other hand, proposed varying ω in a nonlinear manner. Compared with the linear approach, the nonlinear variation of ω enables the particle swarm to explore the search space more aggressively during the early stage of optimization, in order to locate the optimal region faster. Clerc and Kennedy (2002) performed thorough theoretical studies of the PSO convergence properties and subsequently proposed a related parameter known as the constriction factor χ, which prevents swarm explosion by damping the particle's trajectory. Experimental study revealed that the parameters χ and ω are algebraically equivalent when a certain condition holds.
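The linearly decreasing scheme described above can be written as a one-line schedule; the function name and signature are assumptions, but the 0.9 to 0.4 range follows the text:

```python
def inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight (Shi & Eberhart, 1999):
    w_start at iteration 0, falling to w_end at the final iteration,
    trading early exploration for late refinement."""
    return w_start - (w_start - w_end) * t / t_max
```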

Hybrid algorithms are quite successful since they combine the strengths of both constituent algorithms. Since PSO is quite a fast algorithm, nearly every newly developed algorithm has been combined with PSO. Some recent swarm intelligence algorithms are the crow search algorithm (CSA) [27], ant lion optimizer (ALO) [28], the whale optimization algorithm (WOA) [29], grey wolf optimizer (GWO) [30], monarch butterfly optimization (MBO) [31], moth flame optimization [32], selfish herd optimization (SHO) [33], and salp swarm optimization (SSO) [34]. Since these algorithms are quite new, according to the Web of Science records there is not yet any hybrid study combining PSO with them.

The study has shown that transmitter antenna placement can be optimized by PSO and APSO to improve wireless communication system performance in a real environment. In this paper, the cost function (CF) is defined as the inverse of the channel capacity of a WLAN system; PSO and APSO minimize the CF by adjusting the transmitter antenna location. The channel propagation environment is simulated for the propagation site, and for a given transmitter the PSO and APSO algorithms are used to determine the best transmitter location for optimal channel capacity. This paper proposes the use of a 3D ray-tracing model in indoor wireless systems, combined with PSO and APSO, for optimizing the transmitter antenna location. It is shown that the combination of the ray-tracing method and the algorithm can lead to optimized channel capacity in indoor environments. Using the frequency responses of these 3 × 3 MIMO channels, the channel capacity of the WLAN communication system is calculated via the Shannon–Hartley theorem. Channel capacity is the average performance criterion for digital transmission systems, and PSO and APSO are used to maximize it. The frequency dependence of the materials in the indoor channel structure is accounted for in the channel simulation; that is, the dielectric constant and loss tangent of obstacles are not assumed to be frequency independent. Numerical results show that APSO outperforms PSO in convergence speed, and that the channel capacity can be increased by about 24.7% in indoor WLAN communication systems. The channel capacity achieved by APSO is also better than that achieved by PSO.

Abstract. Adaptation to dynamic optimization problems is currently receiving growing interest as one of the most important applications of evolutionary algorithms. In this paper, a compound particle swarm optimization (CPSO) is proposed as a new variant of particle swarm optimization to enhance its performance in dynamic environments. Within CPSO, compound particles are constructed as a novel type of particle in the search space and their motions are integrated into the swarm. A special reflection scheme is introduced in order to explore the search space more comprehensively. Furthermore, some information-preserving and anti-convergence strategies are also developed to improve the performance of CPSO in a new environment. An experimental study shows the efficiency of CPSO in dynamic environments.

A particularly interesting group consists of algorithms that implement co-evolution or co-operation in natural environments, giving much more powerful implementations. The main aim is to obtain an algorithm whose operation is not influenced by the environment. An unusual look at optimization algorithms made it possible to develop a new algorithm and to define its metaphors for two groups of algorithms. These studies treat the particle swarm optimization algorithm as a model of predator and prey. New properties of the algorithm are shown to result from the co-operation mechanism, which determines the operation of the algorithm and significantly reduces environmental influence. Definitions of behaviour-scenario functions give the algorithm a new feature that allows the optimization process to control itself. This approach can be successfully used in computer games. The properties of the new algorithm make it worthy of interest, practical application, and further research on its development. This study can also inspire the search for other solutions that implement co-operation or co-evolution.

In PSO, a solution is encoded as a finite-length string called a particle. All of the particles have fitness values, evaluated by the fitness function to be optimized, and velocities that direct their flight [Parsopoulos et al. 2001]. PSO is initialized with a population of random particles with random positions and velocities inside the problem space, and then searches for optima by updating generations. It combines local and global search, resulting in high search efficiency. In every iteration, each particle moves towards its best previous position and towards the best particle in the whole swarm. The former is the local best, whose value is called pBest, and the latter is the global best, whose value is called gBest in the literature. After finding the two best values, the particle updates its velocity and position with the standard update equations of continuous PSO:
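This update step can be sketched as follows; the function name and the default values of w, c1, and c2 are illustrative assumptions, not taken from the text:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One synchronous iteration of the continuous PSO update: each
    velocity component is pulled towards the particle's own pBest
    position and the swarm's gBest position (scaled by random factors
    r1, r2), then each position moves by the new velocity."""
    for i, x in enumerate(positions):
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - x[d])
                                + c2 * r2 * (gbest[d] - x[d]))
            x[d] += velocities[i][d]
    return positions, velocities
```

In a full optimizer this step would be followed by re-evaluating the fitness of each particle and updating pBest and gBest whenever an improvement is found.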

Abstract—Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in genetic algorithms (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition, while research has shown that construction of superior exemplars in PSO is more effective. Hence, this paper first develops a new framework to organically hybridize PSO with another optimization technique for "learning." This leads to a generalized "learning PSO" paradigm, the *L-PSO. The paradigm is composed of two cascading layers, the first for exemplar generation and the second for particle updates as in a normal PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm is proposed in the paper, termed genetic learning PSO (GL-PSO). In particular, genetic operators are used to generate exemplars from which particles learn, and, in turn, the historical search information of the particles provides guidance for the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of the particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, the global search ability and search efficiency of PSO are both enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of GL-PSO.

The original PSO algorithm, presented by Eberhart and Kennedy [1], is a powerful algorithm for finding optimal solutions. The PSO algorithm imitates the behavior of an animal herd approaching an optimal solution. The basic unit of the swarm is the particle, which represents a candidate solution for a particular application. Each particle moves around a specified area to search for the optimal solution by adjusting its velocity and position. The velocity and position of each particle are influenced by two elements: the personal best solution and the global best solution, both of which are updated according to the fitness value defined by the specific application. After several hundred or more iterations, the particles approach the optimal solution, which is denoted by the global best solution. The update rules of PSO are shown in Eq. (1) and Eq. (2):
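The two equations referenced here are missing from the excerpt; the standard PSO velocity and position updates they refer to are, in the usual notation:

```latex
v_i^{t+1} = w\,v_i^{t} + c_1 r_1 \left(pbest_i - x_i^{t}\right) + c_2 r_2 \left(gbest - x_i^{t}\right) \tag{1}
x_i^{t+1} = x_i^{t} + v_i^{t+1} \tag{2}
```

where w is the inertia weight, c1 and c2 are the acceleration coefficients, and r1, r2 are uniform random numbers in [0, 1] drawn at each update.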

Abstract—A modified particle swarm optimization algorithm is proposed in this paper. In the presented algorithm, every particle chooses its inertia factor according to how closely its fitness approaches that of the optimal particle. Simultaneously, a random number is introduced into the algorithm to help it jump out of local optima, and a minimum factor is used to avoid premature convergence. Five well-known functions are chosen to test the performance of the suggested algorithm and the influence of the parameter on performance. Simulation results show that the modified algorithm has good global convergence properties and can effectively alleviate the problem of premature convergence. The experimental results also show that the suggested algorithm is greatly superior to PSO and APSO in terms of robustness.

In this paper, a method to optimize the structure of a neural network, named Adaptive Particle Swarm Optimization (APSO), is proposed. The method uses a nested PSO: each particle in the outer PSO corresponds to a different network construction, and the particles update themselves in each iteration by following the global best and personal best performances, while the inner PSO is used to train the networks and evaluate their performance. The effectiveness of this method is tested on many benchmark datasets to find their optimum structures, the results are compared with other population-based methods, and finally the method is applied to neural-network classification in data mining.

continuous non-linear functions. The Particle Swarm Optimization algorithm is similar to many population-based algorithms such as the Genetic Algorithm, but it does not have any direct recombination of individuals of the population. It has become popular due to its simplicity and effectiveness in a wide range of applications, along with its low computational cost. Like other evolutionary algorithms, Particle Swarm Optimization (PSO) is appropriate for problems with immense search spaces that present many local minima. [3]

Various studies have been carried out to improve the efficiency of the K-Means algorithm with Particle Swarm Optimization. Particle Swarm Optimization provides a near-optimal initial seed, and using this best seed the K-Means algorithm builds better clusters and produces much more accurate results than the traditional K-Means algorithm. A. M. Fahim et al. [5] proposed an enhanced method for assigning data points to suitable clusters. In the original K-Means algorithm, the distance between each data element and every centroid is calculated in each iteration, and the required computational time depends on the number of data elements, the number of clusters, and the number of iterations, making the algorithm computationally expensive.
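The cost described above comes from the standard assignment step, sketched below with squared Euclidean distance; the function name is an illustrative assumption, and this shows the naive O(n · k) baseline rather than Fahim et al.'s enhanced method:

```python
def assign_clusters(points, centroids):
    """Naive K-Means assignment step: compare every point with every
    centroid using squared Euclidean distance, i.e. n * k distance
    computations per iteration -- the cost that enhanced assignment
    methods aim to reduce."""
    labels = []
    for p in points:
        dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        labels.append(dists.index(min(dists)))
    return labels
```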

Abstract—This paper was inspired by the source-seeking problem when the signal source is very noisy and non-smooth. We focus our work particularly on electromagnetic sources, but the strategies proposed in this paper can be applied to other types of sources. Most existing strategies approach this problem with gradient-based methods using one or more mobile agents. These methods either assume the signal profiles to be smooth or use complicated and computationally costly procedures to obtain accurate estimates of gradient information. This paper suggests approaching the problem with a heuristic method that is simple to implement on robots with limited computation capability. We propose some modifications to a population-based optimization technique, Particle Swarm Optimization (PSO), so as to adapt it to a real-world scenario in which a group of mobile agents tries to find an unknown electromagnetic source. The mobile agents know their own positions and can measure the signal strength at their current positions; they can share information and plan their next step based on individual and group memory. We then propose a complete solution to ensure the effectiveness of PSO in complex environments where collisions may occur, incorporating static and dynamic obstacle avoidance strategies into PSO to make it fully applicable to real-world scenarios. Finally, we validate the proposed method in experiments. In our future work, we will improve the efficiency of the method.
