5. Comparing these observations with the results of tests on the Griewank 20 function, a possible pattern emerges which explains the observed exception. As with the Griewank 10 function, the results on the Griewank 20 function are not distinctly separated by particle field population size. However, the observed relation between the average final values of the groups is, in fact, the opposite of that observed on the Griewank 10 function: the PFO configurations with a smaller population size performed, on average, slightly better than those with a larger population size. Given that the Griewank function becomes "easier" as the number of dimensions increases, and recalling the results of tests on the Sphere function, a possible explanation for the results can be made. As the number of dimensions increases and the Griewank function becomes "easier", "exploitative" behavior becomes more valuable than "exploratory" behavior. Since larger populations behave much more "exploratively" and smaller populations much more "exploitatively", the performance gap between the two on the Griewank function would, intuitively, shrink as the dimension increases, until the "exploitative" behavior of small populations begins to yield better results than the "exploratory" behavior of large populations. This is in line with the observed results on the Griewank function. Results on the Griewank 2 function show "exploratory" large populations performing much better than "exploitative" small populations; results on the Griewank 10 function show the performance difference approaching zero; and, finally, results on the Griewank 20 function show the "exploitative" small populations out-performing the "exploratory" large populations by a slight margin.
6. The weighting scheme was expected to have a large impact on the PFO algorithm's behavior. The test
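For reference, the Griewank function under discussion can be written down directly. The sketch below (the helper name `griewank` is ours, not from the text) also hints at why the function is often considered "easier" in higher dimensions: the cosine product responsible for the many local optima tends to flatten as the dimension grows, while the quadratic sum dominates.

```python
import math

def griewank(x):
    """Griewank function: global minimum f(x*) = 0 at x* = (0, ..., 0).

    The quadratic sum term grows with dimension while the cosine
    product term (the source of the many local optima) tends to
    flatten, which is one common explanation for why the function
    is regarded as "easier" in higher dimensions.
    """
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return 1.0 + s - p
```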
The first practical application of PSO was in the field of neural network training and was reported, together with the algorithm itself, by Kennedy and Eberhart in 1995. Many more applications have been explored since, including telecommunications, power systems, signal processing, and data mining. To date, there are many publications reporting various applications of the particle swarm algorithm across many more fields. Although PSO has been used mainly to solve unconstrained single-objective problems, it has also been applied to multi-objective optimization problems, to problems with dynamically changing landscapes, and to finding multiple solutions.
In this chapter we introduce a new variant of the Particle Swarm Optimization algorithm with N-State Markov Jumping (NS-MJPSO). The aim of this method is to improve global search performance and the capability of solving real-world problems. Real-world problems are typically non-linear, which makes them challenging to solve, and it has been revealed in the literature that evolutionary methods perform better on non-linear problems. The wide use, simple structure, and easy implementation of Particle Swarm Optimization (PSO) are the motivation for this work. PSO is a competent, population-based, swarm-intelligence technique for optimization. The traditional PSO, along with its modified variants, has been explored and re-evaluated, and the problem of getting stuck in a local optimum is observed in PSO evaluation. In NS-MJPSO, we therefore combine the PSO algorithm with a Markov chain. The Markov chain is used to keep the particles moving in the search space with certain probabilities. A second reason for using a Markov chain is its good performance in economics-based, decision, and control system applications. The proposed method is examined on widely used uni-modal and multi-modal mathematical benchmark functions, and the evaluation results are compared with the most cited state-of-the-art PSO variants on the same functions. In the remainder of this chapter, we present background work in Section 3.2 and the structure of traditional PSO in Section 3.2.1. In Section 3.3, the proposed work is briefly described. The experimental work is given in Section 3.4, and in Section 3.5 we summarise the whole chapter.
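The chapter only states that a Markov chain keeps particles moving with certain probabilities. As a minimal sketch of that mechanism (the 3-state transition matrix `P` and the helper `next_state` are illustrative assumptions, not the actual NS-MJPSO design), the next state can be sampled from a row-stochastic transition matrix:

```python
import random

# Hypothetical 3-state transition matrix (each row sums to 1); the
# actual N-state matrix used in NS-MJPSO is not specified here.
P = [
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
]

def next_state(current, transition, rng=random):
    """Sample the next Markov state given the current one by
    inverting the cumulative distribution of the current row."""
    u = rng.random()
    acc = 0.0
    for state, prob in enumerate(transition[current]):
        acc += prob
        if u < acc:
            return state
    return len(transition[current]) - 1  # guard against rounding
```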
The application of this method was to power system security assessment, optimization and design of the transmission system, reliability analysis, planning of the distribution system, and load management. Second, non-linear programming (non-LP) is introduced for nonlinear objective functions and constraints, but researchers have noted that it is a difficult field and that valuable results are achieved only when all constraints are linear, so it is referred to as linearly constrained optimization. This technique has been used extensively in the fields of power system voltage security, dynamic security, reactive power control, planning and operation of the power system, optimal power flow, capacitor placement, and unit commitment. Third, stochastic programming is another technique; it provides the probability functions of various variables in order to solve problems that involve uncertainty. It is also known by the alternative name of dynamic programming. Although this method is widely used for optimization problems, its numerical solution requires more computation, which increases the probability of suboptimal results because of the dimensionality problem.
PSO is a population-based optimization algorithm put forward originally by Kennedy and Eberhart. It developed from swarm intelligence and is inspired by the social behavior of bird flocking and fish schooling. PSO is an optimization algorithm with implicit parallelism which can easily handle non-differentiable objective functions. It is based on the natural process in which a swarm of particles shares individual knowledge, much as a bird flock or fish school collectively optimizes a certain objective. The PSO algorithm uses a number of particle vectors moving around the solution space searching for the optimal solution. Every particle in the algorithm acts as a point in the N-dimensional space. Each particle keeps track of the solutions it has visited at each iteration, and the best solution obtained so far by that particle is called its personal best (pbest). This solution is obtained according to the personal experience of each particle vector. Another best value tracked by PSO is the best value found so far in the neighborhood of that particle; among all pbests, this value is called gbest.
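The pbest/gbest bookkeeping described above can be sketched in a minimal global-best PSO loop. The function below is an illustrative sketch assuming the standard inertia-weight velocity update (names, default parameter values, and search bounds are our choices, not from the text):

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=0):
    """Minimal global-best PSO sketch for minimizing f over [lo, hi]^dim."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                    # personal best positions
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # best among all pbests
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            fx = f(x[i])
            if fx < pbest_val[i]:                  # update pbest, then gbest
                pbest[i], pbest_val[i] = x[i][:], fx
                if fx < gbest_val:
                    gbest, gbest_val = x[i][:], fx
    return gbest, gbest_val
```

On a smooth test function such as the 2-D sphere, this sketch typically converges to near zero within the default budget.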
Real-parameter multimodal optimization problems can be solved using niching particle swarm optimizers (PSOs) from the evolutionary computation community. Because of their poor local search ability and their requirement of prior knowledge to specify certain niching parameters, the majority of PSO-based niching algorithms are difficult to use. This paper proposes a distance-based locally informed particle swarm (LIPS) optimizer, which addresses this problem by eliminating the need to specify any niching parameter and by enhancing the fine search ability of PSO. Each particle is guided by several local bests instead of the global best particle. By using the information provided by its neighborhood, LIPS can operate as a stable niching algorithm. The neighborhoods are estimated in terms of Euclidean distance. The algorithm is compared with a number of state-of-the-art evolutionary multimodal optimizers, and shows statistically superior and more consistent performance over the existing niching algorithms on the test functions, without incurring any severe computational burden.
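A minimal sketch of the Euclidean-distance neighborhood selection that the description above relies on (the helper `nearest_pbests` and its signature are our assumptions, not the paper's code): each particle's informants are the k personal bests closest to its own.

```python
import math

def nearest_pbests(pbests, i, k):
    """Indices of the k personal bests closest (by Euclidean distance)
    to pbests[i], excluding i itself -- the 'locally informed' idea."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    others = [j for j in range(len(pbests)) if j != i]
    return sorted(others, key=lambda j: dist(pbests[i], pbests[j]))[:k]
```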
This piece of research presents Particle Swarm Optimization (PSO), a biologically inspired computational paradigm, as a search technique for problem optimization. Specifically, PSO consists of a swarm of particles, where each particle represents a potential solution. More precisely, it is a population-based stochastic algorithm modeled on the social behaviors observed in flocking birds. Over the past quarter century, global optimization techniques like Genetic Algorithms and PSO have attracted the attention of many researchers in engineering and industry. In December 2016, a new field titled "Artificial Human Optimization" was introduced in the literature; the agents in Artificial Human Optimization are Artificial Humans. Recently, a new algorithm titled Multiple Strategy Human Optimization (MSHO) was designed based on Artificial Humans. This paper adopts a novel experimental idea which incorporates concepts of Artificial Human Optimization into some experimental illustrations of PSO algorithms. Additionally, Human Inspired Differential Evolution (HIDE) is a recently proposed method based on Differential Evolution and MSHO; for particular parameter settings, HIDE performed approximately as well as Differential Evolution. In the experiment in this paper, a new algorithm titled "Hassan Satish Particle Swarm Optimization (HSPSO)" is proposed. HSPSO is tested by applying it to a complex benchmark function, and interesting hybridization results have been obtained. The results obtained by HSPSO are compared with Particle Swarm Optimization.
Researchers have studied feeder reconfiguration problems using different methods over the past decades, and the results of these studies provide acceptable solutions for feeder reconfiguration problems. Heuristic methods to minimize power losses and improve the search speed were proposed in (Baran & Wu, 1989). Soft computing approaches were applied to the problem extensively as well, for example, neural networks (Kim et al., 1993), simulated annealing (SA) (Chang & Kuo, 1994), genetic algorithms (GA) (Nara et al., 1992; Kitayama & Matsumoto, 1995), and evolutionary programming (EP) (Hsiao, 2004; Hsu & Tsai, 2005). Algorithms based on the concept of mimicking swarm intelligence have been popular in recent years. For instance, ant colony optimization (ACO) (Teng & Lui, 2003; Carpaneto & Chicco, 2004; Khoa & Phan, 2006) and particle swarm optimization (PSO) (Chang & Lu, 2002) are algorithms that can be applied to optimization problems. These algorithms have gradually been applied to power distribution system problems.
Sequential signal processing has a wide range of applications in many fields such as statistical signal processing and target tracking [2,3]. Currently, there are many filtering algorithms, such as the EKF, UKF, PF, and UPF. Particle filtering is a young filtering method. Its advantage over other sequential methods is particularly distinctive in situations where the models used are nonlinear and the involved noise processes are non-Gaussian. An important feature in the implementation of particle filters is that the random measure is recursively updated. With the random measure, one can compute various types of estimates with considerable ease. Particle filtering has three important operations: sampling, weight computation, and re-sampling. With sampling, one generates a set of new particles that represents the support of the random measure, and with weight computation, one calculates the weights of the particles. Re-sampling is an important operation because without it particle filtering degenerates and yields poor results. With re-sampling, one replicates the particles that have large weights and removes the ones with negligible weights.
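The re-sampling operation described above is often implemented as systematic resampling; the sketch below assumes that variant (the text does not commit to a particular scheme), returning the indices of the surviving particles.

```python
import random

def systematic_resample(weights, rng=random):
    """Systematic resampling: return indices of surviving particles.

    Particles with large normalized weights are replicated; those
    with negligible weights tend to be dropped.
    """
    n = len(weights)
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    cdf[-1] = 1.0  # guard against floating-point shortfall
    u0 = rng.random() / n  # single random offset, then evenly spaced points
    indices, j = [], 0
    for i in range(n):
        while cdf[j] < u0 + i / n:
            j += 1
        indices.append(j)
    return indices
```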
Fig. 1 Visualization of Pareto front of SC Model

The respective best, worst, mean, and standard deviation of TC, CC, and ShC are recorded in Tables 1 and 2. Performance evaluation of the solutions obtained using LDIW and GLBIW is recorded in Table 3. It is observed that the GLBIW variant of the PSO algorithm produced the more optimal result, both for the individual best solution and for the mean of ten iterations. The inertia weight computed using the GLBIW PSO variant orients each particle of the swarm towards the global best positions found up to the previous iteration.
popularity in promoting convergence of PSO. The idea of varying coefficients was also extended to dynamically updating the acceleration coefficients in . Convergence and stability analyses were proposed by Clerc and Kennedy . Based on this theoretical analysis, the constriction factor was introduced into PSO to analyze the convergence behavior. Recently, it has been shown that evolutionary algorithms with adaptive control parameters perform well on various benchmarks. The adaptive idea was also applied in designing variants of PSO [16, 21]. In Ref. , an adaptive particle swarm optimization (APSO) was presented. By evaluating the population distribution and particle fitness, four evolutionary states are defined, which enable the automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time to improve the search efficiency and convergence speed. It was found that APSO performs well on unimodal and unrotated optimization problems, with fast convergence speed. However, when dealing with rotated optimization problems and composite problems, APSO is confronted with the problem of premature convergence. In the field of adaptively tuning the population size, an efficient population utilization strategy for PSO (EPUS-PSO) was presented , adopting an adaptive population manager to improve the efficiency of PSO.
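Clerc and Kennedy's constriction factor mentioned above has a closed form, chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)| for phi = c1 + c2 > 4. A small sketch of the computation (the function name is ours):

```python
import math

def constriction_factor(phi):
    """Clerc-Kennedy constriction coefficient, valid for phi > 4."""
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

# For the common choice c1 = c2 = 2.05 (phi = 4.1),
# chi comes out to approximately 0.7298.
chi = constriction_factor(4.1)
```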
It can be understood from the above discussion that dynamic optimization problems require an efficient and effective algorithm to solve them. Meta-heuristic methods are among the efficient techniques for addressing dynamic optimization problems. One such technique which has gained popularity in recent years is Particle Swarm Optimization (PSO). Particle swarm optimization algorithms have been widely preferred in recent years over other evolutionary algorithms such as Genetic Algorithms (GAs). PSO involves fewer parameters to adjust, and it does not remove particles from the population; instead, it changes the particles' locations to arrive at the optimal solution. Unlike GAs, PSO does not share information through chromosomes; only the location of the global best is shared among the particles throughout the run. The efficiency of PSO is generally high compared to other evolutionary algorithms: PSO can locate a good optimal solution in few function evaluations, which makes it more cost-effective. In addition, PSO has the flexibility to control the balance between global and local exploration of the search space. This feature enhances the search capability of the algorithm and helps avoid premature convergence, making it more robust. However, a plain PSO algorithm is not sufficient to address dynamic optimization problems.
This work presents an algorithm for solving the unconstrained optimization formulation of regularized image reconstruction. Experiments on a set of standard image recovery problems have shown that the proposed algorithm (Wiener filtering with PSO) is much better than previous state-of-the-art methods. In this research, various methods for noise reduction have been analyzed using several well-known measurement metrics. The results show that noise reduction using the PDE technique is much better than with other methods; in addition, this method better enhances the quality of the image. Using PDEs, the unconstrained image problem can easily be regularized. The
Maintenance of diversity is also important in finding a good solution in a high-dimensional search domain. Delayed information propagation by sparsely connected neighborhoods contributes to this. Other modifications specifically aimed at the maintenance of diversity include particle replacement from a dense cluster (Løvbjerg and Krink), use of a repulsion force to avoid particle collision (Krink et al.), and occasional switching of dynamics among the sub-swarms (Al-kazemi and Mohan). Stochastic switching of particle dynamics with a similar aim is found in Vesterstrøm et al.
The complex and often coordinated behavior of swarms fascinates not only biologists but also computer scientists. Bird flocking and fish schooling are impressive examples of coordinated behaviors that emerge without central control. Social insect colonies show complex problem-solving skills arising from the actions and interactions of unsophisticated individuals. Swarm intelligence (SI) systems are typically made up of a population of simple agents interacting locally with one another and with their environment. The agents follow very simple rules, and although there is no centralized control structure dictating how individual agents should behave, local interactions between such agents lead to the emergence of complex global behavior.
A direct application of the conventional PSO algorithm to the optimization of the above FRM FIR digital filters gives rise to two separate problems. The first problem stems from the fact that, in the course of optimization, the PSO algorithm may lead to candidate FRM FIR digital filter particles whose multiplier coefficient values no longer conform to the CSD number format (due to the random nature of the velocity and position of particles). This problem is resolved by generating an indexed look-up table (LUT) of permissible CSD multiplier coefficient values, and by employing the indices of the LUT to represent FRM FIR digital filter multiplier coefficient values. The indices of the LUT conform to the integer number format. Since the integers are closed under the operations of addition and subtraction, PSO is made to search over the permissible CSD values in the LUT during the optimization process. The second problem, on the other hand, stems from the fact that, even with an indexed LUT, the particles may go beyond the boundaries of the LUT in the course of PSO due to the limited search space. This paper presents a new method to keep the particles inside the LUT in course
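One simple baseline for keeping integer-valued particles inside the LUT is to round and clip the indices after each position update. The paper's actual boundary-handling method is not reproduced in this excerpt; the sketch below (helper name ours) shows only the clipping baseline for contrast:

```python
def clamp_indices(position, lut_size):
    """Round a particle's position components to integer LUT indices
    and clip them to the valid range [0, lut_size - 1]."""
    return [min(lut_size - 1, max(0, int(round(p)))) for p in position]
```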
There are PSO variants specialized for solving dynamic optimization problems. Multi-swarm charged PSO and self-organizing scouts PSO were developed for solving dynamic optimization problems. Multi-swarm charged PSO uses charged sub-swarms that repel each other to maintain diversity. Self-organizing scouts PSO splits the swarm if a local optimum is found, with one part focusing on the local optimum and the other part on finding another optimum. The cooperative charged PSO is a combination of a cooperative PSO and a multi-swarm charged PSO. Cooperative charged PSO divides the search space by dimensions, and every sub-swarm optimizes the objective values for certain dimensions. Cooperative charged PSO uses a context vector that holds the values of the global best sub-positions. Particles use the context vector to fill in the dimensions of the position that their sub-swarm does not optimize. Cooperative combinatorial PSO uses the concept of charged particles that repel each other, as well as neutral particles that do not, to increase diversity and counter dynamic changes in the objective function.
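The context-vector mechanism described above can be sketched as follows (helper name and signature are illustrative assumptions): to evaluate a sub-swarm particle, its values are inserted into a copy of the context vector at the dimensions that sub-swarm owns, leaving the other dimensions at their global-best values.

```python
def build_trial(context, dims, sub_position):
    """Insert a sub-swarm particle's values into a copy of the
    context vector at the dimensions owned by that sub-swarm."""
    trial = list(context)  # copy; the shared context is left untouched
    for d, value in zip(dims, sub_position):
        trial[d] = value
    return trial
```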
The original PSO algorithm, presented by Eberhart and Kennedy, is a powerful algorithm for finding optimal solutions. The PSO algorithm imitates the behavior of an animal herd approaching an optimal solution. The basic unit of the swarm is the particle, which represents a candidate solution of a particular application. Each particle moves around a specified area to search for the optimal solution by adjusting its velocity and position. The velocity and position of each particle are influenced by two elements: the personal best solution and the global best solution. The personal best solution and the global best solution are updated according to the fitness value, which is defined by the specific application. After several hundred or more iterations, the particles get close to the optimal solution, which is denoted by the global best solution. The equations of PSO are shown as Eq. (1) and Eq. (2):
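The equations referred to as Eq. (1) and Eq. (2) are not reproduced in this excerpt. The standard PSO velocity and position updates they almost certainly denote (written here with an inertia weight w as in later formulations; the 1995 original omits w) are, with acceleration coefficients c1, c2 and uniform random numbers r1, r2:

```latex
v_{i}^{t+1} = w\, v_{i}^{t} + c_1 r_1 \left( pbest_{i}^{t} - x_{i}^{t} \right) + c_2 r_2 \left( gbest^{t} - x_{i}^{t} \right) \tag{1}
```

```latex
x_{i}^{t+1} = x_{i}^{t} + v_{i}^{t+1} \tag{2}
```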
In his paper, the author reviews the development of the particle swarm optimization method in recent years. Most researchers have tried to improve the performance of PSO by changing the inertia weight and acceleration coefficients, which balance local and global search. The inertia weight (w) is used for balancing global and local search: with an inertia weight decreasing linearly from a large value to a small value, PSO tends to have more global search ability at the beginning of the run and more local search ability near the end of the run. A momentum factor restricts the particles inside the defined search space without checking the boundary at every iteration. Time-varying acceleration coefficients reduce the cognitive component and increase the social component over the run. The inertia weight and acceleration coefficients have also been proposed in terms of the global best (gbest) and local best (pbest) positions of the particles, to avoid manual tuning of parameters. The main advantage of the Pf-PSO algorithm is that it needs no velocity equation and does not require parameters such as the inertia weight and acceleration coefficients to be tuned in order to reach the global best position. In ePSO, the current particle position is updated by extrapolating from the global best particle position and the current particle positions in the search space, in order to refine the search towards the global optimum. This review of PSO therefore helps researchers develop new variants of PSO.
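The linearly decreasing inertia weight and the time-varying acceleration coefficients described above can be sketched directly; the function names and the sample endpoint values in the test are illustrative, not prescribed by the review.

```python
def linear_inertia(w_max, w_min, t, t_max):
    """Linearly decreasing inertia weight: w_max at t = 0, w_min at t = t_max."""
    return w_max - (w_max - w_min) * t / t_max

def tvac(c_initial, c_final, t, t_max):
    """Time-varying acceleration coefficient: moves linearly from
    c_initial to c_final (commonly used to shrink the cognitive c1
    and grow the social c2 over the run)."""
    return c_initial + (c_final - c_initial) * t / t_max
```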
Abstract—A modified particle swarm optimization algorithm is proposed in this paper. In the presented algorithm, every particle chooses its inertial factor according to the degree of closeness between its own fitness and that of the optimal particle. Simultaneously, a random number is introduced into the algorithm in order to jump out of local optima, and a minimum factor is used to avoid premature convergence. Five well-known functions are chosen to test the performance of the suggested algorithm and the influence of the parameter on performance. Simulation results show that the modified algorithm has the advantage of global convergence and can effectively alleviate the problem of premature convergence. At the same time, the experimental results also show that the suggested algorithm is greatly superior to PSO and APSO in terms of robustness.