Adjustability of a discrete particle swarm optimization for the dynamic TSP

It is worth noting that the use of pheromone memory improves the convergence of the DPSO algorithm for the DTSP. If the differences (coordinates of points) between consecutive subproblems of the DTSP are relatively small, then the knowledge about the previous subproblem that is accumulated in the pheromone memory makes it easier to find good solutions for a new subproblem. This is particularly visible when the adopted computational budget is small, which confirms that this algorithm is useful when the problem undergoes frequent changes and the time periods between consecutive changes do not allow long computations. On the other hand, if the problem rarely undergoes modifications, results of similar quality can be obtained with a DPSO in which the pheromone memory is reset after each modification of the problem, so that its execution is equivalent to separate executions of the algorithm for each of the DTSP's subproblems. Although the MMAS and PACO algorithms produced better results in a larger number of cases, their advantage is not big, which shows that the DPSO algorithm is competitive. Further studies should take into account local search heuristics and focus on solving larger DTSP instances (with thousands of cities). It will also be interesting to apply the DPSO algorithm to other dynamic combinatorial optimization problems, such as the dynamic vehicle routing problem. Acknowledgements: This research was supported in part by PL-Grid
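To illustrate the mechanism discussed above, here is a minimal sketch, not the authors' code, of the two ways of handling a problem change: retaining (and optionally damping) the pheromone matrix versus resetting it. The names pheromone, reset_value and damping are assumptions introduced for this example.

```python
import numpy as np

def handle_problem_change(pheromone, mode="retain", reset_value=1.0, damping=0.5):
    """Prepare the pheromone memory for the next DTSP subproblem (illustrative sketch).

    mode="retain": keep (and damp) the accumulated pheromone so that knowledge
                   about the previous subproblem biases the search on the new one.
    mode="reset":  forget everything, equivalent to restarting the DPSO from
                   scratch for each subproblem.
    """
    if mode == "reset":
        return np.full_like(pheromone, reset_value)
    # Damping pulls values toward the reset level instead of keeping them verbatim,
    # which is useful when the change between subproblems is larger.
    return damping * pheromone + (1.0 - damping) * reset_value

# Usage: after each change of the city coordinates
# pheromone = handle_problem_change(pheromone, mode="retain")
```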

A Modified Discrete Particle Swarm Optimization Algorithm for the Generalized Traveling Salesman Problem

[1], computer file processing [2], order picking in warehouses [3], process planning for rotational parts [4], and the routing of clients through welfare agencies [5]. Furthermore, many other combinatorial optimization problems can be reduced to the GTSP [1]. The TSP is NP-hard, and hence the GTSP is NP-hard as well, because partitioning the node set N into |N| subsets, each containing a single node, turns the GTSP into a TSP.

Solving Lattice Protein Folding Problems by Discrete Particle Swarm Optimization

In both the 2D and 3D HP models, a protein sequence folds into a self-avoiding path. We use absolute directions in the search space to represent the conformation. For the 2D HP model, the four absolute directions are left (L), right (R), forward (F) and backward (B). In the 3D HP model, there are two more directions, up (U) and down (D). A direction starts from one cell and points to one of its adjacent cells. The layout approach is similar to that of the travelling salesman problem, if the cells in the lattice are regarded as cities and the connections between cells as arcs. We can therefore employ the discrete evolutionary algorithms that perform successfully on the TSP to deal with the PFP under the HP lattice model.
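As an illustration of this direction-based encoding, the following sketch assumes that L, R, F and B map to fixed unit moves on the 2D lattice; it decodes a direction string into coordinates and rejects paths that are not self-avoiding. It is not taken from the paper.

```python
# Sketch (assumed mapping): decode a 2D HP conformation given as a string of the
# directions L, R, F, B and check that the resulting path is self-avoiding.
MOVES_2D = {"L": (-1, 0), "R": (1, 0), "F": (0, 1), "B": (0, -1)}

def decode_conformation(directions):
    """Return the lattice coordinates of the residues, or None if the path collides."""
    pos = (0, 0)
    path = [pos]
    visited = {pos}
    for d in directions:
        dx, dy = MOVES_2D[d]
        pos = (pos[0] + dx, pos[1] + dy)
        if pos in visited:          # path is not self-avoiding
            return None
        visited.add(pos)
        path.append(pos)
    return path

# Example: a string of n-1 directions places an n-residue chain on the lattice.
print(decode_conformation("RRFL"))
```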

A Discrete Particle Swarm Optimization Algorithm for Archipelago Berth Allocation Problem

PSO, introduced in Ref. [11], was discovered through simulation of a simplified social model and has found wide application in a variety of fields. Ref. [12] proposed equations analogous to the classical PSO equations and then presented a discrete particle swarm optimization algorithm for the problem of scheduling parallel machines. Ref. [13] proposed an improved version of the PSO approach to solve Traveling Salesman Problems (TSP). Ref. [14] introduced a new hybrid, nature-inspired algorithm based on PSO for the vehicle routing problem; it was tested on a set of benchmark instances and produced very satisfactory results. Ref. [15] presented a meta-heuristic approach to the portfolio optimization problem using the PSO technique; the PSO model demonstrated high computational efficiency in constructing optimal risky portfolios. Ref. [16] studied dynamic routing problems and proposed several effective algorithms. Ref. [17] proposed HPSO, which adds particles' neighbor information to diversify the particle swarm and enhance the convergence speed. Ref. [18] designed a modified PSO which can enhance the quality and speed of the particle evolution.

Transmission Expansion Planning – A Multiyear Dynamic Approach Using a Discrete Evolutionary Particle Swarm Optimization Algorithm

Regarding the reliability evaluation, the developed approach penalizes plans in which the PNS is non-zero for configurations of the network associated with N-1 contingencies and with a selected number of N-2 contingencies. This follows the indications in the Grid Codes of several countries, which explicitly state that the system should be able to supply the demand for all N-1 contingencies and for a number of N-2 contingencies selected according to specific criteria. The evaluation can be modified by extending the number of configurations analysed or, in the limit, by running a Monte Carlo simulation for every particle, but such a strategy would lead to a dramatic increase in computation time. The penalty terms are introduced into the objective function of the problem using large values for the penalty coefficients.
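A minimal sketch of such a penalized objective, assuming the plan cost splits into investment and operation terms and that PNS values per analysed contingency are available; all names here are illustrative, not the authors' implementation.

```python
# Sketch (assumed structure): penalize expansion plans whose Power Not Supplied
# (PNS) is non-zero under the analysed N-1 and selected N-2 contingencies,
# using large penalty coefficients.
def penalized_cost(investment_cost, operation_cost,
                   pns_n1, pns_n2,
                   k_n1=1e6, k_n2=1e6):
    """pns_n1 / pns_n2: lists of PNS values (MW) for each analysed contingency."""
    penalty = k_n1 * sum(pns_n1) + k_n2 * sum(pns_n2)
    return investment_cost + operation_cost + penalty

# A plan with any unsupplied demand under the analysed contingencies is
# dominated by feasible plans because of the large coefficients.
print(penalized_cost(120.0, 45.0, pns_n1=[0.0, 0.0], pns_n2=[3.5]))
```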

Discrete Particle Swarm Optimization Algorithm for Flowshop Scheduling

To find an exact solution for such combinatorial problems, a branch-and-bound or dynamic programming algorithm is often used when the problem size is small. Exact solution methods are impractical for solving the FSP with a large number of jobs and/or machines. For large-sized problems, heuristic procedures provide a simple and quick way of finding good solutions for the FSP instead of optimal ones. A heuristic is a technique that seeks (and hopefully finds) good solutions at a reasonable computational cost; it is approximate in the sense that it provides a good solution for relatively little effort, but it does not guarantee optimality. A heuristic can be a rule of thumb that is used to guide one's actions. Heuristics for the FSP can be constructive heuristics or improvement heuristics. Constructive heuristics build a feasible schedule from scratch, while improvement heuristics try to improve a previously generated schedule by applying some form of specific improvement method. Various constructive heuristics have been proposed by Johnson (1954), Palmer (1965), Campbell et al. (1970), Dannenbring (1977) and Nawaz et al. (1983), and the literature shows that constructive heuristic methods give very good results for NP-hard combinatorial optimization problems. The application of such heuristics provides a simple and quick way of finding good solutions for FSPs instead of optimal ones (Ruiz & Maroto, 2005; Dudek et al., 1992).

Johnson's algorithm (1954) is the earliest known heuristic for the FSP and provides an optimal solution for the two-machine problem to minimize makespan. Palmer (1965) developed a very simple heuristic in which a "slope index" is calculated for every job and the jobs are then scheduled in non-increasing order of this index. Ignall & Schrage (1965) applied the branch-and-bound technique to the flowshop sequencing problem. Campbell et al. (1970) developed a heuristic algorithm known as CDS.
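For concreteness, here is a short sketch of Palmer's slope-index heuristic as it is commonly stated (sign conventions vary between sources):

```python
# Palmer (1965) slope-index heuristic, as commonly formulated: jobs with larger
# processing times on later machines receive a larger index and are scheduled first.
def palmer_sequence(p):
    """p[j][i] = processing time of job j on machine i (i = 0..m-1)."""
    m = len(p[0])
    def slope(job):
        return sum((2 * (i + 1) - m - 1) * t for i, t in enumerate(job))
    # Schedule jobs in non-increasing order of the slope index.
    return sorted(range(len(p)), key=lambda j: slope(p[j]), reverse=True)

# Example with 3 jobs on 3 machines.
print(palmer_sequence([[3, 4, 9], [7, 5, 2], [4, 4, 4]]))
```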

Adaptive particle swarm optimization

In addition to research on parameter control and auxiliary techniques, PSO topological structures are also widely studied. The LPSO with a ring topological structure and the von Neumann topological structure PSO (VPSO) have been proposed by Kennedy and Mendes [45], [46] to enhance the performance in solving multimodal problems. Further, dynamically changing neighborhood structures have been proposed by Suganthan [37], Hu and Eberhart [47], and Liang and Suganthan [48] to avoid the deficiencies of fixed neighborhoods. Moreover, in the "fully informed particle swarm" (FIPS) algorithm [49], the information of the entire neighborhood is used to guide the particles. The CLPSO in [12] lets a particle use different pBests to update its flight in different dimensions for improved performance in multimodal applications.
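As a small illustration of how a ring (lbest) topology restricts the information each particle sees, here is a hedged sketch; the function name and the minimisation convention are assumptions, not code from any of the cited works.

```python
# Illustration: in a ring (lbest) topology each particle only sees the personal
# bests of its nearby ring neighbours, which slows information spread and
# tends to help on multimodal problems.
def ring_local_best(pbest_fitness, i, radius=1):
    """Return the index of the best personal best in particle i's ring neighbourhood."""
    n = len(pbest_fitness)
    neighbours = [(i + k) % n for k in range(-radius, radius + 1)]
    return min(neighbours, key=lambda j: pbest_fitness[j])  # minimisation assumed

# Particle 0 of 5 looks only at particles 4, 0 and 1.
print(ring_local_best([3.0, 1.5, 0.2, 5.0, 0.9], i=0))
```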

PSO-Particle Swarm Optimization

For the swarm to be able to converge fully to a local optimum, diversity must be limited. Because crossover moves a particle to a completely different, distant position, and thus increases the particle's velocity, the constriction factor χ had to be adjusted. Experiments showed that the algorithm achieves its best results with χ = 0.95 ∙ 0.792 = 0.752 and with crossover performed once every 100 iterations. Compared with standard PSO, an improvement is achieved on most multimodal search spaces, where the new solutions obtained after crossover can help the algorithm escape from a local optimum. The improvement over PSO with crossover of current positions (x) is consistent on multimodal problems and even pronounced on unimodal, high-dimensional problems. The overall performance (across multiple functions) of PSO with pBest crossover is better than that of locust swarm PSO (see Section 3.5.16).

Particle Swarm Optimization - A Survey

This paper reviewed the progress of research on Particle Swarm Optimization (PSO). It is obvious that only a tiny portion of the literature could be referred to, and application papers had to be omitted entirely. Research on PSO, whether to improve its performance, to modify it for various objectives, or to apply it to various problems, is highly divergent (an explosion, so to say). Over 1600 articles can be found in IEEE publications alone (http://ieeexplore.ieee.org), over 450 articles are found in the CiNii database, which reflects research activity in Japan (http://ci.nii.ac.jp/en), and more than 177000 web pages related to PSO exist throughout the world, according to Google (http://www.google.com). These numbers reflect the ever-increasing interest of researchers and engineers in PSO.

Cultural Particle Swarm Optimization

scheme can be either random or in a sequential order. Ray and Liew [53] used Pareto dominance and combined concepts of evolutionary techniques with the particle swarm; this algorithm uses crowding distance to preserve diversity. Hu and Eberhart [54], in their dynamic neighborhood PSO, proposed an algorithm that optimizes only one objective at a time; the algorithm may be sensitive to the order in which the objective functions are optimized. Fieldsend and Singh [55] proposed an approach that uses an unconstrained elite archive to store the nondominated individuals found along the search process; the archive interacts with the primary population in order to define local guides. Mostaghim and Teich [56] introduced a sigma method in which the best local guides for each particle are adopted to improve the convergence and diversity of the PSO. Li [57] adopted the main idea of NSGA-II into the PSO algorithm. Coello Coello et al. [58], on the other hand, proposed an algorithm that uses a repository for the nondominated particles along with an adaptive grid to select the global best of the PSO. The algorithms proposed to solve MOPs using PSO are based on promoting the nondominated particles at any given time, not on exploiting the information of all particles in the population.

Dynamic Economic Dispatch Assessment Using Particle Swarm Optimization Technique

This paper presents the application of the Particle Swarm Optimization (PSO) technique to solving the Dynamic Economic Dispatch (DED) problem. The DED is one of the main functions in power system planning for obtaining optimum power system operation and control. It determines the optimal operation of the generating units at every predicted load demand over a certain period of time. The optimum operation of the generating units corresponds to the minimum total generation cost while the system operates within its limits. The PSO-based DED technique is tested on a 9-bus system consisting of three generator buses, six load buses and twelve transmission lines.
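A rough sketch of how a PSO particle could be evaluated for such a DED problem, assuming quadratic fuel-cost curves and penalty terms for the power balance and unit limits; the coefficients and names are illustrative, not taken from the paper.

```python
# Sketch (illustrative only): evaluate one PSO particle for a DED problem.
# A particle holds the generation P[t][g] of each unit g in each period t.
def ded_cost(P, demand, a, b, c, pmin, pmax, penalty=1e6):
    total = 0.0
    for t, gen in enumerate(P):
        total += sum(a[g] + b[g] * p + c[g] * p * p for g, p in enumerate(gen))
        total += penalty * abs(sum(gen) - demand[t])            # power balance
        total += penalty * sum(max(0.0, pmin[g] - p) + max(0.0, p - pmax[g])
                               for g, p in enumerate(gen))      # unit limits
    return total

# Example: two periods, two units.
print(ded_cost(P=[[100, 50], [120, 60]], demand=[150, 180],
               a=[5, 5], b=[1.2, 1.5], c=[0.01, 0.02],
               pmin=[20, 20], pmax=[150, 100]))
```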

Small world network based dynamic topology for particle swarm optimization

improvements such as gradually increasing the local neighborhood, time-varying random walk and inertia weight values, and two alternative schemes for determining the local optimal solution for every individual [9]. However, this method adds extra time consumption when calculating distances and thus increases the complexity of the algorithm. In 1969, Travers and Milgram published their study of the 'six degrees of separation' phenomenon [14][21]. In [10], Watts and Strogatz proposed the classic 'small world' network model and its construction method, and showed that the characteristics of the network connections can influence the speed at which information flows [6][11]. 'Small world' networks have since been observed in many real-world problems, such as data clustering, optimization of oil and gas field development planning, linear programming, reactive power optimization, computer science, networks of brain neurons, telecommunications, mechanics, and social influence networks [17][18][19][20]. Since particle swarm optimization was inspired by nature, this paper postulates that optimization performance can be enhanced if the 'small world' network topology is used as part of the particle swarm optimization process. A Dynamic Topology Particle Swarm Optimization based on the 'Small World' network (DTSWPSO), which imitates information dissemination in 'small world' networks by dynamically adjusting the neighborhood topology, is proposed. Moreover, the varying neighborhood strategy can effectively coordinate the exploration and exploitation abilities of the algorithm. The DTSWPSO, using the dynamic 'small world' neighborhood structure, is compared with the classic topology version.
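To make the idea concrete, below is a sketch of the Watts-Strogatz construction applied to a swarm, which is one plausible way to build such a 'small world' neighborhood; the parameter names and their use as informant sets are assumptions, not the DTSWPSO code.

```python
import random

# Watts-Strogatz style construction: start from a ring lattice in which each
# particle is linked to its k nearest neighbours, then rewire each link with
# probability p to a random particle. The resulting 'small world' neighbourhood
# could serve as the informant set of each particle.
def small_world_neighbourhood(n, k=4, p=0.1, seed=None):
    rng = random.Random(seed)
    neigh = {i: set() for i in range(n)}
    for i in range(n):
        for offset in range(1, k // 2 + 1):
            j = (i + offset) % n
            if rng.random() < p:                       # rewire this edge
                j = rng.randrange(n)
                while j == i or j in neigh[i]:
                    j = rng.randrange(n)
            neigh[i].add(j)
            neigh[j].add(i)
    return neigh

# Informants of particle 0 in a 10-particle swarm.
print(small_world_neighbourhood(10, k=4, p=0.2, seed=1)[0])
```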

Applications of Particle Swarm Optimization

The U.S. Military is investigating swarm techniques for controlling unmanned vehicles. The European Space Agency is considering an orbital swarm for self-assembly and interferometry. NASA is investigating the use of swarm technology for planetary mapping. A 1992 paper by M. Anthony Lewis and George A. Bekey discussed the possibility of using swarm intelligence to control nanobots within the body for the purpose of killing cancer tumors.

Overview of Particle Swarm Optimization

ABSTRACT: This paper proposes the application of the Particle Swarm Optimization (PSO) technique to solve Optimal Power Flow with inequality constraints on line flow. To ensure secure operation of the power system, it is necessary to keep the line flow within the prescribed MVA limit so that the system operates in the normal state. The problem involves a non-linear objective function and constraints, so a population-based method like PSO is more suitable than conventional Linear Programming methods. The approach is applied to a six-bus, three-unit system and the results are compared with those of the Linear Programming method for different test cases. The obtained solution proves that the proposed technique is efficient and accurate.

A. Binary Particle Swarm Optimization

Abstract — Feature selection is a useful technique for increasing classification accuracy. The primary objective is to remove irrelevant features from the feature space and identify the relevant ones. Binary particle swarm optimization (BPSO) has been applied successfully to the feature selection problem. In this paper, chaotic binary particle swarm optimization (CBPSO), in which a logistic map determines the inertia weight, is used. The K-nearest neighbor (K-NN) method with leave-one-out cross-validation (LOOCV) serves as the classifier for evaluating classification accuracy. Experimental results indicate that the proposed method not only reduces the number of features but also achieves higher classification accuracy than other methods.
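The following sketch illustrates the general idea behind CBPSO: a logistic map drives the inertia weight inside a standard sigmoid-based BPSO update. It is an assumption-laden illustration, not the authors' exact algorithm, and the parameter values are placeholders.

```python
import math, random

# Logistic map: generates a chaotic sequence in (0, 1) used here as the
# inertia weight of the current iteration.
def logistic_map(w, r=4.0):
    return r * w * (1.0 - w)

# Standard sigmoid-based BPSO step: velocities become bit-flip probabilities.
def bpso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, vmax=6.0):
    new_x, new_v = [], []
    for j in range(len(x)):
        vj = (w * v[j]
              + c1 * random.random() * (pbest[j] - x[j])
              + c2 * random.random() * (gbest[j] - x[j]))
        vj = max(-vmax, min(vmax, vj))              # velocity clamping
        prob = 1.0 / (1.0 + math.exp(-vj))          # sigmoid transfer function
        new_x.append(1 if random.random() < prob else 0)
        new_v.append(vj)
    return new_x, new_v

w = 0.48                      # any start value in (0, 1) away from fixed points
for _ in range(10):
    w = logistic_map(w)       # chaotic inertia weight for this iteration
    # x, v = bpso_step(x, v, pbest, gbest, w)
```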

Handling Multi Objectives of with Multi Objective Dynamic Particle Swarm Optimization

Abstract — Transportation networks are used widely in areas such as communication, topological configuration, transportation and many more. The basic function of such a network is to provide a low-cost, low-delay route from a source to a remotely located destination. The important factors determined by the network topology are reliability and cost. A network topology consists of nodes and the links between them, and selecting the optimal path in the topology is an NP-hard problem; the selection becomes complex with classical enumeration and optimality-selection methods. An evolutionary method is therefore required so that both objectives of network topology design are satisfied. Given these characteristics, transportation requires an algorithm that can select an optimal network topology with low cost and little time while remaining highly reliable. In this paper, the MODPSO algorithm is proposed, a meta-heuristic inspired by the natural environment that uses an evolutionary method. The Particle Swarm Optimization algorithm is implemented in a dynamic manner, i.e. values can change while the experiment is running, which makes it a Dynamic Particle Swarm Optimization. It is considered an enhancement over traditional methods and the basic PSO framework. The paper describes the dynamic selection of leaders and uses a variant for the optimization of multiple objectives. It also provides an efficient mechanism to calculate reliability and cost, and is an efficient and effective way to optimize a network topology. In the simulation environment the results obtained are satisfactory, and a comparison shows that the optimized results are better than those of the basic PSO framework.

Discrete particle swarm optimisation for ontology alignment

The presented DPSO differs from the approach by Correa et al. mainly in two aspects. Firstly, the size, i.e. the dimensionality, of each particle is updated in each iteration, whereas in the approach of Correa et al. each particle is given a randomly chosen size that does not change throughout the iterations. In their approach this is reasonable, seeing that in their experiment [2] the authors used a population size much larger than the number of possible particle lengths. For the problem of ontology alignment the number of possible particle lengths can be much larger, since it depends on the size of the input ontologies, i.e. their number of classes and properties. It might thus become difficult to increase the population size accordingly, which makes it necessary to adjust the particle lengths dynamically in order to find the optimal size of an alignment. The second aspect in which this approach differs from that of Correa et al. is the particle update procedure. In this approach, the change of a particle's configuration depends not only on the configuration of the personal best and global best, but also on the evaluation of the single correspondences. This is not possible in the use case of attribute selection for a classifier, as attributes cannot be evaluated independently.

Discrete Multi Objective Particle Swarm Optimization Algorithm for FPGA Placement (RESEARCH NOTE)

The placement process is one of the vital stages in physical design. In this stage, the modules and elements of the circuit are placed at distinct locations based on optimization processes; hence, each placement decision influences one or more optimization factors. On the other hand, it can be stated unequivocally that the FPGA is one of the most important and widely applicable devices in our electronic world, so it is worth spending time on a better understanding of its structure. VLSI research looks for new techniques for minimizing the cost of FPGAs in order to gain better performance. Diverse algorithms are used for FPGA placement, and particle swarm optimization (PSO) is known to be one of the practical evolutionary algorithms for this kind of application, so it is used here for solving placement problems. In this work, a novel method for optimized FPGA placement has been used. The goal is to optimize two objectives, defined as the wire length and overlap removal functions. Consequently, we are forced to use multi-objective particle swarm optimization (MOPSO) in the algorithm. The MOPSO produces a set of solutions, among which we have tried to find a unique solution with minimum overlap. It is worth noting that the discrete nature of FPGA blocks forced us to use a discrete version of PSO; in fact, we need a combination of multi-objective PSO and discrete PSO to achieve our optimization goals. Results on FPGA benchmarks (the MCNC benchmarks) are shown in the "experimental results" section and compared with the popular VPR method. These results show that proper selection of the FPGA's size and a reasonable number of blocks can give a good response.

Introductory Chapter: Swarm Intelligence and Particle Swarm Optimization

Particle swarm optimization (PSO) is accepted as the second population-based algorithm inspired by animals. After James Kennedy (a social psychologist) and Russell C. Eberhart simulated bird flocking and fish schooling foraging behaviors, they applied this simulation to the solution of an optimization problem and published their idea at a conference in 1995 [3] for the optimization of continuous nonlinear functions. There are two main concepts in the algorithm: a velocity and a coordinate for each particle. Each particle has a coordinate and an initial velocity in the solution space, and as the algorithm progresses the particles converge toward the coordinates of the best solution. Since PSO is quite simple to implement, requires little memory and has no evolutionary operators, it is also a fast algorithm. Different versions of PSO, some using additional operators, have been developed since the first version of PSO was published. In the first versions of PSO, the velocity was calculated with a basic formula using the current velocity, the personal best and the local best, multiplied by stochastic variables. The current particle updates its previous velocity using not only its previous best but also the global best, and the total probability is distributed between the local and global best using stochastic variables. In later versions, in order to control the velocity, an inertia weight was introduced by Shi and Eberhart in 1998 [4]. The inertia weight balances the local and global search abilities of the algorithm and specifies the rate at which the previous velocity contributes to the current velocity. Researchers have made different contributions to the inertia weight concept: linearly, exponentially or randomly decreasing, as well as adaptive, inertia weights were introduced by different researchers [5]. In a later version of PSO, a new parameter called the constriction factor was introduced by Clerc and Kennedy [6, 7]. The constriction factor (K) arose from studies on the stability and convergence of PSO, and Clerc indicates that its use ensured the convergence of PSO. A comparison between the inertia weight and the constriction factor was published by Shi and Eberhart [8]. Nearly all engineering disciplines and science problems have been addressed with PSO. Some of the most studied problems solved with PSO come from Electrical Engineering, Computer Science, Industrial Engineering, Biomedical Engineering, Mechanical Engineering and Robotics. In Electrical Engineering, the power distribution problem [9] is solved with PSO; another much-studied problem in Electrical Engineering is the economic dispatch problem [10, 11]. In Computer Science, face localization [12], edge detection [13], image segmentation [14], image denoising
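A minimal sketch of the velocity and position update described above, using commonly cited parameter values (the exact values and names are placeholders); the constriction-factor variant is noted in a comment.

```python
import random

# Velocity/position update with the inertia weight w of Shi and Eberhart.
# Clerc and Kennedy's variant multiplies the whole bracketed sum (including the
# previous velocity) by the constriction factor K instead of using w.
def update_particle(x, v, pbest, gbest, w=0.729, c1=1.494, c2=1.494):
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        v[d] = (w * v[d]
                + c1 * r1 * (pbest[d] - x[d])     # cognitive pull toward personal best
                + c2 * r2 * (gbest[d] - x[d]))    # social pull toward global best
        x[d] += v[d]
    return x, v

# Example: one update of a 2-dimensional particle.
print(update_particle([0.0, 0.0], [0.1, -0.2], pbest=[1.0, 1.0], gbest=[2.0, -1.0]))
```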

Genetic based discrete particle swarm optimization for elderly day care center timetabling

Next, we have to check the violation of constraints associated with each of the particles. Since each cell of a particle denotes an assignment of one event to one timeslot, such an assignment may or may not violate constraints. If the highest cost of constraint violation is found for such an assignment, the objective value of the particle may be significantly reduced by adjusting this particular assignment (i.e., allocating that event to another timeslot). Thus, we have to capture the cost of constraint violation for all cells of a particle during the search process. This task involves attaching a new data structure called a "template" to each particle (Figure 3). Suppose the i-th template for the i-th particle is denoted by Temp_i^t; then Temp_{i,j}^t stores the total cost of the constraints violated by x_{i,j}^t. This cost has two components: the number of hard constraint violations multiplied by a large constant (denoted by W, say 10^6) and the number of soft constraint violations multiplied by 1. For ease of representation, the events are omitted from the particle representation in the following parts; only their notation x_{i,j}^t is shown.
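A small sketch of how such a template could be computed alongside a particle, with W = 10^6 as in the text; the callback names are assumptions introduced for illustration.

```python
# Template of a particle: for every cell (event j assigned to a timeslot), store
# W * (# hard violations) + 1 * (# soft violations) caused by that assignment.
W = 10**6

def build_template(particle, count_hard, count_soft):
    """particle[j] = timeslot of event j; count_* are problem-specific callbacks."""
    return [W * count_hard(j, slot) + count_soft(j, slot)
            for j, slot in enumerate(particle)]

# The cell with the largest template value is the most promising one to
# reassign, because repairing it reduces the objective value the most.
def most_violating_cell(template):
    return max(range(len(template)), key=lambda j: template[j])
```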