Introduction To Swarm Intelligence And Its Application In Optimisation


(1)

TEKBAC GROUP OF COMPANIES

Australia Singapore Malaysia

Connecting the World through Emerging Technologies

Silah Hayati Binti Kamsani (Dr)

has successfully completed a 12-hour

professional development programme

on

INTRODUCTION TO SWARM INTELLIGENCE
AND ITS APPLICATION IN OPTIMISATION

on 3 & 4 April 2017

Professor Andries Engelbrecht

(2)


TEKBAC Group of Companies

Australia Singapore Malaysia

Connecting the World through Emerging Technologies

Professional Seminars since 2001

PROGRAMME

Day 1

9.00 am to 10.30 am    Session 1
10.30 am to 10.45 am   Tea/coffee break
10.45 am to 12.30 pm   Session 2
12.30 pm to 1.45 pm    Lunch
1.45 pm to 3.30 pm     Session 3
3.30 pm to 3.45 pm     Tea/coffee break
3.45 pm to 5.00 pm     Session 4

Day 2

9.00 am to 10.30 am    Session 5
10.30 am to 10.45 am   Tea/coffee break
10.45 am to 12.30 pm   Session 6
12.30 pm to 1.45 pm    Lunch
1.45 pm to 3.30 pm     Session 7
3.30 pm to 3.45 pm     Tea/coffee break
3.45 pm to 5.00 pm     Session 8
                       Workshop Evaluation
                       End of Workshop

WORKSHOP ON INTRODUCTION TO SWARM INTELLIGENCE AND ITS APPLICATION IN OPTIMISATION BY PROFESSOR ANDRIES ENGELBRECHT

(3)


WORKSHOP OUTLINE

Day 1: How does it work?

Topic 1: Computational Swarm Intelligence

• What is swarm intelligence?
• What is collective intelligence?
• Emergent behavior

Topic 2: Particle Swarm Optimization

• Origins of particle swarm optimization

• The basic foundations of particle swarm optimization

• On the convergence behavior of particle swarm optimization
• Particle swarm optimization algorithms

• Heterogeneous particle swarm optimization

Topic 3: Ant Algorithms

• Foraging behavior of ants and the ant colony meta-heuristic

• Cemetery formation behavior of ants and its application to sorting and clustering

• Division of labor behavior of ants

Day 2: How do I use it to solve optimization problems?

Topic 4: Optimization Theory

• Formal definitions of optimization problems

• Review of different classes of optimization problems

Topic 5: Using Particle Swarm Optimization to Solve Optimization Problems

• Unconstrained optimization problems
• Discrete-valued versus continuous-valued optimization problems
• Constrained optimization problems
• Multimodal optimization
• Dynamic optimization problems
• Multi-Objective Optimization
• Dynamic Multi-Objective Optimization

Topic 6: Using Particle Swarm Optimization to Solve Real-World Problems

• How to solve a continuous optimization problem?

• How to train a neural network?
• How to cluster data?

• How to do DNA sequence alignment?

• How to train a game agent?

Topic 7: Using Ant Colony Optimisation to Solve Optimisation Problems

• Problem Requirements

• Continuous-Valued Optimisation Problems

• Multi-Objective Optimisation Problems

• Dynamic Optimisation Problems
• Solving the Travelling Salesman Problem


(4)

Topic 1: Computational Swarm Intelligence


(5)

Introduction to Computational Swarm Intelligence

and Its Application in Optimization

AP Engelbrecht

Computational Intelligence Research Group (CIRG) Department of Computer Science

University of Pretoria South Africa

engel@cs.up.ac.za http://cirg.cs.up.ac.za

Topic 1: Computational Swarm Intelligence

• What is Swarm Intelligence?
• What is Computational Swarm Intelligence?
• Computational Swarm Intelligence Paradigms

Topic 2: Particle Swarm Optimization

• Origins of Particle Swarm Optimization
• Basic Foundations of Particle Swarm Optimization
• Convergence Behavior of Particle Swarm Optimization
• Particle Swarm Optimization Algorithms
• Heterogeneous Particle Swarm Optimization
• Self-Adaptive Particle Swarm Optimization

Topic 3: Ant Algorithms

• Foraging Behavior of Ants
• Cemetery Formation Behavior of Ants
• Division of Labor Behavior of Ants

Topic 4: Optimization Theory


Andries Engelbrecht received the Masters and PhD degrees in Computer Science from the University of Stellenbosch, South Africa, in 1994 and 1999 respectively. He is a Professor in Computer Science at the University of Pretoria, and serves as Head of the department. He also holds the position of South African Research Chair in Artificial Intelligence. His research interests include swarm intelligence, evolutionary computation, artificial neural networks, artificial immune systems, and the application of these Computational Intelligence paradigms to data mining, games, bioinformatics, finance, and difficult optimization problems. He is author of two books, Computational Intelligence: An Introduction and Fundamentals of Computational Swarm Intelligence. He holds an A-rating from the South African National Research Foundation.

• Optimization Problems and Algorithm Classes
• Formal Definitions of Optimization Problem Classes

Topic 5: Using Particle Swarm Optimization to Solve Optimization Problems

• Boundary Constrained Optimization Problems
• Discrete-Valued Optimization Problems
• Multi-Solution Optimization Problems
• Dynamic Optimization Problems
• Constrained Optimization Problems
• Multi-Objective Optimization Problems

• Dynamic Multi-Objective Optimization Problems

Topic 6: Using Particle Swarm Optimization to Solve Real-World Problems

• Function Optimization
• Data Clustering
• Doing Multiple Sequence Alignment

(6)

Presentation Outline (cont.)

• Training A Neural Network
• Training A Game Agent

Topic 7: Using Ant Colony Optimization to Solve Optimization Problems

• Problem Requirements

• Continuous-Valued Optimization Problems
• Multi-Objective Optimization Problems
• Dynamic Optimization Problems

• Solving the Traveling Salesman Problem


• What is swarm intelligence?

• What is computational swarm intelligence?

• Computational swarm intelligence paradigms

Engelbrecht (University of Pretoria), Computational Swarm Intelligence, 2016

What is Swarm Intelligence?


• In nature, we find many examples of complex organisms that consist of collections of very simple individual organisms

• Each individual organism can not survive on its own, but collectively a complex organism is formed that exhibits very complex behaviors

• Many computational models of such complex biological organisms have been developed and have been used to solve very difficult real-world problems

• These computational models are usually very simple, as only the individual simple behaviors of the member organisms need to be modeled


What is a swarm?

• A group/collection of (generally mobile) agents which communicate with each other (either directly or indirectly) by acting on their local environment
• Each individual agent is a very simple stimulus/response agent, which can
  • sense immediate (local) stimuli,
  • perform a simple action, which
  • effects a change in the local environment


(7)

What is swarm intelligence?

• Interaction among agents results in distributive collective problem-solving behavior

• Swarm intelligence refers to the problem-solving behavior that emerges from the interaction among such simple agents

• Formally: Swarm intelligence is the property of a system whereby the collective behaviors of unsophisticated agents interacting locally with their environment cause coherent functional global patterns to emerge
• Also referred to as collective intelligence
• Self-organized, no centralized control


Computational swarm intelligence refers to algorithmic models of such collective behavior seen in biological systems. Examples of computational swarm intelligence paradigms include:

• Ant Colony Meta-Heuristic
• Particle Swarm Optimization
• Bacterial Foraging Optimization
• Artificial Bee Colony Optimization
• Firefly/Glow Worm Optimization
• Fish School Optimization
• Cuckoo Search Algorithm
• Bat Algorithm
• Krill Herd Algorithm
• Monkey Search
• Eagle Search Strategy
• Wasp Swarm Algorithm
• Bean Optimization Algorithm
• Dolphins Herds Algorithm
• Fireworks Algorithms
• Feral-Dogs Herd Algorithm
• Cat Swarm Algorithm
• Wolf Search Algorithm
• Gravitational Search Algorithm
• Intelligent Waterdrops Algorithm
• Invasive Weed Optimization
• Jumping Frogs Optimization
• Locust Swarms
• Magnetic Optimization Algorithm
• Mosquito Swarms Algorithm
• Rats Herds Algorithm
• River Formation Dynamics
• Roach Infestation Optimization
• Zooplankton Swarms Algorithm
• Stochastic Diffusion Search
• Fungus Search Algorithm
• Dolphin Swarm Algorithm
• Wild Monkey Foraging Algorithm
• Desert Ant Foraging Algorithm


(8)

Topic 2: Particle Swarm Optimisation


(9)

• Origins of Particle Swarm Optimization

• Basic Foundations of Particle Swarm Optimization

• Convergence Behavior of Particle Swarm Optimization

• Particle Swarm Optimization Algorithms

• Heterogeneous Particle Swarm Optimization

• Self-Adaptive Particle Swarm Optimization

What are the origins of PSO?

• In the work of Reynolds on "boids":
  • collision avoidance
  • velocity matching
  • flock centering
• The work of Heppner and Grenander on using a "rooster" as attractor of all birds in the flock

Particle swarm optimization (PSO):

• developed by Kennedy & Eberhart,

• first published in 1995, and

• with an exponential increase in the number of publications since then.

What is PSO?

• a simple, computationally efficient optimization method
• population-based, stochastic search
• individuals follow very simple behaviors:
  • emulate the success of neighboring individuals,
  • but also bias towards own experience of success
• emergent behavior: discovery of optimal regions within a high dimensional search space

• Original PSO is a simplified social model of determining nearest neighbors and velocity matching
• Initial objective: to simulate the graceful, unpredictable choreography of collision-proof birds in a flock
• Randomly initializes positions of birds
• At each iteration, each individual determines its nearest neighbor and replaces its velocity with that of its neighbor
• This resulted in synchronous movement of the flock, but the flock settled too quickly on a unanimous, unchanging flying direction

(10)

• Random adjustments to velocities (referred to as craziness) prevented individuals from settling too quickly on an unchanging direction

• To further expand the model, the following were added as attractors:
  • personal best
  • neighborhood best
• The result: particle swarm optimization

What are the main components?

• a swarm of particles

• each particle represents a candidate solution

• elements of a particle represent parameters to be optimized

The search process:

• Position updates:

    x_i(t+1) = x_i(t) + v_i(t+1),    x_ij(0) ~ U(x_min,j, x_max,j)

PSO has been developed to solve multi-dimensional optimization problems.

Original PSO was developed to solve optimization problems of the following type:

• boundary constrained
• x ∈ R^{n_x}
• single objective function
• stationary environments

• Social network structures are used to determine best positions/attractors

(Figure: Star Topology and Ring Topology)

• Velocity:
  • step size
  • drives the optimization process
  • reflects experiential knowledge and socially exchanged information

(11)

gbest PSO

• uses the star social network
• velocity update per dimension:

    v_ij(t+1) = v_ij(t) + c1 r1j(t) [y_ij(t) - x_ij(t)] + c2 r2j(t) [ŷ_j(t) - x_ij(t)]

• v_ij(0) = 0 (preferred)
• c1 and c2 are positive acceleration coefficients
• r1j(t), r2j(t) ~ U(0, 1)
• note that a random number is sampled for each dimension

(Figure: social network topologies: (a) Von Neumann, (b) Pyramid, (c) 4 Clusters, (d) Wheel)

• y_i(t) is the personal best position, calculated as (assuming minimization)

    y_i(t+1) = y_i(t)      if f(x_i(t+1)) >= f(y_i(t))
               x_i(t+1)    if f(x_i(t+1)) < f(y_i(t))

• ŷ(t) is the global best position, calculated as

    ŷ(t) ∈ {y_0(t), ..., y_{n_s}(t)} | f(ŷ(t)) = min{f(y_0(t)), ..., f(y_{n_s}(t))}

or (removing memory of best positions)

    f(ŷ(t)) = min{f(x_0(t)), ..., f(x_{n_s}(t))}

where n_s is the number of particles in the swarm

gbest PSO Algorithm:

    Create and initialize an n_x-dimensional swarm, S;
    repeat
        for each particle i = 1, ..., S.n_s do
            if f(S.x_i) < f(S.y_i) then
                S.y_i = S.x_i;
            end
            if f(S.y_i) < f(S.ŷ) then
                S.ŷ = S.y_i;
            end
        end
        for each particle i = 1, ..., S.n_s do
            update the velocity;
            update the position;
        end
    until stopping condition is true;
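The gbest PSO algorithm above can be sketched in Python. This is a minimal, illustrative implementation, not the author's reference code: the sphere function f(x) = Σ x_j² is an assumed example objective, and the inertia weight w (introduced later in these notes; set w = 1 to recover the basic update) is included for stable convergence. All names and parameter values are illustrative.

```python
import random

def sphere(x):
    # assumed example objective: f(x) = sum(x_j^2), minimum 0 at the origin
    return sum(xj * xj for xj in x)

def gbest_pso(f, n_x, n_s=20, w=0.7298, c1=1.49618, c2=1.49618,
              iters=200, x_min=-5.0, x_max=5.0, seed=1):
    rng = random.Random(seed)
    # x_ij(0) ~ U(x_min, x_max); v_ij(0) = 0 (preferred)
    x = [[rng.uniform(x_min, x_max) for _ in range(n_x)] for _ in range(n_s)]
    v = [[0.0] * n_x for _ in range(n_s)]
    y = [xi[:] for xi in x]          # personal best positions
    y_hat = min(y, key=f)[:]         # global best position
    for _ in range(iters):
        for i in range(n_s):         # update personal and global bests
            if f(x[i]) < f(y[i]):
                y[i] = x[i][:]
            if f(y[i]) < f(y_hat):
                y_hat = y[i][:]
        for i in range(n_s):         # velocity and position updates
            for j in range(n_x):
                r1, r2 = rng.random(), rng.random()   # sampled per dimension
                v[i][j] = (w * v[i][j]
                           + c1 * r1 * (y[i][j] - x[i][j])
                           + c2 * r2 * (y_hat[j] - x[i][j]))
                x[i][j] += v[i][j]
    return y_hat

best = gbest_pso(sphere, n_x=5)
```

By construction, f(best) never worsens over iterations, since the personal and global bests only ever replace strictly worse positions.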

(12)

Basic Foundations of Particle Swarm Optimization

local best (lbest) PSO

• uses the ring social network
• velocity update:

    v_ij(t+1) = v_ij(t) + c1 r1j(t) [y_ij(t) - x_ij(t)] + c2 r2j(t) [ŷ_ij(t) - x_ij(t)]

• ŷ_i is the neighborhood best, defined as

    ŷ_i(t+1) ∈ {N_i | f(ŷ_i(t+1)) = min{f(x)}, ∀x ∈ N_i}

with the neighborhood defined as

    N_i = {y_{i-n_Ni}(t), y_{i-n_Ni+1}(t), ..., y_{i-1}(t), y_i(t), y_{i+1}(t), ..., y_{i+n_Ni}(t)}

where n_Ni is the neighborhood size

• neighborhoods based on particle indices, not spatial information
• neighborhoods overlap to facilitate information exchange

lbest PSO Algorithm:

    Create and initialize an n_x-dimensional swarm, S;
    repeat
        for each particle i = 1, ..., S.n_s do
            if f(S.x_i) < f(S.y_i) then
                S.y_i = S.x_i;
            end
            for each particle î with particle i in its neighborhood do
                if f(S.y_i) < f(S.ŷ_î) then
                    S.ŷ_î = S.y_i;
                end
            end
        end
        for each particle i = 1, ..., S.n_s do
            update the velocity and position;
        end
    until stopping condition is true;

Velocity components:

• previous velocity, v_i(t)
  • inertia component
  • memory of previous flight direction
  • prevents particle from drastically changing direction
• cognitive component, c1 r1 (y_i - x_i)
  • quantifies performance relative to past performances
  • memory of previous best position
  • "nostalgia"
• social component, c2 r2 (ŷ_i - x_i)
  • quantifies performance relative to neighbors
  • "envy"

(Figure: geometric illustration of the velocity and position updates at time steps t and t+1, showing the inertia, cognitive, and social velocity components)
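The index-based ring neighborhood can be illustrated with a small helper. A sketch; the function name and the radius parameter (particles on each side of i) are illustrative:

```python
# Ring-topology neighborhood best for lbest PSO: neighborhoods are defined on
# particle *indices* (wrapping around), not on spatial distance. The particles
# i-radius .. i+radius, modulo the swarm size, form N_i.

def neighborhood_best(pbest_fitness, i, radius=1):
    n_s = len(pbest_fitness)
    idx = [(i + k) % n_s for k in range(-radius, radius + 1)]
    # return the index of the neighbor with the best (lowest) personal best fitness
    return min(idx, key=lambda m: pbest_fitness[m])

fit = [3.0, 1.0, 4.0, 0.5, 9.0, 2.0]
# particle 0's neighbors under radius 1 wrap around to {5, 0, 1}
print(neighborhood_best(fit, 0))  # 1
print(neighborhood_best(fit, 4))  # 3
```

Because adjacent neighborhoods share members (particle 2's neighborhood {1, 2, 3} overlaps particle 0's {5, 0, 1} through particle 1), good positions spread gradually around the ring.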

(13)

Velocity Clamping

• the problem: velocity quickly explodes to large values
• a solution: clamp velocities to a maximum V_max,j per dimension,

    v'_ij(t+1) = v_ij(t+1)                  if |v_ij(t+1)| < V_max,j
                 sign(v_ij(t+1)) V_max,j    otherwise

• controls the global exploration of particles
• does not confine the positions, only the step sizes
• V_max,j is problem-dependent
• dynamically changing V_max,j when gbest does not improve over T iterations:

    V_max,j(t+1) = β V_max,j(t)    if f(ŷ(t)) >= f(ŷ(t - t')), ∀t' = 1, ..., T
                   V_max,j(t)      otherwise

where β decreases from 1.0 to 0.01

• exponentially decaying V_max:

    V_max,j(t+1) = (1 - (t/n_t)^α) V_max,j(t)

• Issues with velocity clamping:
  • dimensions with ranges smaller than V_max will never be clamped
  • changes search direction: use normalized clamping

(Figure: change in search direction due to velocity clamping)
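The per-dimension clamping rule above is straightforward to implement. A minimal sketch; the function name is illustrative:

```python
def clamp_velocity(v, v_max):
    # v'_ij = v_ij if |v_ij| < V_max,j, else sign(v_ij) * V_max,j
    clamped = []
    for vj, vmaxj in zip(v, v_max):
        if abs(vj) < vmaxj:
            clamped.append(vj)
        else:
            clamped.append(vmaxj if vj > 0 else -vmaxj)
    return clamped

# only the step size is limited; the position itself is never confined
print(clamp_velocity([0.5, -7.0, 3.0], [1.0, 2.0, 2.5]))  # [0.5, -2.0, 2.5]
```

Note how clamping one dimension but not another changes the direction of the resulting step, which is exactly the issue the figure illustrates.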

Inertia Weight

• to control exploration and exploitation
• controls the momentum
• velocity update changes to

    v_ij(t+1) = w v_ij(t) + c1 r1j(t) [y_ij(t) - x_ij(t)] + c2 r2j(t) [ŷ_j(t) - x_ij(t)]

• for w >= 1:
  • velocities increase over time
  • swarm diverges
  • particles fail to change direction towards more promising regions
• for 0 < w < 1:
  • particles decelerate, depending on c1 and c2
• exploration-exploitation trade-off:
  • large values favor exploration
  • small values promote exploitation
• problem-dependent

(14)

Dynamically changing inertia weights

• random: w ~ N(0.72, σ)
• linear decreasing:

    w(t) = (w(0) - w(n_t)) (n_t - t)/n_t + w(n_t)

• non-linear decreasing:

    w(t+1) = α w(t'),    with w(0) = 1.4

• based on relative improvement:

    w_i(t+1) = w(0) + (w(n_t) - w(0)) (e^{m_i(t)} - 1)/(e^{m_i(t)} + 1)

where the relative improvement, m_i, is estimated as

    m_i(t) = (f(ŷ_i(t)) - f(x_i(t))) / (f(ŷ_i(t)) + f(x_i(t)))
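The linear decreasing schedule is the most commonly used of these. A minimal sketch; the start/end values 0.9 and 0.4 are assumed, commonly cited defaults, not values taken from these slides:

```python
def linear_inertia(t, n_t, w_start=0.9, w_end=0.4):
    # w(t) = (w(0) - w(n_t)) * (n_t - t) / n_t + w(n_t)
    return (w_start - w_end) * (n_t - t) / n_t + w_end

# w decreases from w(0) at t = 0 (exploration) to w(n_t) at t = n_t (exploitation)
print(round(linear_inertia(0, 100), 6))    # 0.9
print(round(linear_inertia(100, 100), 6))  # 0.4
```

Starting large and ending small realizes the exploration-exploitation trade-off described above: wide early steps, fine late steps.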

Constriction Coefficient

• to ensure convergence to a stable point without the need for velocity clamping:

    v_ij(t+1) = χ [v_ij(t) + φ1 (y_ij(t) - x_ij(t)) + φ2 (ŷ_j(t) - x_ij(t))]

where

    χ = 2κ / |2 - φ - sqrt(φ(φ - 4))|

with

    φ = φ1 + φ2,    φ1 = c1 r1,    φ2 = c2 r2

• if φ >= 4 and κ ∈ [0, 1], then the swarm is guaranteed to converge
• χ ∈ [0, 1]
• κ controls exploration-exploitation:
  • κ ≈ 0: fast convergence, exploitation
  • κ ≈ 1: slow convergence, exploration
• effectively equivalent to inertia weight for specific χ: w = χ, φ1 = χ c1 r1, φ2 = χ c2 r2
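The value of χ follows directly from κ and φ. A minimal sketch of the formula above (the function name is illustrative):

```python
import math

def constriction(kappa, phi):
    # chi = 2*kappa / |2 - phi - sqrt(phi*(phi - 4))|, defined for phi >= 4
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * (phi - 4.0)))

chi = constriction(kappa=1.0, phi=4.1)
print(round(chi, 4))  # 0.7298
```

With c1 = c2 = 2.05 (so φ = 4.1) and κ = 1, χ ≈ 0.7298; via the equivalence noted above, this corresponds to the widely used inertia-weight setting w ≈ 0.7298 with χ·c_i ≈ 1.4962.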

Iteration Strategies

• Synchronous iteration strategy
  • personal best and neighborhood bests updated separately from position and velocity vectors
  • slower feedback of new best positions
• Asynchronous iteration strategy
  • new best positions updated after each particle position update
  • immediate feedback of new best positions
  • lends itself well to parallel implementation
• Which strategy is best to use is very problem-dependent

(15)

Basic Foundations of Particle Swarm Optimization

Iteration Strategies (cont)

Synchronous Iteration Strategy:

    Create and initialize the swarm;
    repeat
        for each particle do
            Evaluate particle's fitness;
            Update particle's personal best position;
            Update particle's neighborhood best position;
        end
        for each particle do
            Update particle's velocity;
            Update particle's position;
        end
    until stopping condition is true;

Asynchronous Iteration Strategy:

    Create and initialize the swarm;
    repeat
        for each particle do
            Update the particle's velocity;
            Update the particle's position;
            Evaluate particle's fitness;
            Update the particle's personal best position;
            Update the particle's neighborhood best position;
        end
    until stopping condition is true;

Acceleration Coefficients

• low c1 and c2:
  • smooth particle trajectories
• high c1 and c2:
  • more acceleration, abrupt movements
• problem-dependent
• adaptive acceleration coefficients:

    c1(t) = (c1,min - c1,max) t/n_t + c1,max
    c2(t) = (c2,max - c2,min) t/n_t + c2,min
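The adaptive schedules above decrease c1 (cognitive) while increasing c2 (social) over the run. A minimal sketch; the bounds 0.5 and 2.5 are assumed, illustrative values, not taken from these slides:

```python
def adaptive_coefficients(t, n_t, c1_min=0.5, c1_max=2.5, c2_min=0.5, c2_max=2.5):
    # c1(t) = (c1,min - c1,max) * t/n_t + c1,max  (decreases over time)
    # c2(t) = (c2,max - c2,min) * t/n_t + c2,min  (increases over time)
    c1 = (c1_min - c1_max) * t / n_t + c1_max
    c2 = (c2_max - c2_min) * t / n_t + c2_min
    return c1, c2

print(adaptive_coefficients(0, 100))    # (2.5, 0.5)
print(adaptive_coefficients(100, 100))  # (0.5, 2.5)
```

The effect is to emphasize individual (cognitive) search early and social attraction toward the best positions late in the run.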

Basic Foundations of Particle Swarm Optimization

Acceleration Coefficients (cont)

• c1 = c2 = 0?
• c1 > 0, c2 = 0:
  • particles are independent hill-climbers
  • local search by each particle
  • cognitive-only PSO
• c1 = 0, c2 > 0:
  • swarm is one stochastic hill-climber
  • social-only PSO
• c1 = c2 > 0:
  • particles are attracted towards the average of y_i and ŷ_i
• c2 > c1:
  • more beneficial for unimodal problems
• c1 > c2:
  • more beneficial for multimodal problems

Stopping Conditions

• Terminate when a maximum number of iterations, or function evaluations (FEs), has been exceeded
• Terminate when an acceptable solution has been found, i.e. when (assuming minimization)

    f(x_i) <= |f(x*) - ε|

where x* is an optimum and ε the acceptable error

• Terminate when no improvement is observed over a number of iterations

(16)

Basic Foundations of Particle Swarm Optimization

Stopping Conditions (cont)

• Terminate when the normalized swarm radius is close to zero, i.e.

    R_norm = R_max / diameter(S(0)) ≈ 0

where

    R_max = ||x_m - ŷ||,    m = 1, ..., n_s

with

    ||x_m - ŷ|| >= ||x_i - ŷ||, ∀i = 1, ..., n_s

• Terminate when the objective function slope is approximately zero, i.e.

    f'(t) = (f(ŷ(t)) - f(ŷ(t-1))) / f(ŷ(t)) ≈ 0
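The normalized-radius test can be sketched directly from the definition above. The swarm positions, the diameter value, and the function name are illustrative assumptions:

```python
import math

def normalized_radius(positions, y_hat, initial_diameter):
    # R_norm = R_max / diameter(S(0)), with R_max the largest distance
    # from any particle position x_m to the global best y_hat
    r_max = max(math.dist(xm, y_hat) for xm in positions)
    return r_max / initial_diameter

# toy swarm of 3 particles clustered near the global best at the origin;
# initial_diameter = 10.0 is an assumed diameter of the initial swarm S(0)
xs = [[0.01, 0.0], [0.0, 0.02], [-0.01, 0.01]]
radius = normalized_radius(xs, [0.0, 0.0], initial_diameter=10.0)
print(radius < 1e-2)  # True: the swarm has collapsed, so terminate
```

A small R_norm means all particles sit close to ŷ, so further iterations can only refine, not explore.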

Simplified particle trajectories:

• no stochastic component
• single, one-dimensional particle
• using w
• personal best and global best are fixed: y = 1.0, ŷ = 0.0

Example trajectories:

• Convergence to an equilibrium
• Cyclic behavior
• Divergent behavior

Convergence Behavior of Particle Swarm Optimization

About Convergence

• Under certain conditions, particles are guaranteed to converge to an equilibrium
• Particles will converge to

    (c1 y + c2 ŷ) / (c1 + c2)

• This is not necessarily even a local minimum
• It has been proven that standard PSO is not a local minimizer

Potential dangerous property:

• when x_i = y_i = ŷ
• then the velocity update depends only on w v_i
• if this condition persists for a number of iterations, w v_i → 0

(Figure: particle trajectory, (a) time domain and (b) phase space, for w = 0.5 and φ1 = φ2 = 1.4)

(17)

(Figure: particle trajectory, (a) time domain and (b) phase space, for w = 1.0 and φ1 = φ2 = 1.999)

• What do we mean by the term convergence?
• Convergence map for values of w and φ = φ1 + φ2, where φ1 = c1 r1, φ2 = c2 r2
• First convergence condition on values of w, c1 and c2:

    1 > w > (1/2)(φ1 + φ2) - 1 >= 0

(Figure: convergence map for values of w and φ = φ1 + φ2)

(Figure: particle trajectory, (a) time domain and (b) phase space, for w = 0.7 and φ1 = φ2 = 1.9)

w = 1.0 and c1 = c2 = 2.0:

• violates the convergence condition
• for w = 1.0, c1 + c2 < 4.0 is required to satisfy the condition

(Figure: stochastic trajectory for w = 1.0 and c1 = c2 = 2.0)

(18)

Convergence Behavior of Particle Swarm Optimization

Stochastic Trajectories (cont)

w = 0.9 and c1 = c2 = 2.0:

• violates the convergence condition
• for w = 0.9, c1 + c2 < 3.8 is required to satisfy the condition

(Figure: stochastic trajectory for w = 0.9 and c1 = c2 = 2.0)

What is happening here?

• since 0 < φ1 + φ2 < 4, with r1, r2 ~ U(0, 1),
• prob(φ1 + φ2 < 3.8) = 3.8/4 = 0.95

A more recent and also the most accurate convergence condition:

    c1 + c2 < 24(1 - w²)/(7 - 5w),    for w ∈ [-1, 1]

(Figure: the convergence region in (c1 + c2, w) space)

Empirically shown to be the best matching condition
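The condition above is easy to check for a candidate parameter set. A minimal sketch (the function name is illustrative); note how it agrees with the examples on these slides:

```python
def converges(w, c1, c2):
    # c1 + c2 < 24 * (1 - w^2) / (7 - 5*w), for w in [-1, 1]
    return -1.0 <= w <= 1.0 and (c1 + c2) < 24.0 * (1.0 - w * w) / (7.0 - 5.0 * w)

print(converges(0.7, 1.4, 1.4))  # True: satisfies the condition
print(converges(1.0, 2.0, 2.0))  # False: violates it
print(converges(0.9, 2.0, 2.0))  # False: violates it
```

For w = 0.7 the bound evaluates to about 3.5, so c1 + c2 = 2.8 is admissible, whereas both w = 0.9 and w = 1.0 reject c1 + c2 = 4.0.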

Convergence Behavior of Particle Swarm Optimization

Stochastic Trajectories (cont)

w = 0.7 and c1 = c2 = 1.4:

• satisfies the convergence condition

(Figure: stochastic trajectory for w = 0.7 and c1 = c2 = 1.4)

A rough classification of PSO algorithms:

• Social-based algorithms
  • Different social network structures
  • Different information sharing strategies
• Hybrid algorithms
• Sub-swarm-based algorithms
• Memetic algorithms
• Multi-start algorithms
• Heterogeneous algorithms
• Self-adaptive algorithms

(19)

Particle Swarm Optimization Algorithms

Social-Based Algorithms

Social-based algorithms are algorithms that use different

• neighborhood topologies, and/or
• information sharing approaches

Example algorithms:

• gbest PSO versus lbest PSO versus Von Neumann PSO
• spatial neighborhoods PSO
• fitness-based spatial neighborhoods PSO
• growing neighborhoods PSO
• hypercube neighborhood structure PSO
• fully informed PSO
• barebones PSO

Fully Informed PSO

• Velocity equation is changed such that each particle is influenced by the successes of all its neighbors, and not by the performance of only one individual
• Each particle in the neighborhood, N_i, of particle i is regarded equally:

    v_i(t+1) = χ ( v_i(t) + Σ_{m=1}^{n_Ni} r(t)(y_m(t) - x_i(t)) / n_Ni )

where n_Ni = |N_i| and r(t) ~ U(0, c1 + c2)^{n_x}
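The equally weighted FIPS velocity update above can be sketched as follows. This is an illustrative sketch, not reference code: the function name is assumed, and χ and c = c1 + c2 default to the usual constriction values:

```python
import random

def fips_velocity(v_i, x_i, neighbor_pbests, chi=0.7298, c=2.9922, seed=0):
    # v_i(t+1) = chi * ( v_i(t) + sum_m r(t) * (y_m(t) - x_i(t)) / n_Ni )
    # All n_Ni neighbor personal bests y_m contribute equally; a fresh
    # r ~ U(0, c1 + c2) is drawn per neighbor and per dimension.
    rng = random.Random(seed)
    n = len(neighbor_pbests)
    new_v = []
    for j in range(len(x_i)):
        s = sum(rng.uniform(0.0, c) * (y[j] - x_i[j]) for y in neighbor_pbests) / n
        new_v.append(chi * (v_i[j] + s))
    return new_v

v = fips_velocity([0.0, 0.0], [1.0, 1.0], [[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])
```

Unlike gbest/lbest PSO, no single best attractor appears: every neighbor's personal best pulls on the particle with equal weight.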

Particle Swarm

-.;;;; timization

Algorithms

Social-Based Algortihms: Standard PSO •

a Velocity update:

Vij( t + 1) ='°'\tij( t) +

C1 f1j(

t)[Yij( t) - Xij(

t)]

+

~f2j(

t)(Yij(

t) -

Xij(

t)]

where

Y;

is the neighborhood best

• Position update

X;(t + 1) = X;(t) + V;(t + 1)
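A minimal vectorized sketch of these two update rules (the default w = 0.7 and c1 = c2 = 1.4 mirror the parameter choices used earlier in the slides; array shapes and names are my assumptions):

```python
import numpy as np

def pso_update(x, v, y, y_hat, w=0.7, c1=1.4, c2=1.4,
               rng=np.random.default_rng()):
    """One synchronous step of the standard PSO update.
    x, v: current positions and velocities (n_s x n_x arrays);
    y: personal best positions; y_hat: neighborhood best per particle."""
    r1 = rng.random(x.shape)                       # r_1j(t) ~ U(0,1)
    r2 = rng.random(x.shape)                       # r_2j(t) ~ U(0,1)
    v_new = w * v + c1 * r1 * (y - x) + c2 * r2 * (y_hat - x)
    return x + v_new, v_new
```

Whether ŷ is the global best (gbest) or a ring/Von Neumann neighborhood best (lbest) only changes how `y_hat` is computed, not this update.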

• A weight is assigned to the contribution of each particle based on the performance of that particle:

v_i(t+1) = χ ( v_i(t) + Σ_{m=1}^{n_Ni} φ_m p_m(t) / f(x_m(t)) )

where φ_m ~ U(0, (c1 + c2)/n_Ni) and

p_m(t) = (φ_1 y_m(t) + φ_2 ŷ_m(t)) / (φ_1 + φ_2)


Engelbrecht (University of Pretoria), Computational Swarm Intelligence, Swarm Intelligence 2016, slides 55–56/285


Social-Based Algorithms: Barebones PSO

• Formal proofs have shown that each particle converges to a point that is a weighted average between the personal best and neighborhood best positions:

(y_ij(t) + ŷ_ij(t)) / 2

• This behavior supports Kennedy's proposal to replace the entire velocity by

v_ij(t+1) ~ N( (y_ij(t) + ŷ_ij(t)) / 2, σ )

where σ = |y_ij(t) − ŷ_ij(t)|

Hybrid Algorithms

Hybrid algorithms combine PSO with components of algorithms from other paradigms such as evolutionary algorithms:
• Selection-based PSO
• Reproduction in PSO
• Mutation in PSO
• Differential evolution based PSO

• The position update is simply

x_ij(t+1) = v_ij(t+1)

• Alternative formulation:

v_ij(t+1) = y_ij(t)                          if U(0,1) < 0.5
            N( (y_ij(t) + ŷ_ij(t))/2, σ )    otherwise

• There is a 50% chance that the j-th dimension of the particle changes to the corresponding personal best position
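The barebones sampling rule above can be sketched per component as follows (the `exploit` flag implements the 50% jump to the personal best described above; function and argument names are mine):

```python
import numpy as np

def barebones_step(y_ij, yhat_ij, exploit=True, rng=np.random.default_rng()):
    """Barebones PSO: the velocity/position pair is replaced by sampling a
    Gaussian centred on the average of the personal and neighborhood bests,
    with deviation |y_ij - yhat_ij|.  With exploit=True, half the samples
    jump straight to the personal best (the 'exploiting' variant)."""
    mu = (y_ij + yhat_ij) / 2.0
    sigma = abs(y_ij - yhat_ij)
    if exploit and rng.random() < 0.5:
        return y_ij                      # 50% chance: copy personal best
    return rng.normal(mu, sigma)         # otherwise sample N(mu, sigma)
```

Note that once y_ij = ŷ_ij, the deviation collapses to zero and the component stops moving, which is exactly the convergence point stated above.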

Hybrid Algorithms: Selection-Based PSO

Calculate the fitness of all particles;
for each particle i = 1, ..., n_s do
    Randomly select n_ts particles;
    Score the performance of particle i against the n_ts randomly selected particles;
end
Sort the swarm based on performance scores;
Replace the worst half of the swarm with the top half, without changing the personal best positions;

• What are the problems with this?
• Any advantages?
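One possible reading of this selection procedure in code (a sketch only: the tournament size, stable tie-breaking, and even-swarm assumption are my choices; note that personal bests are preserved, which bears on the questions above):

```python
import numpy as np

def tournament_replace(positions, pbests, fitness, n_ts=3,
                       rng=np.random.default_rng()):
    """Selection step: score each particle by how many of n_ts randomly
    chosen particles it beats (minimization), then overwrite the
    worst-scoring half of the swarm with copies of the best half,
    leaving personal bests untouched."""
    n_s = len(positions)
    scores = np.empty(n_s)
    for i in range(n_s):
        opponents = rng.choice(n_s, size=n_ts, replace=False)
        scores[i] = np.sum(fitness[i] <= fitness[opponents])
    order = np.argsort(-scores, kind="stable")   # best scores first
    half = n_s // 2
    top, bottom = order[:half], order[-half:]
    positions[bottom] = positions[top]           # copy winners over losers
    return positions, pbests
```

Keeping the old personal bests is what lets the copied particles still be pulled back toward previously discovered regions, at the cost of an immediate loss of positional diversity.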


Using arithmetic crossover:
• An arithmetic crossover operator to produce offspring from two randomly selected particles:

x_i1(t+1) = r(t) x_i1(t) + (1 − r(t)) x_i2(t)
x_i2(t+1) = r(t) x_i2(t) + (1 − r(t)) x_i1(t)

with the corresponding velocities,

v_i1(t+1) = (v_i1(t) + v_i2(t)) / ||v_i1(t) + v_i2(t)|| × ||v_i1(t)||
v_i2(t+1) = (v_i1(t) + v_i2(t)) / ||v_i1(t) + v_i2(t)|| × ||v_i2(t)||

where r(t) ~ U(0,1)^nx
• Disadvantage:

• Parent particles are replaced even if the offspring is worse off in fitness

• If f(x_i1(t+1)) > f(x_i1(t)) (assuming a minimization problem), replacement of the personal best with x_i1(t+1) loses important information about previous personal best positions
• Solutions:
  • Replace a parent with its offspring only if the fitness of the offspring is better than that of the parent
  • Personal best position of an offspring is initialized to its current position:

y_i1(t+1) = x_i1(t+1)

• Particles are selected for breeding at a user-specified breeding probability

• Random selection of parents prevents the best particles from dominating the breeding process

• Breeding process is done for each iteration after the velocity and position updates have been done
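A sketch of one breeding step under the arithmetic crossover above (the normalized-sum velocity rule follows the slide; the function and variable names are mine):

```python
import numpy as np

def breed(x1, x2, v1, v2, rng=np.random.default_rng()):
    """Arithmetic crossover for breeding PSO: offspring positions are a
    random convex blend of the parents, and each offspring velocity keeps
    that parent's speed but points along the normalized sum of the
    parent velocities."""
    r = rng.random(x1.shape)                       # r(t) ~ U(0,1)^nx
    o1 = r * x1 + (1.0 - r) * x2
    o2 = r * x2 + (1.0 - r) * x1
    direction = (v1 + v2) / np.linalg.norm(v1 + v2)
    w1 = direction * np.linalg.norm(v1)            # keep parent 1's speed
    w2 = direction * np.linalg.norm(v2)            # keep parent 2's speed
    return o1, o2, w1, w2
```

Because each component of the blend is convex, the offspring always lie inside the axis-aligned box spanned by the two parents, which is precisely why crossover alone cannot escape a converged cluster.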

• Mutate the global best position:

ŷ(t+1) = ŷ'(t+1) + η' N(0, 1)

where ŷ'(t+1) represents the unmutated global best position, and η' is referred to as a learning parameter

• Mutate the components of position vectors: for each j, if U(0,1) < p_m, then component x_ij(t+1) is mutated using

x_ij(t+1) = x_ij(t+1) + N(0, σ) x_ij(t+1)

where σ = 0.1 (x_max,j − x_min,j)
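The two mutation operators above can be sketched as follows (p_m = 0.1 and η' = 0.1 are illustrative defaults of mine; the 0.1 domain-width scaling follows the slide):

```python
import numpy as np

def mutate_gbest(y_hat, eta=0.1, rng=np.random.default_rng()):
    """Gaussian mutation of the global best position; eta plays the role
    of the learning parameter eta' on the slide."""
    return y_hat + eta * rng.normal(size=y_hat.shape)

def mutate_positions(x, x_min, x_max, p_m=0.1, rng=np.random.default_rng()):
    """Component-wise mutation: with probability p_m each component x_ij is
    perturbed by N(0, sigma) * x_ij, with sigma = 0.1 (x_max,j - x_min,j)."""
    sigma = 0.1 * (x_max - x_min)                  # per-dimension deviation
    mask = rng.random(x.shape) < p_m               # which components mutate
    noise = rng.normal(0.0, 1.0, size=x.shape) * sigma
    return np.where(mask, x + noise * x, x)
```

Because the perturbation is multiplied by x_ij itself, components near zero are barely disturbed; this is a property of the slide's formula, not of the sketch.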


• Each x_ij can have its own deviation:

σ_ij(t) = σ_ij(t−1) e^(τ' N(0,1) + τ N_j(0,1))

with

τ' = 1 / √(2 n_x)
τ = 1 / √(2 √n_x)

Sub-swarm-Based Algorithms: Predator-Prey PSO

• Competition introduced to balance exploration and exploitation
• Uses a second swarm of predator particles:
  • Prey particles scatter (explore) by being repelled by the presence of predator particles
  • Silva et al. use only one predator to pursue the global best prey particle
• The velocity update for the predator particle is defined as

v_p(t+1) = r (ŷ(t) − x_p(t))

where v_p and x_p are respectively the velocity and position vectors of the predator particle, p, and r ~ U(0, v_max,p)^nx
• v_max,p controls the speed at which the predator catches the best prey

Sub-swarm-based algorithms utilize multiple swarms and/or sub-swarms:
• Cooperative Split PSO (see slide 210)
• Hybrid Cooperative Split PSO
• Predator-Prey PSO
• Life-cycle PSO
• Attractive and Repulsive PSO
• Vector-evaluated PSO (see slide 233)
• NichePSO (see slide 220)
• Multi-swarm PSO

• The prey particles update their velocity using

v_ij(t+1) = w v_ij(t) + c1 r_1j(t)(y_ij(t) − x_ij(t)) + c2 r_2j(t)(ŷ_j(t) − x_ij(t)) + c3 r_3j(t) D(d)

where d is the Euclidean distance between prey i and the predator, r_3j(t) ~ U(0,1), and D(d) = α e^(−βd)
• D(d) quantifies the influence that the predator has on the prey, growing exponentially with proximity
• Position update of prey particles: if U(0,1) < p_f, then the prey velocity update above is used, otherwise the normal velocity update is used
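The predator and prey updates above can be sketched as follows (the α, β, and coefficient defaults are illustrative; the fear-probability test p_f that chooses between this and the normal update is left to the caller):

```python
import numpy as np

def prey_velocity(x, v, y, y_hat, x_pred, w=0.7, c1=1.4, c2=1.4, c3=1.4,
                  alpha=1.0, beta=1.0, rng=np.random.default_rng()):
    """Prey velocity with a repulsion term D(d) = alpha * exp(-beta * d)
    that grows as the predator gets closer."""
    d = np.linalg.norm(x - x_pred)                 # distance to the predator
    r1, r2, r3 = (rng.random(x.shape) for _ in range(3))
    return (w * v + c1 * r1 * (y - x) + c2 * r2 * (y_hat - x)
            + c3 * r3 * alpha * np.exp(-beta * d))

def predator_velocity(x_pred, y_hat, v_max_p=1.0, rng=np.random.default_rng()):
    """Predator chases the global best prey: v_p = r * (y_hat - x_p)."""
    r = rng.uniform(0.0, v_max_p, size=x_pred.shape)
    return r * (y_hat - x_pred)
```

When the predator sits far from a prey particle, the repulsion term is negligible and the prey behaves like a standard particle; as it closes in, the exponential term dominates and scatters the prey.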


Particle Swarm Optimization Algorithms
Sub-swarm-Based Algorithms: Life-Cycle PSO

Initialize a population of individuals;
repeat
    for all individuals do
        Evaluate fitness;
        if fitness did not improve then
            Switch to next phase;
        end
    end
    for all PSO particles do
        Calculate new velocity vectors;
        Update positions;
    end
    for all GA individuals do
        Perform reproduction;
        Mutate;
        Select new population;
    end
    for all Hill-climbers do
        Find possible new neighboring solution;
        Evaluate fitness of new solution;
        Move to new solution with specified probability;
    end
until stopping condition is true;

Particle Swarm Optimization Algorithms
Memetic Algorithms

Memetic PSO algorithms incorporate some form of local search:
• Guaranteed Convergence PSO
• PSO with Gradient Descent

Memetic Algorithms: Guaranteed Convergence PSO

• Forces the global best position to change when x_τ = y_τ = ŷ, where τ is the index of the global best particle
• The global best particle is forced to search in a bounding box around the current position for a better position
• The position update of the global best particle changes to

x_τj(t+1) = ŷ_j(t) + w v_τj(t) + ρ(t)(1 − 2 r_2j(t))

where ρ(t) is a scaling factor
• The velocity update of the global best particle then has to change to

v_τj(t+1) = −x_τj(t) + ŷ_j(t) + w v_τj(t) + ρ(t)(1 − 2 r_2j(t))

• The scaling factor is defined as

ρ(t+1) = 2ρ(t)     if #successes(t) > ε_s
         0.5ρ(t)   if #failures(t) > ε_f
         ρ(t)      otherwise

where #successes and #failures respectively denote the number of consecutive successes and failures
• A failure is defined as f(ŷ(t)) ≤ f(ŷ(t+1))

• Proven to converge to at least a local minimum
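The ρ(t) schedule above in code (a sketch: the ε_s = 15, ε_f = 5 thresholds are commonly recommended defaults in the GCPSO literature, not stated on this slide; success/failure counting is left to the caller):

```python
def update_rho(rho, successes, failures, eps_s=15, eps_f=5):
    """Scaling-factor schedule for Guaranteed Convergence PSO: double rho
    after more than eps_s consecutive successes (widen the search box),
    halve it after more than eps_f consecutive failures (shrink it),
    otherwise leave it unchanged."""
    if successes > eps_s:
        return 2.0 * rho
    if failures > eps_f:
        return 0.5 * rho
    return rho
```

The counters are mutually exclusive by construction: a success resets the failure count and vice versa, so at most one branch can fire per iteration.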

~ "'"""'"'"•""1"18

.!, l.~.~1~W!'.;.!,t,U~~;.


Particle Swarm Optimization Algorithms
Memetic Algorithms: Gradient-based PSO

Each particle computes two positions in the same direction:

x_i(t+1) = x_i(t) + v_i(t+1)          (1)
x'_i(t+1) = x_i(t) + β v_i(t+1)       (2)

where β ∈ (0, 1). The position in each dimension is then determined as

x_ij(t+1) = arg max { f(x_ij(t+1)) − f(x_ij(t)), [f(x'_ij(t+1)) − f(x_ij(t))] / β }

• Alternative: the gbest position moves in the direction of the negative gradient

Particle Swarm Optimization Algorithms
Multi-Start Algorithms

• A major problem with the basic PSO is a lack of diversity when particles start to converge to the same point
• Multi-start methods have as their main objective to increase diversity, whereby larger parts of the search space are explored
• This is done by continually injecting randomness, or chaos, into the swarm
• Note that continual injection of random positions will cause the swarm never to reach an equilibrium state
• Methods should reduce chaos over time to ensure convergence
• Craziness is the process of randomly reinitializing some particles
• What are the issues in reinitialization?
  • What should be randomized?
  • When should randomization occur?
  • How should it be done?
  • Which members of the swarm will be affected?
  • What should be done with personal best positions of affected particles?
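One concrete set of answers to those reinitialization questions, as a sketch (the 20% worst-fraction criterion, uniform resampling within the bounds, and resetting the affected personal bests are all illustrative choices of mine):

```python
import numpy as np

def reinitialize_worst(x, y, fitness, x_min, x_max, frac=0.2,
                       rng=np.random.default_rng()):
    """One multi-start ('craziness') step: randomly reinitialize the worst
    fraction of the swarm inside the search bounds and reset their
    personal bests to the new random positions."""
    n_s = len(x)
    k = max(1, int(frac * n_s))
    worst = np.argsort(fitness)[-k:]              # minimization: largest f
    x[worst] = rng.uniform(x_min, x_max, size=(k, x.shape[1]))
    y[worst] = x[worst]                           # forget stale personal bests
    return x, y
```

Shrinking `frac` over time is one simple way to reduce the injected chaos and let the swarm eventually reach equilibrium.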


Heterogeneous Particle Swarm Optimization

Particles in standard particle swarm optimization (PSO), and most of its modifications, follow the same behavior:
• particles implement the same velocity and position update rules
• particles therefore exhibit the same search behaviors
• the same exploration and exploitation abilities are achieved

Heterogeneous swarms contain particles that follow different behaviors:
• particles follow different velocity and position update rules
• some particles may explore longer than others, while some may exploit earlier than others
• a better balance of exploration and exploitation can be achieved, provided a pool of different behaviors is used

~ ·~"0ll"'V'U"h>19

~ l~!·,~~~tm;~


[Figure: Heterogeneous PSO architecture — a behavior pool of PSO variants, a behavior selection strategy that selects and updates each particle's behavior, and performance information fed back from the swarm of particles]

Exploration-exploitation finger print determined through diversity profile

Diversity calculated as

V = (1/n_s) Σ_{i=1}^{n_s} √( Σ_{j=1}^{n_x} (x_ij − x̄_j)² )

where the swarm center is

x̄_j = (1/n_s) Σ_{i=1}^{n_s} x_ij

and n_s is the number of particles
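The diversity measure can be computed directly (a minimal sketch, assuming `x` is the n_s × n_x matrix of particle positions):

```python
import numpy as np

def swarm_diversity(x):
    """Average Euclidean distance of the particles to the swarm centre,
    used to trace an algorithm's exploration-exploitation profile."""
    centre = x.mean(axis=0)                        # swarm centre, per dimension
    return np.mean(np.sqrt(np.sum((x - centre) ** 2, axis=1)))
```

Logging this value each iteration produces the diversity profiles compared in the figure below.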


A behavior is defined as both
• the velocity update, and
• the position update
of a particle

Requirements for behaviors in the behavior pool:
• must exhibit different search behaviors
• different exploration-exploitation phases
• that is, different exploration-exploitation finger prints

[Figure: Diversity profiles over 1000 iterations for gbest PSO, cognitive PSO, social PSO, barebones PSO, and modified barebones PSO on (a) Ackley and (b) Quadric]
