Hill-Climbing Search Simulated Annealing Search Local Beam Search Genetic Algorithms

Academic year: 2020

Share "Hill-Climbing Search Simulated Annealing Search Local Beam Search Genetic Algorithms"

Copied!
68
0
0

Loading.... (view fulltext now)

Full text

(1)

Lecture 05

Local Search

Tooba Mehtab

(2)

Outline

Hill-Climbing Search

Simulated Annealing Search

Local Beam Search

Genetic Algorithms

(3)

Local search algorithms keep a single "current" state and move to neighboring states in order to try to improve it.

The solution path need not be maintained; hence, the search is "local".

Local search is suitable for problems in which the path is not important; the goal state itself is the solution.

It is an optimization search.

(4)

Classical search vs. local search

Classical search:

• Systematic exploration of the search space.

• Keeps one or more paths in memory.

• Records which alternatives have been explored at each point along the path.

• The path to the goal is a solution to the problem.

Local search:

• In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution.

• State space = set of "complete" configurations.

• Find a configuration satisfying constraints, or find the best state according to some objective function h(s); e.g., in n-queens, h(s) = number of attacking queens. In such cases, we can use local search algorithms.

(5)

Put n queens on an n × n board with no two queens on the same row, column, or diagonal.

In the 8-queens problem, what matters is the final configuration of queens, not the order in which they are added.

(6)

Key idea:

1. Select (random) initial state (generate an initial guess).

2. Make local modification to improve current state

(evaluate current state and move to other states).

3. Repeat Step 2 until goal state found (or out of time).

(7)

Advantages:

• Use very little memory – usually a constant amount.

• Can often find reasonable solutions in large or infinite state spaces (e.g., continuous), for which systematic search is unsuitable.

Drawback:

• Local search can get stuck in local maxima and fail to find the optimal solution.

(8)

State-Space Landscape

• A state space landscape: is a graph of states associated with their costs.

• State-space landscape

– Location (defined by state)

– Elevation (defined by the value of the heuristic cost function or objective function)

– If elevation = cost, aim to find the lowest valley (a global minimum)

– If elevation = objective function, find the highest peak (a global maximum)

A complete local search algorithm always finds a goal if one exists.

(9)

Global optimum: a solution that is better than all other solutions, or no worse than any other solution.

Local optimum: a solution that is better than nearby solutions. A local optimum is not necessarily a global one.

(10)

A local max/min is over a small area. For instance, if a point is lower than the nearest points on its left and right, then it is a local min. There can be many local maxima and minima over an entire graph.

A global max/min is the highest/lowest point on the entire graph. There can be only ONE global max and/or min on a graph, and there may not be one at all.

(11)
(12)
(13)

Main Idea: Keep a single current node and move to a neighboring state to improve it.

• Uses a loop that continuously moves in the direction of increasing value (uphill):

– Choose the best successor, choose randomly if there is more than one.

– Terminate when a peak reached where no neighbor has a higher value.

• It is also called greedy local search, or steepest ascent/descent.

(14)

“Like climbing Everest in thick fog with amnesia”

Only record the state and its evaluation instead of maintaining a search tree.

function HILL-CLIMBING(problem) returns a state that is a local maximum
    inputs: problem, a problem
    local variables: current, a node; neighbor, a node

    current ← MAKE-NODE(INITIAL-STATE[problem])
    loop do
        neighbor ← a highest-valued successor of current
        if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
        current ← neighbor
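The loop above can be sketched in Python. Here `neighbors` and `value` are problem-specific placeholders (assumptions for illustration), not part of the slide's pseudocode; ties between equally good successors are broken arbitrarily by `max`.

```python
def hill_climbing(initial, neighbors, value):
    """Steepest-ascent hill climbing: move to the best neighbor until
    no neighbor has a higher value (ties broken arbitrarily here)."""
    current = initial
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current  # peak (or plateau edge): a local maximum
        current = best

# Toy objective (an assumption for illustration): maximize -(x - 3)^2
# over the integers, stepping left or right by 1.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climbing(0, step, f))  # → 3
```

Because the objective here has a single peak, the climb always ends at the global maximum; on a bumpy landscape the same loop would stop at whichever local maximum it reaches first.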

(15)

Hill climbing attempts to find a better solution by incrementally changing a single element of the solution.

[Figure: state-space landscape showing the current state, plateaux, and a shoulder from which it is possible to make progress]

(16)

Local maxima: a local maximum is a peak that is higher than each of its neighboring states, but lower than the global maximum. Hill-climbing algorithms that reach the vicinity of a local maximum will be drawn upwards towards the peak, but will then be stuck with nowhere else to go.

Plateaux: a plateau is an area of the state space landscape where the evaluation function is flat. It can be a flat local maximum, from which no uphill exit exists, or a shoulder, from which it is possible to make progress.

Ridges: ridges result in a sequence of local maxima that is very difficult for greedy algorithms to navigate (the search direction is not towards the top but towards the side).

(17)

[Search-tree figure: root S with children A and B, internal nodes C–K, and leaves L–O including the goal M; each node is labeled with a heuristic value]

Our aim is to find a path from S to M. We associate a heuristic with every node: the straight-line distance from the path's terminating city to the goal city.

(18)–(21)

[Step-by-step hill-climbing trace on the same tree, one move per slide (figure residue; the diagrams duplicate slide 17)]

(22)

[Search-tree figure: root A with children B and F, internal nodes C, D, E, G, and leaves H, I, J, K; nodes are labeled with heuristic values]

From A, find a solution where H and K are final states.

(23)

(23)

[Hill-climbing trace on the same tree (figure residue)]

(24)

(24)

[Same tree; the search gets stuck at G (figure residue)]

G is a local minimum.

Hill climbing is sometimes called greedy local search because it grabs a good neighbor state without thinking ahead about where to go next.

(25)
(26)

Alternative hill climbing

• Stochastic hill climbing

– chooses at random from among the uphill moves (neighbors); the probability of selection can vary with the steepness of the uphill move.

– This usually converges more slowly than steepest ascent, but in some state landscapes it finds better solutions.

• First-choice hill climbing

– implements stochastic hill climbing by generating successors randomly until one is generated that is better than the current state.

– This is a good strategy when a state has many (e.g., thousands) of successors.

• Random-restart hill climbing

– adopts the well-known adage (proverb), "If at first you don't succeed, try, try again." It conducts a series of hill-climbing searches from randomly generated initial states, stopping when a goal is found.

– It is complete with probability approaching 1, for the trivial reason that it will eventually generate a goal state as the initial state.
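Random-restart hill climbing can be sketched as follows. The bumpy objective, the neighbor function, and the number of restarts are illustrative assumptions, not part of the slides.

```python
import math
import random

def random_restart(random_state, neighbors, value, restarts=20):
    """Run hill climbing from several random initial states and keep
    the best local maximum found ('restarts' is an illustrative choice)."""
    best = None
    for _ in range(restarts):
        current = random_state()
        while True:
            nxt = max(neighbors(current), key=value)
            if value(nxt) <= value(current):
                break  # reached a local maximum
            current = nxt
        if best is None or value(current) > value(best):
            best = current
    return best

# A bumpy objective with many local maxima: restarts raise the chance
# of landing in the basin of the global one (assumed for illustration).
f = lambda x: math.sin(x) - 0.01 * (x - 50) ** 2
nbrs = lambda x: [x - 1, x + 1]
print(random_restart(lambda: random.randrange(100), nbrs, f))
```

Each restart ends at some local maximum; the more restarts, the more basins are sampled.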

(27)
(28)

Annealing

What does the term annealing mean? In metallurgy, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, allowing the material to settle into a low-energy state.

(29)

Main Idea: escape local maxima by allowing some "bad" moves but

gradually decrease their frequency.

• Select a neighbor at random.

• If better than current state go there.

• Otherwise, go there with some probability.

• Probability goes down with time (similar to temperature cooling)

(30)

Simulated annealing search

Initialize current to the starting state
For i = 1 to ∞:
    If T(i) = 0, return current
    Let next = a random successor of current
    Let Δ = value(next) − value(current)
    If Δ > 0, then let current = next
    Else, let current = next with probability e^(Δ/T(i))
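A minimal Python sketch of the loop above. The cooling schedule and the toy objective are illustrative assumptions; any schedule that decreases to zero will terminate the loop.

```python
import itertools
import math
import random

def simulated_annealing(initial, successor, value, schedule):
    """Accept every uphill move; accept a downhill move with
    probability e^(delta/T), where T = schedule(t) cools over time."""
    current = initial
    for t in itertools.count(1):
        T = schedule(t)
        if T <= 0:
            return current
        nxt = successor(current)
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt

# Toy setup (assumptions for illustration): maximize -(x - 3)^2 with a
# linear cooling schedule that reaches zero after 1000 steps.
f = lambda x: -(x - 3) ** 2
move = lambda x: x + random.choice([-1, 1])
cool = lambda t: max(0.0, 1.0 - 0.001 * t)
result = simulated_annealing(0, move, f, cool)
print(result)
```

Early on, when T is high, bad moves are accepted fairly often; as T shrinks, the acceptance probability e^(Δ/T) collapses and the search behaves like hill climbing.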

(31)

Simulated annealing search

One can prove: if the temperature decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching one.

However:

– This usually takes impractically long.

– The more downhill steps you need to escape a local optimum, the less likely it is that you will make all of them in a row.

(32)
(33)

Local beam search

Main Idea: Keep track of k states rather than just one. Start with k randomly generated states.

At each iteration, all the successors of all k states are generated.

If any one is a goal state, stop; else select the k best successors from the complete list and repeat.

Drawback: the k states tend to regroup very quickly in the same region – lack of diversity.

Is this the same as running k greedy searches in parallel?

(34)

Local beam search

• Instead of keeping only one node in memory

– Keep track of k states

– Start with k randomly generated states

– For each step, generate the successors of all k states

– If any one is a goal, the algorithm halts

(35)

Local beam search

• It may look like running k searches in parallel rather than sequentially, but

– in random-restart search, each search runs independently of the others;

– in local beam search, useful information is passed among the parallel search threads.

• Quickly abandons unfruitful searches and moves its resources to the promising ones.

• Drawbacks

– Lack of diversity among the k states.

– Can quickly become concentrated in a small region of the search space (an expensive version of hill climbing).

• Alternative: Stochastic beam search

– Instead of choosing the best k from the successors, choose k successors at random, with the probability of choosing a given successor being an increasing function of its value.
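The k-state loop described above can be sketched as follows. The toy goal test, scoring function, and iteration cap are assumptions for illustration.

```python
import random

def local_beam_search(k, random_state, successors, value, is_goal,
                      max_iters=100):
    """Keep the k best states among the successors of all current
    states; stop as soon as a goal state is generated."""
    states = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        pool = [s for state in states for s in successors(state)]
        for s in pool:
            if is_goal(s):
                return s
        states = sorted(pool, key=value, reverse=True)[:k]
    return max(states, key=value)

# Toy problem (assumed for illustration): reach x = 7 on the integer
# line, scoring states by closeness to the goal.
f = lambda x: -abs(x - 7)
succ = lambda x: [x - 1, x + 1]
found = local_beam_search(3, lambda: random.randrange(20), succ, f,
                          lambda x: x == 7)
print(found)  # → 7
```

Because the k states compete for the same top-k slots, the beam concentrates on the best region, which is exactly the lack-of-diversity drawback noted above.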

(36)
(37)

• Formally introduced in the US in the 70s by John Holland.

• GAs emulate ideas from genetics and natural selection and can search potentially large spaces.

• Before we can apply Genetic Algorithm to a problem, we need to answer:

- How is an individual represented?
- What is the fitness function?
- How are individuals selected?
- How do individuals reproduce?

(38)

• Each state or individual is represented as a string over a finite alphabet. It is also called a chromosome, which contains genes.

Encoding example – a chromosome as a binary string of genes:

1001011111 (decodes to the solution 607)

(39)

Each state is rated by an evaluation function called the fitness function. A fitness function should return higher values for better states: if state X has a lower cost than state Y, then Fitness(X) should be greater than Fitness(Y), e.g., Fitness(x) = 1/Cost(x).

(40)

Roulette Wheel Selection

• Sum the fitnesses of all the population members, TF

• Generate a random number m between 0 and TF

• Return the first population member whose fitness, added to the fitnesses of the preceding population members, is greater than or equal to m
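The three steps above can be sketched directly in Python; the fitness values reuse the slide example with fitnesses 1, 2, 3, 1, 3, 5, 1, 2 (TF = 18).

```python
import random

def roulette_select(population, fitness):
    """Pick one individual with probability proportional to its
    fitness (assumes all fitnesses are non-negative)."""
    total = sum(fitness(ind) for ind in population)  # TF
    m = random.uniform(0, total)
    running = 0.0
    for ind in population:
        running += fitness(ind)
        if running >= m:
            return ind
    return population[-1]  # guard against floating-point round-off

# Slide example: eight chromosomes with fitnesses 1 2 3 1 3 5 1 2
# (TF = 18); chromosome 6 (fitness 5) should be picked most often.
pop = list(range(1, 9))
fit = dict(zip(pop, [1, 2, 3, 1, 3, 5, 1, 2]))
counts = {i: 0 for i in pop}
for _ in range(18000):
    counts[roulette_select(pop, fit.__getitem__)] += 1
print(counts)
```

Over many draws the selection frequencies approach fitness/TF, e.g. chromosome 6 is chosen about 5/18 of the time.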

(41)

How are individuals selected?

Roulette Wheel Selection – example with eight chromosomes whose fitnesses are 1, 2, 3, 1, 3, 5, 1, 2 (TF = 18), giving cumulative fitnesses 1, 3, 6, 7, 10, 15, 16, 18:

Rnd[0..18] = 7 → Chromosome 4

Rnd[0..18] = 12 → Chromosome 6

(42)

How do individuals reproduce ?

(43)

Genetic Algorithms – single-point crossover (cut point chosen at random):

Parent1:    1010000000
Parent2:    1001011111
Offspring1: 101|1011111
Offspring2: 100|0000000

With some high probability (the crossover rate), apply crossover to the parents (typical values are 0.8 to 0.95).
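A sketch of single-point crossover on bit strings; the default crossover rate is one of the typical values quoted above, and the worked example fixes the cut after position 3 to reproduce the slide's offspring.

```python
import random

def single_point_crossover(parent1, parent2, rate=0.9):
    """With probability `rate`, cut both parents at a random point and
    swap tails; otherwise the offspring are copies of the parents."""
    if random.random() >= rate:
        return parent1, parent2
    cut = random.randrange(1, len(parent1))
    return (parent1[:cut] + parent2[cut:],
            parent2[:cut] + parent1[cut:])

# Reproducing the slide's example with the cut fixed after position 3:
p1, p2 = "1010000000", "1001011111"
o1, o2 = p1[:3] + p2[3:], p2[:3] + p1[3:]
print(o1, o2)  # → 1011011111 1000000000
```

Note that crossover only recombines existing bits: the two offspring together contain exactly the bits of the two parents.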

(44)

With some small probability (the mutation rate), flip each bit in the offspring (typical values between 0.001 and 0.1).

Original offspring:
Offspring1: 1011011111
Offspring2: 1010000000

Mutated offspring:
Offspring1: 1011001111
Offspring2: 1000000000
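Bit-flip mutation can be sketched in a few lines; the hand-built example flips one bit of the slide's first offspring to reproduce its mutated string.

```python
import random

def mutate(chromosome, rate=0.01):
    """Flip each bit independently with probability `rate`
    (the mutation rate)."""
    flip = {"0": "1", "1": "0"}
    return "".join(flip[b] if random.random() < rate else b
                   for b in chromosome)

# Flipping the bit at index 5 of the slide's first offspring by hand
# reproduces its mutated string:
original = "1011011111"
mutated = original[:5] + "0" + original[6:]
print(mutated)  # → 1011001111
```

Unlike crossover, mutation can introduce bit values that neither parent carried, which is what keeps diversity in the population.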

(45)

Genetic Algorithm

Example

(46)
(47)
(48)
(49)
(50)
(51)
(52)
(53)

Algorithm:

1. Initialize the population with p individuals at random.

2. For each individual h, compute its fitness.

3. While max fitness < threshold: create a new generation Ps.

4. Return the individual with the highest fitness.
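The four steps above can be combined into one small GA sketch. The one-max objective, population size, and rates are illustrative assumptions; selection is roulette-style via `random.choices`, with single-point crossover and bit-flip mutation as on the earlier slides.

```python
import random

def genetic_algorithm(pop_size, length, fitness, threshold, max_gens=200,
                      crossover_rate=0.9, mutation_rate=0.01):
    """GA sketch: random initial population, roulette selection,
    single-point crossover, bit-flip mutation."""
    pop = ["".join(random.choice("01") for _ in range(length))
           for _ in range(pop_size)]
    for _ in range(max_gens):
        best = max(pop, key=fitness)
        if fitness(best) >= threshold:
            return best
        weights = [fitness(ind) for ind in pop]
        if sum(weights) == 0:
            weights = [1] * pop_size  # avoid a degenerate roulette wheel
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = random.choices(pop, weights=weights, k=2)
            if random.random() < crossover_rate:
                cut = random.randrange(1, length)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                child = "".join(b if random.random() >= mutation_rate
                                else {"0": "1", "1": "0"}[b] for b in child)
                new_pop.append(child)
        pop = new_pop[:pop_size]
    return max(pop, key=fitness)

# Toy "one-max" task (an assumption for illustration): evolve a string
# of all 1s, where fitness is simply the number of 1 bits.
ones = lambda s: s.count("1")
print(genetic_algorithm(pop_size=20, length=10, fitness=ones, threshold=10))
```

The threshold test implements step 3's stopping condition; if it is never reached, the best individual of the final generation is returned.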

(54)

Genetic algorithms

(55)

■ Fitness function: number of non-attacking pairs of queens (min = 0, max = 8×7/2 = 28)

■ Probability of being selected for reproduction:

❑ 24/(24+23+20+11) = 31%

❑ 23/(24+23+20+11) = 29%, etc.

[Figure residue: Initial Population → Fitness Fn → Selection → Crossover → Mutation, with example states 32752411, 24748552, 32748552]

(56)

Production of the next generation:

Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28; the higher the better).

24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.

The probability that a given pair is selected is proportional to fitness; the crossover point is randomly generated; mutation is random. Genetic algorithms operate on the state representation.

(57)

• GA is superb if:

– Your space is loaded with lots of weird bumps and local minima: GA tends to spread out and test a larger subset of your space than many other types of learning/optimization algorithms.

– You don't quite understand the underlying process of your problem space.

– You have lots of processors: GAs parallelize very easily!

(58)

Evolvable Circuits

(59)

Antenna for NASA

(60)

Car Design

(61)

Evolutionary Arts


(62)

Evolving Mona Lisa

(63)

Local search (iterative improvement) algorithms keep only a single state in memory.

They can get stuck in local extrema (maxima/minima); simulated annealing provides a way to escape local extrema, and is complete and optimal given a slow enough cooling schedule.

Simulated annealing and local search are heuristics that usually produce sub-optimal solutions, since they may terminate at a locally optimal solution.

(64)

Local beam search keeps track of k states rather than one state. It quickly abandons useless paths, but suffers from a lack of diversity.

Stochastic beam search chooses k successors at random, with high-value successors having higher probability.

A genetic algorithm uses crossover between parents that have a high fitness function value.

(65)

1. Explain the local search strategy.

2. Define the following terminologies:
a) Global optima
b) Local optima
c) Ridge
d) Plateaux
e) Crossover
f) Mutation
g) Selection
h) Population
i) Gene
j) Fitness function

(66)

3. Explain the idea behind these local search strategies:
a) Hill climbing
b) Simulated annealing
c) Genetic algorithm

4. Differentiate between the following:
a) Stochastic hill climbing vs. random-restart hill climbing
b) K-beam algorithm vs. genetic algorithm
c) Crossover vs. mutation

(67)

TUTORIAL 05

In the 8-queens problem, the fitness of a board arrangement can be derived from the number of clashes that take place between the queens: the fitness of any individual (chessboard arrangement) decreases with the number of clashes amongst attacking positions of queens.

Let clashes = number of clashing pairs of queens; then

fitness = 28 − clashes

Find the fitness of the board arrangement given in Figure 3.

[Chessboard figure: queen placements not recoverable from the extraction]
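The fitness computation can be checked programmatically. Since the figure's queen placements did not survive extraction, the example below scores the state 32752411 used earlier in the lecture instead; clashes are counted as attacking pairs, consistent with the maximum of 28 pairs.

```python
def clashes(board):
    """Count attacking pairs of queens; board[i] is the row of the
    queen in column i (one queen per column)."""
    count = 0
    for i in range(len(board)):
        for j in range(i + 1, len(board)):
            same_row = board[i] == board[j]
            same_diagonal = abs(board[i] - board[j]) == j - i
            if same_row or same_diagonal:
                count += 1
    return count

def fitness(board):
    return 28 - clashes(board)  # 28 = 8 * 7 / 2 possible pairs

# The state 32752411 from the lecture has 5 clashing pairs:
print(fitness([3, 2, 7, 5, 2, 4, 1, 1]))  # → 23
```

The same two functions can score the Figure 3 arrangement once its queen positions are read off the board.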

(68)

Figure 1: 8 Queens board
