Backtracking

● There are two versions of backtracking algorithms:
– The solution needs only to be feasible (satisfy the problem's constraints)
• e.g., sum of subsets
– The solution also needs to be optimal
The backtracking method

● A given problem has a set of constraints and possibly an objective function.
● The solution optimizes an objective function and/or is feasible.
● We can represent the solution space for the problem using a state space tree:
– The root of the tree represents 0 choices,
– Nodes at depth 1 represent the first choice,
– Nodes at depth 2 represent the second choice, etc.
– In this tree, a path from the root to a leaf represents a candidate solution.
Coloring a map

Problem: Let G be a graph and m a given positive integer. We want to discover whether the nodes of G can be colored so that no two adjacent nodes have the same color while only m colors are used. This technique is broadly used in map coloring; the four-color map is the main objective, with the four colors chosen as red, green, blue, and yellow.

Figure: (a) The principal states and territories of Australia. Coloring this map can be viewed as a constraint satisfaction problem (CSP). The goal is to assign colors to each region so that no neighboring regions have the same color. (b) The map-coloring problem represented as a constraint graph.

We are given the task of coloring each region either red, green, or blue in such a way that no neighboring regions have the same color. To formulate this as a CSP, the following assumptions are made:
Constraints: C = {SA ≠ WA, SA ≠ NT, SA ≠ Q, SA ≠ NSW, SA ≠ V, WA ≠ NT, NT ≠ Q, Q ≠ NSW, NSW ≠ V}
Domain of each variable: Di = {red, green, blue}
Observation: Once we have chosen {SA = blue}, none of the five neighboring variables can take on the value blue. So we have only 2⁵ = 32 assignments to look at instead of 3⁵ = 243 assignments for the five neighboring variables.
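The backtracking search for this CSP can be sketched as the following minimal Python program. The region names and adjacency list follow the constraint graph above; the function and variable names are ours, not from any particular CSP library.

```python
# Sketch of backtracking map coloring for the Australia CSP above.
# A region may take a color only if no already-colored neighbor uses it.

NEIGHBORS = {
    "WA":  ["NT", "SA"],
    "NT":  ["WA", "SA", "Q"],
    "SA":  ["WA", "NT", "Q", "NSW", "V"],
    "Q":   ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"],
    "V":   ["SA", "NSW"],
    "T":   [],                           # Tasmania has no neighbors
}
COLORS = ["red", "green", "blue"]

def color_map(assignment, regions):
    """Assign a color to each region so no two neighbors match, or return None."""
    if not regions:                      # every region colored: solution found
        return assignment
    region, rest = regions[0], regions[1:]
    for color in COLORS:
        # promising: no already-colored neighbor uses this color
        if all(assignment.get(n) != color for n in NEIGHBORS[region]):
            assignment[region] = color
            result = color_map(assignment, rest)
            if result is not None:
                return result
            del assignment[region]       # backtrack
    return None                          # dead end: nonpromising

solution = color_map({}, list(NEIGHBORS))
```

With three colors a valid assignment exists for this map, so the search succeeds without ever needing all 3⁵ combinations.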
Subset-sum Problem

• The problem is to find a subset of a given set S = {s1, s2, ..., sn} of n positive integers whose sum is equal to a given positive integer d.
• Example: For S = {3, 5, 6, 7} and d = 15, the solution is {3, 5, 7}.
Subset-sum Problem

• Observation: It is convenient to sort the set's elements in nondecreasing order.

Figure: Complete state-space tree of the backtracking algorithm applied to the instance S = {3, 5, 6, 7} and d = 15 of the subset-sum problem. The number inside a node is the sum of the elements already included in the subset represented by the node. The inequality below a leaf indicates the reason for its termination.
Sum of subsets

● We will assume a binary state space tree.
● The nodes at depth 1 are for including (yes, no) item 1, the nodes at depth 2 are for item 2, etc.
● The left branch includes wi, and the right branch excludes wi.
● The nodes contain the sum of the weights included so far.
A Depth First Search solution

● Problems can be solved using depth first search of the (implicit) state space tree.
● Each node will save its depth and its (possibly partial) current solution.
● DFS can check whether node v is a leaf.
– If it is a leaf, then check if the current solution satisfies the constraints.
A DFS solution

● Such a DFS algorithm will be very slow.
● It does not check, for every state (node), whether a solution has been reached, or whether a partial solution can lead to a feasible solution.
Backtracking

• Definition: We call a node nonpromising if it cannot lead to a feasible (or optimal) solution; otherwise it is promising.
• Main idea: Backtracking consists of doing a DFS of the state space tree, checking whether each node is promising, and, if the node is nonpromising, backtracking to the node's parent (pruning the subtree rooted at it).
Backtracking

● The state space tree consisting of expanded nodes only is called the pruned state space tree.
● The following slide shows the pruned state space tree for the sum of subsets example.
● There are only 15 nodes in the pruned state space tree.
Figure: Complete state-space tree of the backtracking algorithm applied to the instance S = {3, 5, 6, 7} and d = 15 of the subset-sum problem, with pruned branches marked. The number inside a node is the sum of the elements already included in the subset represented by the node. The inequality below a leaf indicates the reason for its termination.
Backtracking algorithm

void checknode(node v) {
    node u
    if (promising(v))
        if (aSolutionAt(v))
            write the solution
        else  // expand the node
            for (each child u of v)
                checknode(u)
}
Checknode

● Checknode uses the functions:
– promising(v), which checks whether the partial solution represented by v can lead to the required solution
– aSolutionAt(v), which checks whether the partial solution represented by node v solves the problem
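The checknode pseudocode can be rendered as a small generic Python skeleton. The callback names (promising, a_solution_at, children) mirror the slides; passing them in as parameters is our choice for making the sketch self-contained, not part of the original algorithm statement.

```python
# Generic rendering of the checknode backtracking template.
# The caller supplies the problem-specific tests and the node expansion.

def checknode(v, promising, a_solution_at, children, solutions):
    if promising(v):
        if a_solution_at(v):
            solutions.append(v)          # "write the solution"
        else:                            # expand the node
            for u in children(v):
                checknode(u, promising, a_solution_at, children, solutions)

# Tiny usage example: enumerate all bit strings of length 3
# (every node is promising, so nothing is pruned).
sols = []
checknode((), lambda v: True, lambda v: len(v) == 3,
          lambda v: [v + (0,), v + (1,)], sols)
```

With a real promising test, entire subtrees are skipped, which is exactly the pruning described above.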
Sum of subsets – when is a node "promising"?

● Consider a node at depth i.
● weightSoFar = weight of the node, i.e., the sum of the numbers included in the partial solution the node represents.
● totalPossibleLeft = weight of the remaining items i+1 to n (for a node at depth i).
● A node at depth i is nonpromising if
(weightSoFar + totalPossibleLeft < S) or (weightSoFar + w[i+1] > S),
where S is the target sum (the d of the earlier example).
● To be able to use this "promising function", the wi must be sorted in nondecreasing order.
sumOfSubsets ( i, weightSoFar, totalPossibleLeft )
1) if ( promising ( i ) )                          // may lead to solution
2)   then if ( weightSoFar == S )
3)     then print include[ 1 ] to include[ i ]     // found solution
4)     else                                        // expand the node when weightSoFar < S
5)       include[ i + 1 ] = "yes"                  // try including
6)       sumOfSubsets ( i + 1, weightSoFar + w[i + 1], totalPossibleLeft - w[i + 1] )
7)       include[ i + 1 ] = "no"                   // try excluding
8)       sumOfSubsets ( i + 1, weightSoFar, totalPossibleLeft - w[i + 1] )

boolean promising ( i )
1) return ( weightSoFar + totalPossibleLeft ≥ S ) &&
          ( weightSoFar == S || weightSoFar + w[i + 1] ≤ S )

Prints all solutions!

Initial call: sumOfSubsets(0, 0, ∑i=1..n wi)
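The pseudocode above can be turned into a short runnable Python version. The structure follows the slides; the 0-based indexing and the returned list of solutions (instead of printing) are our adaptations.

```python
# Runnable sketch of the sumOfSubsets backtracking algorithm.
# w must be sorted in nondecreasing order for the promising test to be valid.

def sum_of_subsets(w, S):
    """Return all subsets of w whose elements sum to S."""
    n = len(w)
    include = [False] * n
    solutions = []

    def promising(i, weight, left):
        # i items decided so far; w[i] is the next (smallest remaining) item.
        # Promising iff S is still reachable and either we hit S exactly
        # or including the next item does not overshoot it.
        return (weight + left >= S and
                (weight == S or (i < n and weight + w[i] <= S)))

    def backtrack(i, weight, left):
        if promising(i, weight, left):
            if weight == S:                        # found a solution
                solutions.append([w[j] for j in range(i) if include[j]])
            else:
                include[i] = True                  # try including w[i]
                backtrack(i + 1, weight + w[i], left - w[i])
                include[i] = False                 # try excluding w[i]
                backtrack(i + 1, weight, left - w[i])

    backtrack(0, 0, sum(w))                        # initial call
    return solutions
```

On the running example, sum_of_subsets([3, 5, 6, 7], 15) finds the single solution {3, 5, 7}.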
Backtracking for optimization problems

● To deal with optimization we compute:
– best – the value of the best solution achieved so far
– value(v) – the value of the solution at node v
– and we modify promising(v)
• best is initialized to a value that is equal to a candidate solution, or worse than any possible solution.
• best is updated to value(v) if the solution at v is "better".
● By "better" we mean larger for a maximization problem and smaller for a minimization problem.
Modifying promising

● A node is promising when
– it is feasible and can lead to a feasible solution, and
– there is a chance that a better solution than best can be achieved by expanding it.
● Otherwise it is nonpromising.
● A bound on the best solution that can be achieved by expanding the node is computed and compared to best.
Hamiltonian Circuit Problem

Problem: This problem is concerned with finding a Hamiltonian circuit in a given graph.

A Hamiltonian circuit is defined as a cycle that passes through all the vertices of the graph exactly once, except that the starting and ending vertices are the same vertex. A graph possessing a Hamiltonian circuit is said to be a Hamiltonian graph. The Hamiltonian circuit is named after Sir William Rowan Hamilton.

Figure: (a) Graph. (b) State-space tree for finding a Hamiltonian circuit. The numbers above the nodes of the tree indicate the order in which the nodes are generated.
For example, consider the given graph and trace the backtracking search for a Hamiltonian circuit.
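A backtracking search for a Hamiltonian circuit might be sketched as follows. The adjacency list here is a small made-up example graph, not the one in the figure, and the function names are ours.

```python
# Sketch of backtracking for the Hamiltonian circuit problem.
# A partial path is promising while its last vertex has an unvisited neighbor;
# a full path is a circuit only if its last vertex neighbors the start.

def hamiltonian_circuit(adj, start):
    """Return a cycle visiting every vertex exactly once, or None."""
    n = len(adj)
    path = [start]
    visited = {start}

    def extend():
        if len(path) == n:
            # circuit closes only if the last vertex is adjacent to the start
            return start in adj[path[-1]]
        for v in adj[path[-1]]:
            if v not in visited:         # promising: vertex not used yet
                path.append(v)
                visited.add(v)
                if extend():
                    return True
                path.pop()               # backtrack
                visited.remove(v)
        return False

    return path + [start] if extend() else None

# Example graph: edges 1-2, 1-3, 1-4, 2-3, 3-4
graph = {1: [2, 3, 4], 2: [1, 3], 3: [1, 2, 4], 4: [1, 3]}
cycle = hamiltonian_circuit(graph, 1)
```

For this example graph the search finds the circuit 1–2–3–4–1.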
Knapsack Problem

A knapsack problem consists of a profit vector P = (p1, p2, p3, ..., pn), a weight vector W = (w1, w2, w3, ..., wn), and a capacity C of the knapsack.

Problem: How do we pack the knapsack to achieve the maximum total value of the packed items? The problem, in other words, is to find a subset T of the items such that ∑i∈T wi ≤ C and ∑i∈T pi is maximized.
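The optimization version of backtracking described earlier (keeping best and pruning nodes whose bound cannot beat it) can be sketched for the knapsack as follows. The crude bound used here, the total profit of all remaining items, is our simplification; sharper bounds (e.g., the fractional-knapsack bound) prune more.

```python
# Sketch of backtracking for the 0/1 knapsack problem.
# best records the best profit found so far; a node is nonpromising if even
# taking every remaining item could not beat best (a deliberately crude bound).

def knapsack(p, w, C):
    """Return (best profit, chosen item indices) for capacity C."""
    n = len(p)
    best = {"profit": 0, "items": []}

    def backtrack(i, profit, weight, chosen):
        if weight <= C and profit > best["profit"]:
            best["profit"], best["items"] = profit, chosen[:]  # update best
        if i == n or weight > C:
            return                       # leaf or infeasible node
        if profit + sum(p[i:]) <= best["profit"]:
            return                       # nonpromising: bound cannot beat best
        chosen.append(i)
        backtrack(i + 1, profit + p[i], weight + w[i], chosen)  # take item i
        chosen.pop()
        backtrack(i + 1, profit, weight, chosen)                # skip item i

    backtrack(0, 0, 0, [])
    return best["profit"], best["items"]

# Example: P = (40, 30, 50), W = (2, 5, 10), C = 16
result = knapsack([40, 30, 50], [2, 5, 10], 16)
```

For the example vectors the search settles on items 0 and 2 (profit 90), pruning the subtree that excludes item 0 because its bound (80) cannot beat the best found.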