Sensitivity Analysis

Note from (1.7) that the input data for a general MILP can be defined by the quadruple (A, b, c, r). Assuming that we have modified the input data, we first partially answer the question of how to carry out sensitivity analysis after some or all of these modifications through a given strong dual function. Then we discuss sufficient conditions for optimality through dual functions appropriate for each type of modification and obtained from a primal solution algorithm.

Basic Observations. Assume that both the original and the modified problems are feasible, and let (x∗, F∗) be an optimal primal-subadditive dual solution pair for (A, b, c, r). The following basic observations (Geoffrion and Nauss [1977], Wolsey [1981]) state the conditions under which primal feasibility, dual feasibility, and optimality still hold for x∗ and F∗ after changes to the input. They are derived for PILP problems but can easily be adapted to the MILP case.

1. (A, b, c) → (A, ˜b, c)

1a. F∗ remains dual feasible due to the nature of dual functions.

1b. Let Y∗ = {y | F∗(Ay) = cy}. If F∗ remains optimal, then any new optimal solution ˜x lies in Y∗, since z(˜b) = c˜x and c˜x = F∗(˜b) = F∗(A˜x).

2. (A, b, c) → (A, b, ˜c)

2a. x∗ remains primal feasible.

2b. If F∗(aj) ≤ ˜cj for all j, then F∗ remains dual feasible.

2c. If F∗(aj) ≤ ˜cj when x∗j = 0 and ˜cj = cj otherwise, then x∗ remains optimal, since F∗ remains dual feasible by part (2b) and

$$\tilde{c}x^* = \sum_{j:\, x^*_j > 0} \tilde{c}_j x^*_j + \sum_{j:\, x^*_j = 0} \tilde{c}_j x^*_j = \sum_{j:\, x^*_j > 0} c_j x^*_j + \sum_{j:\, x^*_j = 0} c_j x^*_j = cx^* = F^*(b). \qquad (4.5)$$

2d. If cj ≤ ˜cj when x∗j = 0 and ˜cj = cj otherwise, then x∗ remains optimal. This follows from part (2c), since F∗(aj) ≤ cj for all j by dual feasibility, and it gives sensitivity information independent of the optimal dual function.

2e. Let the vector τ denote the upper bounds on the variables. If ˜cj ≤ cj when x∗j = τj and ˜cj = cj otherwise, then x∗ remains optimal. Consider the problem

$$\min\{\tilde{c}x + \tau(c - \tilde{c}) \mid x \in S(b)\} = \min\{\tilde{c}x + cx - cx + \tau(c - \tilde{c}) \mid x \in S(b)\} = \min\{cx + (c - \tilde{c})(\tau - x) \mid x \in S(b)\}.$$

Note that the optimal solution to this problem is x∗, since (c − ˜c)(τ − x) ≥ 0 for all x ∈ S(b), (c − ˜c)(τ − x∗) = 0, and cx∗ ≤ cx for all x ∈ S(b). Because the objective of this problem differs from ˜cx only by the constant τ(c − ˜c), x∗ is also optimal for the modified problem.
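To make the cost-change tests in (2d) and (2e) concrete, the following sketch checks them componentwise. The function names and the list-based encoding of c, ˜c, x∗, and τ are illustrative choices, not notation from the text.

```python
def check_2d(c, c_new, x_star):
    """Observation (2d): x* stays optimal if the cost of every variable
    with x*_j = 0 does not decrease and all other costs are unchanged."""
    return all((xj == 0 and cj_new >= cj) or (xj != 0 and cj_new == cj)
               for cj, cj_new, xj in zip(c, c_new, x_star))

def check_2e(c, c_new, x_star, tau):
    """Observation (2e): x* stays optimal if the cost of every variable
    at its upper bound tau_j does not increase and all other costs are
    unchanged."""
    return all((xj == tj and cj_new <= cj) or (xj != tj and cj_new == cj)
               for cj, cj_new, xj, tj in zip(c, c_new, x_star, tau))
```

For example, check_2d([3, 5], [3, 7], [2, 0]) certifies optimality after raising the cost of the second (zero-valued) variable, with no dual function needed.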

3. (A, b, c) → ( ˜A, ˜b, ˜c)

3a. If the original problem is a relaxation of the new problem and x∗ is feasible for the new problem, then x∗ remains optimal.

3b. When a new activity (˜cn+1, ˜an+1) is introduced, (x∗, 0) remains feasible.

3c. Furthermore, if F∗(˜a_{n+1}) ≤ ˜c_{n+1}, then (x∗, 0) remains optimal, since F∗ remains dual feasible and cx∗ = F∗(b).

3d. When a new constraint ˜a_{m+1} with right-hand side ˜b_{m+1} is introduced, if x∗ is still feasible, then it remains optimal by part (3a).

3e. Furthermore, let the function ˜F : Rm+1 → R be defined by ˜F(d, d_{m+1}) = F∗(d) for all (d, d_{m+1}) ∈ Rm+1. Then ˜F is dual feasible for the new problem, since ˜F is subadditive and ˜F(aj, ˜a_j^{m+1}) = F∗(aj) ≤ cj, j ∈ I.

Also note that if one drops the subadditivity requirement of F∗, then (2b), (2c) and (3c) are no longer valid.

Next, we study each case separately, either to derive sufficient conditions for checking optimality or to extract bounds on the optimal solution value of a modified problem using the information collected from a primal solution algorithm. As outlined in the previous section, we derive these methods through dual functions. Note that we have already discussed in Sections 2.2.1 and 2.2.5 the dual functions of this type obtained from the last iteration (for the cutting-plane algorithm) or from the leaf nodes of the resulting tree (for the branch-and-cut algorithm) of the corresponding primal solution algorithm. However, for the purposes of sensitivity analysis, we can invest additional computational effort and storage to keep the dual information from all stages of the primal solution algorithm and thereby extract a dual function that better approximates the value function.

Modification to the right-hand side. In this section, we analyze the effects of modifying the right-hand side of the primal instance (1.7). For simplicity, we assume that (1.7) is a pure integer program. We give the details of extending the following results to the MILP case unless the extension is straightforward.

Klein and Holm [1979] give sufficient conditions for optimality when the cutting-plane algorithm with Gomory fractional cuts is used. In particular, when the algorithm terminates, the optimal basis of the last LP is used to check the optimality of the solution for the new right-hand side, since it remains dual feasible for the modified problem once the cuts are modified appropriately (see Section 2.2.1). To describe this notion of sensitivity analysis in our framework for PILPs, let F^i_CP be the dual function (2.26) extracted at iteration i, let σ^i : Rm → Rm+i−1 be the recursive representation (as defined for (2.26)) of the dependency of the right-hand side vector of the added cutting planes on the right-hand side of the original constraints, and let B^i be the corresponding set of indices of variables in the optimal basis at iteration i. Then we define the lower and upper bounding approximations as follows:

$$F_{CP}(d) = \max_i \{F^i_{CP}(d)\}, \qquad G_{CP}(d) = \begin{cases} \min_i \{c_{B^i}(B^i)^{-1}\sigma^i(d)\} & \text{if } (B^i)^{-1}\sigma^i(d) \ge 0 \text{ and integer for some } i, \\ +\infty & \text{otherwise,} \end{cases}$$

where the minimum in G_CP is taken over those iterations i for which (B^i)^{-1}σ^i(d) is nonnegative and integral.

The characteristics of these functions are easy to understand from the properties of dual functions and LP duality. F_CP is a dual function and yields a better approximation of the value function than that obtained by considering each F^i_CP separately. On the other hand, G_CP ensures the primal feasibility of the basis of some iteration i and hence is an upper bounding approximation of the value function z. Consequently, we can state sufficient conditions for optimality for the PILP instance with a modified right-hand side with the following proposition.

Proposition 4.1 For a given ˜b ∈ Rm, if FCP(˜b) = GCP(˜b), then z(˜b) = FCP(˜b).

If the test of Proposition 4.1 fails, we still get bounds on the optimal solution value of the modified instance, that is, GCP(˜b) ≥ z(˜b) ≥ FCP(˜b). These bounds, in turn, can be used as a measure of closeness to optimality.
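A possible in-memory realization of the test in Proposition 4.1, assuming each cutting-plane iteration i has been stored as a dictionary holding a callable dual function F, the basic cost vector cB, the basis inverse Binv, and the right-hand-side map sigma. This record layout is an assumption for illustration, not part of the text.

```python
import math

def F_CP(d, iterations):
    # Lower bound: the best of the stored per-iteration dual functions.
    return max(it["F"](d) for it in iterations)

def G_CP(d, iterations, tol=1e-9):
    # Upper bound: cost of any stored basis whose basic solution
    # (B^i)^-1 sigma^i(d) is nonnegative and integral for the new rhs.
    best = math.inf
    for it in iterations:
        rhs = it["sigma"](d)
        xB = [sum(r * s for r, s in zip(row, rhs)) for row in it["Binv"]]
        if all(v >= -tol and abs(v - round(v)) <= tol for v in xB):
            best = min(best, sum(cb * v for cb, v in zip(it["cB"], xB)))
    return best

def optimality_test(d, iterations):
    # Proposition 4.1: if the bounds coincide, z(d) equals the lower bound.
    lo, hi = F_CP(d, iterations), G_CP(d, iterations)
    return lo, hi, math.isclose(lo, hi)
```

When the two bounds differ, their gap quantifies how far the certificate is from proving optimality for the new right-hand side.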

A similar result can be derived for the branch-and-bound (or branch-and-cut) framework in terms of dual feasible bases of the LP subproblems at the tree nodes. Note that we have so far used only the information from the leaf nodes of the branch-and-bound tree to obtain a dual function (see Sections 2.2.5 and 4.1). However, this scheme can be improved by considering the vast amount of dual information revealed during the processing of the intermediate nodes.

i. A dual function can be extracted from the leaf nodes of certain subtrees of the given branch-and-bound tree T. Let 𝒯 be the set of all subtrees of T such that K ∈ 𝒯 if and only if K is connected, K is rooted at the root node of T, and both the left and right children of any intermediate node t ∈ K are also in K. Furthermore, for K ∈ 𝒯, let K_L be the set of leaf nodes of subtree K. Then the dual function (4.2) can be strengthened as follows:

$$F(d) = \max_{K \in \mathcal{T}} \; \min_{t \in K_L} \{v^t d + \underline{v}^t l^t - \bar{v}^t u^t\} \quad \forall d \in \mathbb{R}^m. \qquad (4.6)$$

ii. Assuming the variables have both lower and upper bounds initially, the dual solution of each node is also dual feasible for the LP subproblems of the other nodes of the tree. Then the dual function (4.6) can be strengthened as follows:

$$F(d) = \max_{K \in \mathcal{T}} \; \min_{t \in K_L} \; \max_{s \in T} \{v^s d + \underline{v}^s l^t - \bar{v}^s u^t\} \quad \forall d \in \mathbb{R}^m. \qquad (4.7)$$

Schrage and Wolsey [1985] give a recursive algorithm to evaluate (4.6) that exploits the relation between an intermediate node and its children. For K ∈ 𝒯, let ¯t ∈ K_L be such that L(¯t), R(¯t) ∈ T, where L(¯t) and R(¯t) are the left and right children of ¯t, and consider the dual functions F_K and F_{¯K} obtained from the subtrees K and ¯K = K ∪ {L(¯t), R(¯t)} by the procedure (4.2). Also, let us define

$$\vartheta_t(d) = v^t d + \underline{v}^t l^t - \bar{v}^t u^t \quad \forall d \in \mathbb{R}^m$$

for all t ∈ T. Then for d ∈ Rm, we clearly have

$$\max\{F_K(d), F_{\bar K}(d)\} = \begin{cases} F_K(d) & \text{if } \vartheta_{\bar t}(d) \ge \min\{\vartheta_{L(\bar t)}(d), \vartheta_{R(\bar t)}(d)\}, \\ F_{\bar K}(d) & \text{otherwise.} \end{cases} \qquad (4.8)$$

Theorem 4.2 (Schrage and Wolsey [1985]) For each node t ∈ T, let the function κ_t be defined by

i. κ_t(d) = ϑ_t(d) if t is a leaf node;

ii. κ_t(d) = max{ϑ_t(d), min{κ_{L(t)}(d), κ_{R(t)}(d)}} if t is not a leaf node, where L(t) and R(t) are the indices of t's children.

Also, let the index of the root node be 0. Then F(d) = κ_0(d) for all d ∈ Rm, where F is defined by (4.6).

Proof. The proof follows from the relation (4.8), the inductive definition of κ, and the structure of the branch-and-bound tree.
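The recursion in Theorem 4.2 is straightforward to implement over a stored tree. In the sketch below a node is a dictionary holding its dual solution (v, vlow, vup), its branching bounds (l, u), and its children; these field names are invented for illustration.

```python
def theta(node, d):
    # Node-level dual bound: v^t d + vlow^t l^t - vup^t u^t.
    return (sum(vi * di for vi, di in zip(node["v"], d))
            + sum(a * b for a, b in zip(node["vlow"], node["l"]))
            - sum(a * b for a, b in zip(node["vup"], node["u"])))

def kappa(node, d):
    # Theorem 4.2: theta at a leaf; otherwise the max of the node's own
    # bound and the min over its two children's recursive values.
    if node["left"] is None:
        return theta(node, d)
    return max(theta(node, d),
               min(kappa(node["left"], d), kappa(node["right"], d)))
```

Evaluating kappa at the root then yields F(d) of (4.6) in a single pass over the tree.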

On the other hand, the dual function (4.7) can be obtained from Theorem 4.2 after replacing the definition of ϑ_t so that it considers the dual solutions of the other nodes, i.e., by letting

$$\vartheta_t(d) = \max_{s \in T} \{v^s d + \underline{v}^s l^t - \bar{v}^s u^t\} \quad \forall d \in \mathbb{R}^m. \qquad (4.9)$$

In this case, evaluating κ_0, and hence the dual function (4.7), might be expensive due to the size of T. One possible approach to achieving a computationally feasible scheme is to consider in (4.9) only a subset of the nodes of T. However, it is not clear how to determine such a subset so that the resulting dual function still approximates the value function as closely as when using all of T.
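One way to realize the restricted variant of (4.9) is to evaluate every dual solution in a chosen pool against the bounds of node t and keep the best; how the pool is selected is left open here, mirroring the open question above. Field names follow the same illustrative convention as before.

```python
def theta_pooled(node, d, dual_pool):
    # Restricted (4.9): best bound over a pool of stored dual solutions s,
    # each evaluated against node t's own branching bounds l^t, u^t.
    return max(sum(vi * di for vi, di in zip(s["v"], d))
               + sum(a * b for a, b in zip(s["vlow"], node["l"]))
               - sum(a * b for a, b in zip(s["vup"], node["u"]))
               for s in dual_pool)
```

With the full node set as the pool this reproduces (4.9); smaller pools trade bound quality for evaluation cost.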

We can extend the above analysis to branch-and-cut by the same procedure we applied to obtain a dual function from a branch-and-cut tree (see Section 2.2.5). We simply need to modify the definition of ϑ_t for each node t by adding the dual information coming from the generated cuts. To see this, assume that the problem is solved with the branch-and-cut algorithm and that the subadditive representation or the original right-hand-side dependency of each cut is known. Then the analog of the dual function (4.6) can be obtained from Theorem 4.2 by setting

$$\vartheta_t(d) = v^t d + \underline{v}^t l^t - \bar{v}^t u^t + \sum_{k=1}^{\nu(t)} w^t_k F^t_k(\sigma^k(d)) \quad \forall d \in \mathbb{R}^m, \qquad (4.10)$$

where ν(t) is the total number of cuts added at node t, F^t_k is the right-hand-side dependency of cut k on the original right-hand side, and σ^k is as defined in Section 2.2.1. If we do not know the original right-hand-side dependencies of the cuts, we can still use the variable bounds and set, for each node t,

$$\vartheta_t(d) = u^t d + \underline{u}^t l^t - \bar{u}^t g^t + \max\{w^t \tilde{h}^t, w^t \hat{h}^t\} \quad \forall d \in \mathbb{R}^m, \qquad (4.11)$$

where ˜h^t and ˆh^t are defined as in Theorem 2.21.
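Under the same illustrative node encoding, the cut-aware bound (4.10) just adds one term per cut. Here the cut multipliers w, the cut functions Fcut, and the right-hand-side maps sigma are stored per node as parallel lists, an assumed layout rather than the thesis's data structures.

```python
def theta_with_cuts(node, d):
    # (4.10): the usual node bound plus the dual contribution
    # w^t_k * F^t_k(sigma^k(d)) of each of the node's nu(t) cuts.
    base = (sum(vi * di for vi, di in zip(node["v"], d))
            + sum(a * b for a, b in zip(node["vlow"], node["l"]))
            - sum(a * b for a, b in zip(node["vup"], node["u"])))
    cuts = sum(w * F(sigma(d))
               for w, F, sigma in zip(node["w"], node["Fcut"], node["sigma"]))
    return base + cuts
```

Plugging this ϑ_t into the κ recursion of Theorem 4.2 gives the branch-and-cut analog of the dual function.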

Once we extract an upper bounding function (4.3) and a dual function (4.6) from a branch-and-bound (or branch-and-cut) primal solution algorithm, we can easily state sufficient conditions for optimality for a modified instance with a new right-hand side, just as in Proposition 4.1.

Modification to the objective function. So far in this thesis, we have discussed only the value function obtained by considering the optimal value of the primal problem as a function of the right-hand side, and we have constructed our notion of duality based on this function. However, it is also possible to define a similar function of the objective coefficients and derive analogous results for this case. The objective value function can be written as

$$Z(q) = \min_{x \in S} qx \quad \forall q \in \mathbb{R}^n, \qquad (4.12)$$

and for sensitivity analysis purposes, we are interested in deriving lower and upper bounding functions F and G from the primal solution algorithms satisfying

$$F(q) \le Z(q) \le G(q) \quad \forall q \in \mathbb{R}^n. \qquad (4.13)$$

Note that for any objective function q, the feasible solutions found during the branch-and-bound (or branch-and-cut) algorithm remain primal feasible, so an upper bounding function analogous to (4.3) can be derived easily. However, the optimal bases of the leaf nodes of the branch-and-bound tree T might not remain dual feasible for their LP subproblems, and therefore a valid lower bounding function cannot be obtained directly. For a given q ∈ Rn, however, one possible way to obtain a valid lower bound is to re-optimize the LP subproblem of each leaf node. In particular, for t ∈ T_L, let x^t be an optimal solution of the LP subproblem (4.1) with objective coefficients q. Then

$$F(q) = \min_{t \in T_L} \{q x^t\} \qquad (4.14)$$

would give a lower bound on the value Z(q). With these bounds, one can check the optimality of the modified problem by testing sufficient optimality conditions, just as stated for the cutting-plane method in the right-hand-side analysis through Proposition 4.1.
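A sketch of the resulting bound test for a new objective q. For illustration only, each leaf LP is represented by an enumerated list of its extreme points (so re-optimization is a vertex scan), and the incumbent pool collects the integer feasible solutions found during the search; both encodings are assumptions, not the thesis's data structures.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lower_bound_F(q, leaf_vertices):
    # F(q) in (4.14): re-optimize each leaf LP for the new objective q
    # (here: best vertex of each leaf) and take the worst leaf value.
    return min(min(dot(q, v) for v in verts) for verts in leaf_vertices)

def upper_bound_G(q, incumbent_pool):
    # Feasible solutions found during the search stay feasible for any q,
    # so the best incumbent re-costed under q is an upper bound on Z(q).
    return min(dot(q, x) for x in incumbent_pool)
```

If the two bounds coincide for a given q, then Z(q) is known exactly, mirroring Proposition 4.1.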

Other modifications. For any other modification, the same procedure applies. Note that in many cases, the current bases of the LP subproblems can be extended to dual feasible bases with little or no effort. For instance, if a new constraint is added, a dual feasible basis for each node is readily available. Then, in a similar fashion, Proposition 4.1 can be extended to this specific case. However, if multiple simultaneous changes are allowed, it may be hard to obtain the dual feasible bases or to re-optimize the LP subproblems.
