Tables 4.2–4.4 show the results of the above application of Algorithm 4.1 on class 4, divided by groups, in more detail. Table 4.2 shows the instances in which we can close significantly more gap with strategy w than with w/o, i.e., group 1.
That means that the separation of wide intersection cuts on top of ordinary intersection cuts is highly advantageous. Interestingly, on these instances the separation of only wide intersection cuts (strategy o) already leads to a gap closure that is significantly greater than the one achieved with strategy w/o, and almost reaches that of strategy w. Another side effect of the separation of wide intersection cuts becomes evident in Table 4.2, namely the reduction of the total number of cuts that are separated. Apart from closing more gap, strategy w also decreases this number significantly with respect to strategy w/o. This is a desirable effect, since the number of constraints in an LP, to which separated cutting planes have to be added in order to benefit from the additional dual gap closure, influences the computational effort of solving the latter. Remarkably, the significant gap closure achieved by strategy o requires a very limited number of separated cuts. The side effect concerning the number of cuts also persists throughout Table 4.3, which shows those instances that do not benefit significantly from the separation of wide intersection cuts in terms of gap closure, i.e., group 2. Again, strategy o is highly competitive and requires a very limited total number of cuts to achieve almost the same gap closure as strategies w/o and w. In Table 4.4 we see those instances for which there is no significant difference between strategies w/o and w, but wide intersection cuts alone are still able to close a positive amount of the initial dual gap, that is, group 3.


Most numerical approaches in the literature that consider buildability constraints can therefore be seen to fall into one of the following categories: (i) those that present topology optimization problem formulations of such complexity that only trivial scenarios can be solved, and (ii) those that present solution algorithms that produce structures with no guarantee or measure of optimality. The methodology presented in this paper seeks to extend the scale of truss layout optimization problems with basic buildability constraints that are solvable, so as to provide a potentially useful conceptual design tool for practitioners. To this end, an MILP formulation is used to find a globally optimal solution for a ground structure of finite resolution. The main contribution of this paper is to substantially increase the speed by which problems can be solved, and hence also the scale of problems solvable. This is achieved through the runtime generation of some constraints (so-called lazy constraints), as part of a two-stage design process. The MILP problem forms the first stage, followed by an optional second non-linear geometry optimization refinement stage. Application of the developed procedure to a range of problems allows observations to be drawn on the nature of the structures identified as optimal under the imposed buildability constraint, with results compared with analytical solutions in the case of one of the problems considered.


Herein, we first overview the previous results on the structure of the value function. Next, we look more closely at the structure of the value function of an MILP with a single constraint and non-negativity constraints on all variables. We show for this case that the value function is uniquely determined by a finite number of break points and at most two slopes, derive conditions for the value function to be continuous, and suggest a method for systematically extending the value function from a specified neighborhood of the origin to the entire real line. Although we focus here in particular on this specific case, we point out that a number of these results hold in more general settings. Then we discuss both upper and lower approximations of the value function of an MILP through the value functions of single-constraint relaxations, dual functions and restrictions.
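As an illustration of the break-point/slope structure, the value function of a tiny single-constraint MILP can be evaluated by brute force. The sketch below is illustrative only; the instance, the function names and the enumeration bound max_x are all hypothetical and not taken from the paper:

```python
import itertools
import math

def value_function(b, c_int, a_int, c_cont, a_cont, max_x=10):
    """Brute-force z(b) = min{ c_I x_I + c_C x_C : a_I x_I + a_C x_C = b,
    x >= 0, x_I integer } for a single constraint.  Integer variables are
    enumerated up to max_x; the residual is absorbed by the cheapest
    continuous variable whose coefficient has the matching sign."""
    best = math.inf
    for x in itertools.product(range(max_x + 1), repeat=len(c_int)):
        r = b - sum(ai * xi for ai, xi in zip(a_int, x))
        cost = sum(ci * xi for ci, xi in zip(c_int, x))
        if abs(r) < 1e-12:
            best = min(best, cost)
            continue
        # continuous variable j can cover r only if x_j = r / a_j >= 0
        opts = [cj * (r / aj) for cj, aj in zip(c_cont, a_cont)
                if aj != 0 and r / aj >= 0]
        if opts:
            best = min(best, cost + min(opts))
    return best

# Toy instance: z(b) = min{ x1 + 10*y : 2*x1 + y = b, x1 integer >= 0, y >= 0 }.
# z has break points at the even integers, with only two distinct slopes.
print(value_function(4, [1], [2], [10], [1]))   # x1 = 2, y = 0  ->  2
print(value_function(3, [1], [2], [10], [1]))   # x1 = 1, y = 1  ->  11
```

Evaluating z on a grid of right-hand sides b makes the piecewise-linear shape with finitely many break points directly visible.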


have assumed a problem given in standard form, upper and lower bounds on variable values are typically present and are handled implicitly. Such bound constraints take the form l ≤ x ≤ u for l, u ∈ R^n for all x ∈ P_I. Even if no such bound constraints are initially present, they may be introduced during branching. Let c̄_j be the reduced cost of nonbasic integer variable j, obtained after solving the LP relaxation of a given subproblem, and let x̂ ∈ R^n be an optimal fractional solution. If x̂_j = l_j ∈ Z and γ ∈ R_+ is such that cᵀx̂ + γc̄_j = β, where β is the objective function value of the current incumbent, then x_j ≤ l_j + ⌊γ⌋ in any optimal solution, so we can replace the previous upper bound u_j with min(u_j, l_j + ⌊γ⌋). The same procedure can be used to potentially improve the lower bounds. This is an elementary form of preprocessing, but it can be very effective when combined with other forms of logical preprocessing, especially when the optimality gap is small. Note that if this tightening takes place in the root node, it is valid everywhere and can be considered an improvement of the original model. Some MILP solvers store the reduced costs from the root LP relaxation and use them to perform this preprocessing whenever a new incumbent is found.
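A minimal sketch of this reduced-cost tightening for a minimization problem, using hypothetical names (z_lp for the subproblem's LP bound cᵀx̂, z_inc for the incumbent value β):

```python
import math

def tighten_upper_bound(l_j, u_j, c_bar_j, z_lp, z_inc):
    """Reduced-cost bound tightening for a nonbasic integer variable at its
    lower bound: if raising x_j by gamma would push the LP bound past the
    incumbent, then x_j <= l_j + floor(gamma) in any improving solution."""
    if c_bar_j <= 0:
        return u_j                      # no tightening from this direction
    gamma = (z_inc - z_lp) / c_bar_j    # gap the variable may absorb
    return min(u_j, l_j + math.floor(gamma))

# Example: LP bound 10, incumbent 13, reduced cost 2 -> gamma = 1.5,
# so the upper bound drops from 100 to l_j + 1 = 1.
print(tighten_upper_bound(l_j=0, u_j=100, c_bar_j=2.0, z_lp=10.0, z_inc=13.0))
```

The symmetric case (a variable nonbasic at its upper bound) would tighten the lower bound in the same way.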


inequalities and returns those that are violated by a solution to the LP. These cuts are returned to the node processing module to be added to the candidate node LP.
SYMPHONY is capable of performing warm-starting and sensitivity analysis on MILPs.
The primary algorithm that SYMPHONY uses is branch-and-cut. When the user specifies, via the appropriate parameter, that a warm-start description should be kept, the solver collects and stores information from the final tree used to solve the problem, as well as other auxiliary information needed to restart the computation for a potentially modified instance. The information collected from the tree includes, for each node, the solution to the associated relaxation, the list of active variables and constraints, the branching decisions that led to the nodal problem, and the information needed to perform sensitivity analysis on the nodal LP, including the optimal basis and dual information. The auxiliary information stored includes the upper and lower bounds on the tree and the best feasible solution found so far. After the warm-start description is stored from the initial solve, the user can modify the parameters used to solve the initial problem and make a call to sym_warm_solve() to solve the modified instance.


In recent years there has been renewed interest in Mixed Integer Non-Linear Programming (MINLP) problems. This can be explained by several factors: (i) the performance of solvers handling non-linear constraints has largely improved; (ii) the awareness that most real-world applications can be modeled as MINLP problems; (iii) the challenging nature of this very general class of problems. It is well known that MINLP problems are NP-hard because they generalize MILP problems, which are NP-hard themselves. This means that it is very unlikely that a polynomial-time algorithm exists for these problems (unless P = NP). However, MINLPs are, in general, also hard to solve in practice. We address non-convex MINLPs, i.e., those having non-convex continuous relaxations: the presence of non-convexities in the model usually makes these problems even harder to solve. Until recent years, the standard approach for handling an MINLP problem has basically been to solve an MILP approximation of it; in particular, linearization of the non-linear constraints can be applied. The optimal solution of the MILP might be neither optimal nor feasible for the original problem if no assumptions are made on the MINLP. Another possible approach, if one does not need a proven global optimum, is applying algorithms tailored for convex MINLPs, which can be used heuristically to solve non-convex MINLPs. A third approach to handling non-convexities is, if possible, to reformulate the problem in order to obtain a special case of MINLP problems. Exact reformulation can be applied only in limited cases of non-convex MINLPs, and yields an equivalent linear/convex formulation of the non-convex MINLP. The last approach, covering a larger subset of non-convex MINLPs, is based on the use of convex envelopes or underestimators of the non-convex feasible region.
This yields a lower bound on the non-convex MINLP optimum that can be used within an algorithm such as the widely used Branch-and-Bound specialized versions for Global Optimization. It is clear that, due to the intrinsic complexity from both practical and theoretical viewpoints, these algorithms are usually suited only to small and medium size problems.
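As a concrete instance of the envelope-based approach, the standard McCormick underestimator of a bilinear term w = x·y over a box can be written in a few lines. This is a textbook construction, sketched here for illustration; it is not claimed to be the specific relaxation used in any of the works discussed above:

```python
def mccormick_lower(x, y, xL, xU, yL, yU):
    """Convex (McCormick) underestimator of the bilinear term x*y on the
    box [xL, xU] x [yL, yU]: pointwise max of the two supporting planes."""
    return max(xL * y + yL * x - xL * yL,
               xU * y + yU * x - xU * yU)

# On [0,1]^2 the underestimator lies below x*y in the interior of the box
# and coincides with it at the corners.
print(mccormick_lower(0.5, 0.5, 0, 1, 0, 1))   # 0.0, while x*y = 0.25
print(mccormick_lower(1.0, 1.0, 0, 1, 0, 1))   # 1.0, tight at the corner
```

Minimizing such underestimators in place of the original non-convex terms is what produces the lower bound exploited by spatial Branch-and-Bound schemes.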


5.1 Converting the Boolean System into a Set of Linear Constraints
The main part of translating a Boolean equation system into a mixed integer programming problem is transforming the Boolean equations into linear constraints. This is the only part where the Boolean equation system is directly used in the modeling of the mixed integer programming problem. The translation from the set of Boolean equations into a set of linear constraints is basically performed in two steps. First, after a possible preprocessing of the Boolean equation system, the Boolean equations are converted into equations over the reals or the integers. This yields a system of non-linear equations. The next step is to linearize these equations by replacing non-linear terms with new variables and additional inequality constraints that ensure the new variables behave according to the non-linear terms they replace. Depending on the choice of the conversion method we obtain a different set of constraints. Furthermore, the conversion method implies integrality restrictions on some variables. Thus, the choice of the conversion method together with the set of Boolean equations mainly determines the feasible set.
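As a small illustration of such a linearization, the Boolean equation z = x ∧ y can be replaced by the linear constraints z ≤ x, z ≤ y and z ≥ x + y − 1 on binary variables. The sketch below (hypothetical names, not from the text) verifies by enumeration that the binary points satisfying these constraints are exactly those with z = x·y:

```python
from itertools import product

def and_constraints(x, y, z):
    """Linearization of z = x AND y on binaries:
    z <= x, z <= y, z >= x + y - 1."""
    return z <= x and z <= y and z >= x + y - 1

# The feasible binary points are exactly those with z == x*y.
feasible = {(x, y, z) for x, y, z in product((0, 1), repeat=3)
            if and_constraints(x, y, z)}
assert feasible == {(x, y, x * y) for x, y in product((0, 1), repeat=2)}
print(sorted(feasible))   # [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
```

Other connectives linearize similarly (e.g. z = x ∨ y via z ≥ x, z ≥ y, z ≤ x + y), which is the kind of term replacement the second step above performs.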


Selected test problems of MIPLIB 2010 are included in the benchmark test set. This set contains 87 problems, all of which are in the “easy”
group. In our study we work with a subset of the benchmark set that contains 30 problems; the main reason for this reduction is the space limit for the paper. Our data set is presented in Tab. I. The first column of this table contains the identification of the instance as it is denoted in the MIPLIB library. The next five columns give the number of constraints (Rows), the total number of variables (Columns), and the numbers of integer, binary and continuous variables of the instance.

Second, our convergence proof relied on norm equivalence in finite-dimensional space. Care must be exercised in defining a function space framework that ensures that the embedding into the space defining the linear model is continuous.
Furthermore, an interesting algorithmic enhancement can be obtained by considering an SLPECEQP algorithm with Newton lifting for problems with many intermediate variables. The lifted Newton approach [AD10] has been proven to be advantageous for optimization problems with a tree structure of intermediate variables. Such a structure is present in many real-world problems, and in particular arises in a natural way in problems obtained as multiple shooting discretizations of optimal control problems. Efficient exploitation of the structure of lifted Newton problems is possible at both the level of the LPEC part and the EQP part of the algorithm.


Chapter 6
Computational Results and Discussion
In this chapter we present computational results which illustrate the effectiveness of the methods developed in chapters 4 and 5. The methods developed in chapter 4 for solving MIQPs are tested in section 6.1. Randomly generated problems with three different types of constraints are used as test problems. The problems are generated in such a way that we can specify that the Hessian should be either invertible, singular or have a positive definite n_c-th principal leading submatrix. When the Hessian is invertible we can also specify the percentage of negative eigenvalues. This allows us to test all three of the methods developed in chapter 4. The methods developed in chapter 5 are tested in section 6.2. Thirty different test functions were used to generate test problems. Some of these functions are only defined for fixed n while others are defined for n > 0. These test functions were used to generate a total of 118 test problems for the derivative free algorithms in chapter 5. The test problems were also solved using the state of the art derivative free MINLP solver NOMAD. Some concluding remarks are made in section 6.3.


The last step is to perform the post-optimization phase to obtain adjusted reconfiguration rates, so as to saturate PM capacity whenever reconfiguration is applied. To reintroduce the continuous variable ρ_p, we fix the assignment variables x_{vp} to the values found by the previous MILP, which define the sets V(p), and likewise fix the time variable t_p to the value found. Note that the time objective does not appear in the cost function, as it is upper bounded, and that considering the time variable would yield a product of continuous variables t_p ρ_p. Hence only the energy and dynamism objectives could remain. However, the dynamism objective would tend to decrease the reconfiguration rate even if the capacity constraint is not saturated, which is not permitted. We therefore keep only the energy objective and add a maximization objective on the reconfiguration rate with a weight W set up in such a way that this objective has less importance than the energy one. Observing the obtained MILP shows that, as the capacity constraints are satisfied by the assignment, the solution resorts to selecting the largest reconfiguration value and the smallest π_f satisfying the capacity constraint, which naturally ensures that the capacity constraints are saturated if the reconfiguration rate is strictly smaller than 1.


Based on the above analysis, we argue that the Greek wholesale electricity market is a day-ahead mandatory pool scheme that provides a day-ahead firm price based upon the supply/demand balance, ensuring efficient short-term dispatch taking into account generation unit constraints, reserve requirements and a simplified transmission system zonal constraint mechanism. The day-ahead procedure (Day Ahead Market Clearing) produces an SMP for each settlement period (one hour) and a 24-hour production schedule for each unit. The solution of the day-ahead procedure is based on the co-optimization of the energy offers (energy market) and reserve offers (balancing market) in order to satisfy the energy demand and reserve requirements, while the transmission system zonal constraint mechanism introduces an additional constraint. A regulated SMP price cap will be determined in order to prevent excessive price spikes in the event that insufficient capacity is declared available to meet the demand. Offers will be firm at the day-ahead market.


We have developed an online tool for our proposed algorithms.
The online tool provides many features, such as exclusion of adverse drug-drug interactions, a constraint on the number of drugs, iterative search, a highly efficient solver and email notification. In addition, the online tool allows the user to assign the weight balance between the on-target and off-target sets, as well as to choose how to handle drugs with opposite actions on input genes. The online tool is also flexible enough to include new constraints, such as the importance weight of input disease genes and the penalty weight of potential off-target genes. The availability of the online tool will make our algorithm accessible not only to computational biologists but also to bench scientists.

In some practical applications, expensive resource and time consumption seriously obstructs exact computation of the minimum, motivating many heuristic and approximate algorithms. A computationally cheap heuristic for MILP is to accept the rounding of the corresponding LP solution. Though the rounded solution is often not even feasible, it has inspired many algorithms with supplementary constraints, such as local branching [25].
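A minimal sketch of such a rounding heuristic, with hypothetical data and names: round the integer components of an LP solution and check feasibility of Ax ≤ b, yielding the (possibly infeasible) starting point that schemes like local branching then try to repair:

```python
def round_and_check(x_lp, A, b, integer_idx):
    """Naive rounding heuristic: round the integer components of an LP
    solution x_lp to the nearest integer and test feasibility of A x <= b.
    Returns (x_rounded, feasible)."""
    x = list(x_lp)
    for j in integer_idx:
        x[j] = round(x[j])
    feasible = all(sum(aij * xj for aij, xj in zip(row, x)) <= bi + 1e-9
                   for row, bi in zip(A, b))
    return x, feasible

# Hypothetical 2-variable example: x1 + x2 <= 3 with x1 integer.
x, ok = round_and_check([1.4, 1.2], [[1, 1]], [3], integer_idx=[0])
print(x, ok)   # [1, 1.2] True
```

When `feasible` comes back False, a supplementary constraint restricting the search to a neighborhood of the rounded point (the local branching idea) can guide the MILP solver toward a nearby feasible solution.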

In many practical construction projects it is possible to perform individual activities in alternative ways (modes). In addition to changing the crew formation, the substitution of construction methods or materials can also be profitable for contractor and owner. The modes differ in processing time, lags between activities, and resource requirements, so they affect the project’s outcomes. Therefore, project scheduling problems with multiple technological modes aim at finding the optimal order of activities, the start times of each activity, and the execution modes for all activities in a project while satisfying a set of precedence and resource constraints.

of raw material, emissions restriction policies, product demand for subsequent planning periods, etc. This kind of uncertainty is more likely to appear on the right-hand side (RHS) of the constraints and in the objective function’s coefficients (OFC). Consideration of uncertainty in process systems engineering is of great importance, as uncertainty can endanger the optimality or even the feasibility of a solution that was computed in a deterministic way (Apap and Grossmann, 2017; Sahinidis, 2004). In an effort to avoid such occasions, a number of mathematical formulations and solution techniques have been proposed in the literature with the goal of creating models which are robust towards uncertainty. Stochastic programming (Apap and Grossmann, 2017; Bertsimas and Sim, 2004; Birge and Louveaux, 2011) relies on the availability of historical data which can provide statistical information about the behaviour of uncertain parameters. In stochastic programming, the unknown parameters are assumed to follow a discrete probability distribution and the decision variables are classified into two groups: “here and now” and “wait and see”. Depending on the stages at which the uncertainty is expected to be revealed, the mathematical program is referred to as “two-stage” or “multi-stage”, with the objective to minimise the cost of the initial actions. Robust optimisation (RO) assumes that all constraints of the optimisation should never be violated and aims to provide a solution that is feasible regardless of the extent of the actual uncertainty. Because of that, RO is of-


4 Conclusions
The problem of selecting relevant variables for clustering has been formulated as a combinatorial optimization model in this paper. The model is solved with integer linear programming or heuristic methods to determine the best variable selection subroutine for a clustering application. Extensive tests on simulated data provided evidence that the approach can determine the relevant features and improve the clustering quality. Future research can be devoted to improving some computational issues of the problem. For example, the radius formulation is a methodology that has a strong connection with the pseudo-boolean representation of the objective function, see [1, 11, 12]. In this way, one can refine the MILP formulation using even fewer coefficients and constraints, as proved and experimented with in a similar problem in [24]. Moreover, the problem of selecting relevant features is not only important in clustering, but also in other statistical techniques such as classification or supervised learning, see [25, 48], support vector machines, see [34], and linear regression, see [27]. It is likely that the methods developed here can be modified to fit these relevant applications.


Here, Belotti also describes the tree search method, which is one of the most widely used methods for solving MINLP problems. Tree search methods have been classified into single-tree and multi-tree methods. These two classes of methods solve problems involving convex functions. The classical single-tree method uses the nonlinear branch-and-cut, cutting plane, and branch-and-bound methods for solving MINLP problems. The conventional multi-tree approach comprises both the outer approximation and Benders decomposition methods, in order to obtain a global optimal solution for the considered problem. By combining the above approaches, a new hybrid method has been proposed in the literature for solving convex MINLP problems more effectively, in terms of computation time and the quality of the optimal solutions. In [11], Bonami et al. provide a summary of various convex MINLP algorithms and the software used to solve multiple MINLP problems. Nonconvex MINLP problems are much more challenging to solve because they involve nonconvex objective functions and nonlinear constraints. Methods like spatial branch-and-bound, piecewise linear approximation, and generic relaxation strategies are used to solve nonconvex MINLP problems. In [12], Tawarmalani and Sahinidis explain global optimization theory for solving MINLP problems using different algorithms and solvers.


t_i^{an} − t_j^{ak} ≥ −M (1 − ρ_{ji})   ∀ j ∈ A, i ∈ D   (32)
Figure 4: Multiple runway entrance and exit points
7. Capacity Constraints for Runway Crossing Queue
At Dallas/Fort Worth International Airport there are four exit edges off the arrival runway 17C. Each of these edges has a capacity constraint so that it does not exceed a prescribed capacity e_l. If c_l denotes a runway crossing node (Fig. 4), then arrival aircraft which use edge l must leave that node before the edge capacity e_l is violated; e_l is the capacity, given in units of aircraft, for the l-th taxi exit. In order to formulate this mathematically, it is easier to introduce the time T_{ij}^l. T_{ij}^l is equivalent to ETD_j, with the additional requirements that i lands before j, that i and j both use the same edge l, and that e_l aircraft arrive between i and j.


Primal Decomposition and Constraint Generation for Asynchronous Distributed Mixed-Integer Linear Programming
Andrea Camisa, Giuseppe Notarstefano
Abstract— In this paper, we deal with large-scale Mixed Integer Linear Programs (MILPs) with coupling constraints that must be solved by processors over networks. We propose a finite-time distributed algorithm that computes a feasible solution with suboptimality bounds over asynchronous and unreliable networks. As shown in a previous work of ours, a feasible solution of the considered MILP can be computed by resorting to a primal decomposition of a suitable problem convexification. In this paper we reformulate the primal decomposition resource allocation problem as a linear program with an exponential number of unknown constraints. Then we design a distributed protocol that allows agents to compute an optimal allocation by generating and exchanging only a few of the unknown constraints. Each allocation is iteratively used to compute a candidate feasible solution of the original MILP. We establish finite-time convergence of the proposed algorithm under very general assumptions on the communication network. A numerical example corroborates the theoretical results.
