6 Conclusion
We have described the progress of integer linear programming algorithms that has flourished in the past couple of years. Eisenbrand et al.'s insight of using the Steinitz lemma led to great improvements over the long-held bound established by Papadimitriou. Jansen et al. built on Eisenbrand et al.'s work to achieve a running time as good as O(mΔ)^{2m} + LP. Fomin et al. [4] showed that, assuming the exponential time hypothesis, Jansen et al.'s results are nearly optimal. In application, these results give state-of-the-art bounds for problems with a limited number of constraints, such as the knapsack problem.
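As a toy illustration of why few constraints help, an ILP with a single equality constraint can be solved by a dynamic program over the right-hand side. The sketch below is our own illustration (not the Steinitz-lemma-based algorithm from the papers above) and runs in O(n·b) time for n variables and right-hand side b.

```python
def single_constraint_ilp(c, a, b):
    """max c.x  subject to  a.x = b, x >= 0 integer.

    Dynamic program over right-hand sides 0..b: a toy illustration of
    why ILPs with few constraints and small coefficients are tractable.
    """
    NEG = float("-inf")
    best = [NEG] * (b + 1)   # best[r] = optimal value for right-hand side r
    best[0] = 0
    for r in range(1, b + 1):
        for cj, aj in zip(c, a):
            if 0 < aj <= r and best[r - aj] > NEG:
                best[r] = max(best[r], best[r - aj] + cj)
    return best[b]  # -inf signals infeasibility
```

With m constraints the same table would be indexed by m-dimensional right-hand sides, which is where bounds of the form O(mΔ)^{2m} come from.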


5.2 Future Research
Many more open questions remain. One is to delve deeper into the structure of the value function of an MILP and to determine whether the procedures suggested here for obtaining the value function of an MILP with a single constraint can be extended to the general case in a practical way. A related, and perhaps more important, direction is to seek more efficient algorithms for the subproblems solved at each iteration of the dual methods proposed in Section 3.4 for large-scale problems. In parallel, it is not yet known how to obtain a better-structured approximation of the value function that can be easily integrated with current optimization techniques. Finally, even though we have described possible ways of selecting a good warm-start tree to initiate the solution procedure of a modified instance, determining the best one remains an open question. In this sense, there are many opportunities to improve the warm-starting techniques presented in this study.


4.4.3 Algorithm Control
CBC is one of the most full-featured black-box solvers reviewed here in terms of available techniques for improving bounds. The lower bound can be improved through the generation of valid inequalities using all of the separation algorithms implemented in the CGL. Problem-specific methods for generating valid inequalities can be implemented, but column generation is not supported. CBC has a logical preprocessor to improve the initial model, and it tightens variable bounds using reduced-cost information. Several primal heuristics for improving the upper bound are implemented and provided as part of the distribution, including a rounding heuristic and two different local search heuristics. The default search strategy in CBC is to perform depth-first search until the first feasible solution is found, and then to select nodes for evaluation based on a combination of bound and number of unsatisfied integer variables. New search strategies can also be specified easily.
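The bound-driven node-selection phase described above can be sketched in miniature. The following hypothetical branch-and-bound for a 0/1 knapsack uses a best-bound priority queue with the greedy fractional relaxation as the bound; it illustrates the general idea, not CBC's actual implementation (which also dives depth-first to a first incumbent).

```python
import heapq

def knapsack_branch_and_bound(values, weights, cap):
    """Best-bound branch-and-bound for the 0/1 knapsack problem.

    Nodes are expanded in order of their LP (fractional) bound; a node
    whose bound cannot beat the incumbent is pruned. Toy sketch only.
    """
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]

    def frac_bound(k, val, wt):
        # greedy fractional completion = LP relaxation bound for knapsack
        for i in range(k, n):
            if wt + w[i] <= cap:
                wt += w[i]
                val += v[i]
            else:
                return val + v[i] * (cap - wt) / w[i]
        return val

    best = 0
    heap = [(-frac_bound(0, 0, 0), 0, 0, 0)]  # (-bound, next item, value, weight)
    while heap:
        negb, k, val, wt = heapq.heappop(heap)
        if -negb <= best:
            break  # best remaining bound cannot improve the incumbent
        if k == n:
            best = max(best, val)  # leaf: feasible integer solution
            continue
        # branch on item k: exclude it / include it (if it fits)
        heapq.heappush(heap, (-frac_bound(k + 1, val, wt), k + 1, val, wt))
        if wt + w[k] <= cap:
            heapq.heappush(heap, (-frac_bound(k + 1, val + v[k], wt + w[k]),
                                  k + 1, val + v[k], wt + w[k]))
    return best
```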


Artificial Data Set. We created two artificial hierarchical clustering data sets with a known ground-truth hierarchy, so that we could use them to precisely answer some important questions about our new hierarchical clustering formulation. The first hierarchy we created had 80 instances and 4 levels, was a balanced tree, and did not have overlapping clusterings. We used this data set to evaluate our basic ILP formulation, which enforced transitivity (Figure 2), and compared the results with standard hierarchical clustering algorithms (single, complete, UPGMA, WPGMA). The distance between two points a and b was measured as a function of the level of their first common ancestor. We increased the difficulty of the problem by adding increasing amounts of uniform error to the distance matrix. Figure 4 shows the results and demonstrates our method's ability to outperform all standard agglomerative clustering algorithms for standard hierarchical clustering (no overlapping clusters); we believe this is because we solve a global optimization that is less affected by noise than algorithms that use greedy search heuristics. The F1 score was calculated by creating the set of all clusters generated within the learned and true dendrograms, and matching the learned clusters with the best true clusters. The experiment was repeated ten times for each error-factor level, and the results are shown in Figure 4.
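The cluster-matching F1 described above can be sketched as follows; this is our own reconstruction of the scoring, not the authors' code, and the function names are hypothetical.

```python
def f1(a, b):
    """F1 overlap between two clusters given as collections of item ids."""
    a, b = set(a), set(b)
    tp = len(a & b)
    if tp == 0:
        return 0.0
    p = tp / len(a)   # precision of learned cluster a w.r.t. true cluster b
    r = tp / len(b)   # recall
    return 2 * p * r / (p + r)

def dendrogram_f1(learned_clusters, true_clusters):
    """Match each learned cluster to its best-matching true cluster
    and average the per-cluster F1 scores."""
    return sum(max(f1(c, t) for t in true_clusters)
               for c in learned_clusters) / len(learned_clusters)
```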

Meet et al. [14] first studied the cluster hiring problem in a network, where the aim is to hire a group of experts for a set of projects so as to produce the maximum profit; each project has an associated revenue, which is obtained when the project completes. In their paper, they also study various attributes associated with a team and its members, such as the collaboration between two experts, the capacity of an individual expert, and the salaries of the experts. They also ensure that the hiring cost of the experts stays within the budget assigned by the financial department of the company. They further propose two greedy algorithms, whose results were obtained through experiments on a large graph of experts.


We used the same set of 100 randomly generated BMILP instances from [45], except that all variables at both levels are required to be pure integers. These 100 instances were divided into ten groups, each containing ten instances of the same dimension. All algorithms were implemented in Matlab using TOMLAB/CPLEX as the ILP solver. The computational experiments were executed on a desktop computer with 3 GB of RAM and a 2.4 GHz CPU. Computational results are summarized in Table 2. The computation times reported in Table 2 are averages over the ten instances within each group. For each group of instances, using each traversing strategy of each algorithm, three time values are reported from top to bottom: how long it took to find the first bilevel feasible solution, to find the optimal solution, and to confirm its optimality. The last row of the table reports the average performance over all 100 instances. All average times are rounded to the nearest second. When an algorithm could not finish solving an instance within 999 seconds, we terminated the program prematurely and used 999 seconds for the average-time calculation, with “>” as a prefix.


The goal of this work is to further our current understanding of the computational nature of non-projective parsing algorithms for both learning and inference within the data-driven setting. We start by investigating and extending the edge-factored model of McDonald et al. (2005b). In particular, we appeal to the Matrix Tree Theorem for multi-digraphs to design polynomial-time algorithms for calculating both the partition function and edge expectations over all possible dependency graphs for a given sentence. To motivate these algorithms, we show that they can be used in many important learning and inference problems, including min-risk decoding, training globally normalized log-linear models, syntactic language modeling, and unsupervised learning.
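The partition-function computation via the Matrix Tree Theorem admits a compact sketch. For an edge-weight matrix W with W[h, m] the weight of attaching modifier m to head h, the sum over all spanning arborescences of the product of edge weights is the determinant of a Laplacian minor. The code below is our own illustration of the theorem, not the authors' implementation.

```python
import numpy as np

def partition_function(W, root=0):
    """Sum over all spanning arborescences rooted at `root` of the
    product of edge weights, via the directed Matrix Tree Theorem.

    W[h, m] is the weight of the edge head h -> modifier m; the
    diagonal is ignored. With all weights 1 this counts arborescences.
    """
    n = W.shape[0]
    W = W.astype(float).copy()
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=0)) - W          # in-degree weighted Laplacian
    keep = [i for i in range(n) if i != root]
    return float(np.linalg.det(L[np.ix_(keep, keep)]))
```

For the complete digraph on n nodes with unit weights this returns n^(n-2), the number of labeled trees rooted at a fixed node, which is a handy sanity check.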

Generally, this approach seems similar to algebraic attacks, because our targets are cryptographic algorithms that can be described as a system of sparse equations. However, mixed-integer programming is far more flexible than algebraic attacks, as we can solve over- or underdetermined systems. Furthermore, the fact that we consider an optimization problem makes it easy to incorporate probabilistic equations, which makes this attack suitable for side-channel attacks, noisy keystreams, and general algebraic attacks where we can find additional equations that do not hold in all cases. We therefore think that mixed-integer programming is a valuable technique in cryptanalysis, but its real potential in different scenarios remains to be examined.


We also consider in our tests six instances of the GAP from the OR-Library. All instances gapi, i = 1, . . . , 6 have the same size: 5 agents and 15 jobs. The master and pricing problems are solved by the CPLEX 11.2 solver. Table 2 shows a comparison between the performance of the LR and LD algorithms when, for LR, we apply the second classical Dantzig–Wolfe decomposition by relaxing the capacity constraints (cf. Section 4.2).


Our planning consisted of two-week development phases, in each of which we developed a core element of our solver. This has indeed been the case: we started with the basics and kept adding features or improving the existing code. We have reached the end of this project with a working SAT-style solver for integer linear programming problems. Furthermore, as we have seen in the results section, IntSat's performance has generally been better than CutSat's. There are, of course, some exceptions, but the experiments have given us positive results. With all of this, we can safely state that we have accomplished our first objective: to build a working solver at least as good as CutSat. Our second objective has been accomplished along the way: our solver is quite modifiable and easy to experiment on.


have been studied previously, and one solution approach is to use the Alternating Method [3, 9]. If A and B are integer matrices, then the solution found by the Alternating Method is integer; however, this cannot be guaranteed if A and B are real.
In Section 3, we show that we can adapt the Alternating Method to obtain algorithms that determine whether integer solutions to these problems exist for real matrices A and B, and find one if it exists. Note that various other methods for solving a TSS are known [1, 5, 12], but none of them has been proved polynomial, and there is no obvious way of adapting them to integrality constraints. In Section 4, we show that, for a certain class of matrices representing a generic case, the problem of finding an integer solution to both systems can be solved in strongly polynomial time, and we give a method for this case.


x ∈ R^n. (1.25)
NLP has been studied for essentially as long as LP, and just as in the case of MINLPs, a distinction between convex and non-convex NLPs can be made.
Convex NLPs are easier to handle because, roughly speaking, any local minimum of a convex function is automatically a global one. This is why, in the context of (MI)NLPs, the distinction between locally optimal and globally optimal solutions is often made. General-purpose algorithms that have been proposed for NLPs can again be divided into two categories. There are algorithms that converge to local minima, for example the very popular class of interior-point algorithms. It should be clear from the above that such an algorithm automatically solves a convex NLP to global optimality. Such algorithms can also be implemented efficiently in practice, and the respective codes are often called local NLP solvers. For non-convex NLPs, in contrast, spatial branching is usually required to solve a problem, and we will illustrate this concept in Section 1.2.4. Such approaches are exploited in the second category of algorithms for NLP, which attempt to converge to a globally optimal solution regardless of whether the NLP at hand is convex or non-convex.
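The claim that a local method suffices for convex problems can be seen with plain gradient descent on a convex quadratic; the sketch below is a toy illustration of the principle, not one of the interior-point codes mentioned above.

```python
def gradient_descent(grad, x0, lr=0.1, steps=500):
    """Plain gradient descent. For a convex objective, any stationary
    point it settles into is a global minimum, so this 'local' method
    solves convex NLPs to global optimality."""
    x = list(x0)
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

# convex example: f(x, y) = (x - 1)^2 + (y + 2)^2, global minimum at (1, -2)
grad_f = lambda p: [2 * (p[0] - 1), 2 * (p[1] + 2)]
x_star = gradient_descent(grad_f, [10.0, 10.0])
```

For a non-convex objective the same loop can stall in whichever local minimum the starting point happens to lie near, which is exactly why spatial branching is needed in the non-convex case.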


WinQSB is a software package that includes quantitative tools used in decision support; it allows solving various operational research problems, in addition to statistical and simulation models. Among these tools is Linear and Integer Programming, which could be used to solve our problem.
Evolver is a Palisade Corporation product, part of the Palisade DecisionTools suite used in decision support. Evolver can solve various problems using genetic algorithms. After shaping the model in Excel, which includes the fitness function, variables, and constraints, and then specifying the model in Evolver, we start the task of applying the repeated loop of genetic-algorithm operations: selection, reproduction, mutation, and replacement. Termination can be either automatic, according to chosen parameters, or manual, according to our judgment. The results obtained are as follows:
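The selection–reproduction–mutation–replacement loop described above can be sketched generically; this is a hypothetical minimal genetic algorithm of our own, not Evolver's implementation, shown here on a bit-string maximization problem.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, gens=100,
                      p_mut=0.02, seed=0):
    """Minimal GA: tournament selection, one-point crossover,
    bit-flip mutation, and generational replacement with elitism."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            # selection: binary tournament
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)           # reproduction: crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut)    # mutation: bit flips
                     for bit in child]
            children.append(child)
        children[0] = max(pop, key=fitness)          # replacement with elitism
        pop = children
    return max(pop, key=fitness)

# maximize the number of 1-bits ("onemax") as a stand-in fitness function
best = genetic_algorithm(sum, n_bits=20, pop_size=30, gens=100, seed=1)
```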

or the defender will reveal some information about their risk preferences, but this information may not be complete [64].
In this chapter, we focus on a situation in stochastic network interdiction where the defender's incomplete preference information can be revealed from historical data on decisions they have made in the past. We propose two different approaches to tackle this problem. The first approach follows the traditional decision-analytic perspective, which separates utility learning and optimization. Specifically, we first fit a piecewise linear concave utility function according to the information revealed by the historical data, and then optimize the interdiction decision that maximizes expected utility under this utility function. The second approach integrates utility estimation and optimization by modeling the utility ambiguity under a robust optimization framework, following [4] and [44]. Instead of pursuing the utility function that best fits the historical data, this approach exploits information from the historical data to formulate constraints on the form of the utility function, and then finds an optimal interdiction solution for the worst-case utility among all feasible utility functions. The approaches proposed in this chapter forge an interdisciplinary connection between the areas of network interdiction and decision analysis. We use synthetic data to model empirically developed knowledge about the decision-makers' risk preferences.


In recent years there has been renewed interest in Mixed Integer Non-Linear Programming (MINLP) problems. This can be explained by several factors: (i) the performance of solvers handling non-linear constraints has improved greatly; (ii) the awareness that many real-world applications can be modeled as MINLP problems; (iii) the challenging nature of this very general class of problems. It is well known that MINLP problems are NP-hard, because they generalize MILP problems, which are NP-hard themselves. This means it is very unlikely that a polynomial-time algorithm exists for these problems (unless P = NP). However, MINLPs are, in general, also hard to solve in practice. We address non-convex MINLPs, i.e., those having non-convex continuous relaxations: the presence of non-convexities in the model usually makes these problems even harder to solve. Until recently, the standard approach for handling MINLP problems has essentially been to solve an MILP approximation of the problem; in particular, linearization of the non-linear constraints can be applied. The optimal solution of the MILP might be neither optimal nor feasible for the original problem if no assumptions are made on the MINLP. Another possible approach, if one does not need a proven global optimum, is to apply algorithms tailored for convex MINLPs, which can be used heuristically to solve non-convex MINLPs. The third approach to handling non-convexities is, if possible, to reformulate the problem in order to obtain a special case of MINLP. Exact reformulation can be applied only in limited cases of non-convex MINLPs, and yields an equivalent linear/convex formulation of the non-convex MINLP. The last approach, applicable to a larger subset of non-convex MINLPs, is based on the use of convex envelopes or underestimators of the non-convex feasible region.
This yields a lower bound on the non-convex MINLP optimum that can be used within an algorithm such as the widely used Branch-and-Bound versions specialized for Global Optimization. It is clear that, due to the intrinsic complexity from both practical and theoretical viewpoints, these algorithms are usually suitable only for solving small- to medium-size problems.
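For the envelope-based approach, the classic building block is McCormick's relaxation of a bilinear term w = x·y over a box. The helper below (our own naming, a sketch rather than solver code) evaluates the resulting under- and over-estimators at a point.

```python
def mccormick_envelope(x, y, xl, xu, yl, yu):
    """McCormick envelopes for the bilinear term w = x*y on the box
    [xl, xu] x [yl, yu]. Any feasible w satisfies lo <= w <= hi, and
    the envelopes are tight at the corners of the box."""
    lo = max(xl * y + x * yl - xl * yl,   # w >= xl*y + x*yl - xl*yl
             xu * y + x * yu - xu * yu)   # w >= xu*y + x*yu - xu*yu
    hi = min(xu * y + x * yl - xu * yl,   # w <= xu*y + x*yl - xu*yl
             xl * y + x * yu - xl * yu)   # w <= xl*y + x*yu - xl*yu
    return lo, hi
```

Replacing each bilinear term by these four linear inequalities turns the non-convex relaxation into a linear one whose optimum lower-bounds the MINLP optimum, which is exactly what spatial Branch-and-Bound exploits.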


Another parameter under consideration for integer programming is the dual treedepth of the constraint matrix A. As we are only interested in giving a comparison to other algorithms in the literature, we omit a thorough discussion; see e.g. [27] for the necessary framework. Using this framework, a clean recursive definition of dual treedepth can be written as follows. The empty matrix has dual treedepth 0. A matrix A with dual treedepth d > 0 is either block-diagonal with the maximal dual treedepth of its blocks equal to d, or there is a row such that, after its deletion, the resulting matrix has dual treedepth d − 1. The necessary structure, i.e., which row of A should be deleted, can be determined via preprocessing whose running time is negligible; for simplicity, we assume this knowledge is available to us. It is known, and can be proven with Proposition 6.1, that the Graver basis
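The recursive definition above translates directly into a brute-force computation for tiny matrices; the sketch below is our own exponential-time illustration, not the efficient preprocessing mentioned in the text.

```python
def dual_treedepth(rows):
    """Dual treedepth of a matrix given as a list of rows, following the
    recursive definition: empty matrix -> 0; block-diagonal -> max over
    blocks; otherwise 1 + best single-row deletion."""
    rows = [tuple(r) for r in rows]
    if not rows:
        return 0
    # split rows into blocks: two rows are connected if they share a
    # column in which both have a nonzero entry
    supports = [frozenset(j for j, a in enumerate(r) if a != 0) for r in rows]
    comp = list(range(len(rows)))
    def find(i):
        while comp[i] != i:
            i = comp[i]
        return i
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            if supports[i] & supports[j]:
                comp[find(i)] = find(j)
    groups = {}
    for i in range(len(rows)):
        groups.setdefault(find(i), []).append(rows[i])
    if len(groups) > 1:
        # block-diagonal case: maximum over the blocks
        return max(dual_treedepth(g) for g in groups.values())
    # otherwise try every row deletion and recurse
    return 1 + min(dual_treedepth(rows[:i] + rows[i + 1:])
                   for i in range(len(rows)))
```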


since no academic license is available for BARON. SCIP and BARON were both run with their default settings; the default absolute and relative convergence tolerances of SCIP are both 10^−6, while the default absolute and relative convergence tolerances of BARON are 10^−9 and 0.1, respectively. The absolute convergence tolerance tol in Algorithm 2 was set to 0.01. The fact that the convergence tolerances differ is irrelevant, since we do not directly compare different algorithms in this section. We recall that the transformed problem has at most (n_c^2 − n_c)/2 + n_c·n_d fewer bilinear terms than the original problem. Clearly, as n_c increases, the reduction in the number of bilinear terms increases, so we expect the transformation to become more effective as n_c increases. We therefore consider three different sets of test problems for each algorithm: the first containing problems with n_c = n_d, the second containing problems with n_c > n_d, and the third with n_c < n_d. We shall not consider problems with n_c = 0 or n_d = 0, since these are no longer mixed problems and specialised solution methods have been developed to solve them.


ample contain cycles although they should be DAGs.
Most approaches have circumvented this problem by performing global decoding over local scores and by imposing specific constraints on decoding. However, those constraints were mostly limited to the production of maximum spanning trees, not full DAGs. We also perform global decoding, but we use Integer Linear Programming (ILP) with an objective function and constraints that allow non-tree DAGs.


The Computational Infrastructure for Operations Research (COIN-OR) project promotes the development and use of interoperable, open-source software for operations research. Furthermore, it is a repository consisting of several projects.


In this paper we present the first discourse parser, to the best of our knowledge, that is able to predict non-tree DAG structures. We use Integer Linear Programming (ILP) to encode both the objective function and the constraints, performing global decoding over local scores. Our underlying data come from multi-party chat dialogues, which require the prediction of DAGs. We use the dependency parsing paradigm, as has been done in the past (Muller et al., 2012; Li et al., 2014; Afantenos et al., 2015), but we use the underlying formal framework of SDRT and exploit SDRT's notions of left and right distributive relations. We achieve an F-measure of 0.531 for fully labeled structures, which beats the previous state of the art.
