We introduced a new algorithm, discriminant ECOC, for designing compact error-correcting output codes. The result is a multiclass classifier that runs faster, since it uses fewer classifiers, and requires less training time, while maintaining, and in some cases improving on, the performance of other ECOC approaches. This methodology is also the first to deal successfully with the design of application-dependent discrete ECOC matrices. The discriminant ECOC algorithm has been applied successfully to two problems: first, the UCI database for validation purposes and, second, a real computer vision application, traffic sign recognition. From the different experiments, we observe that the building process of the ECOC matrix is of great importance. We can
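Although the construction of the ECOC matrix is the focus above, the way a discrete ECOC matrix is used at test time can be sketched as follows. The 4-class coding matrix and the Hamming decoding rule below are illustrative assumptions, not the compact matrices produced by discriminant ECOC.

```python
import numpy as np

# Hypothetical 4-class coding matrix: each row is the codeword of one
# class, each column defines one binary (+1/-1) classifier.
M = np.array([[ 1,  1,  1],
              [ 1, -1, -1],
              [-1,  1, -1],
              [-1, -1,  1]])

def ecoc_decode(outputs, M):
    """Return the class whose codeword is closest (Hamming) to the
    vector of binary classifier outputs."""
    dists = np.sum(outputs != M, axis=1)  # Hamming distance to each row
    return int(np.argmin(dists))

# A sample whose three binary classifiers output (+1, -1, -1)
print(ecoc_decode(np.array([1, -1, -1]), M))
```

Fewer columns mean fewer classifiers to evaluate, which is the source of the speed-up claimed above.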
We would like to treat our problem as a mix between a DVRP and an SVRP. In our multi-stage formulation, we divide the time horizon into intervals. The goal is then to make a plan for how to serve customers within the next time interval, using known, deterministic information as well as knowledge about the distributions of the stochastic elements. This plan should be feasible over the whole (remaining) time horizon. Assuming that the routes followed in subsequent time intervals are optimal, the plan should minimize the expected cost of serving all customers, stochastic as well as deterministic, before the end of the day. The cost of a plan can only be calculated after all stochastic variables are realized, and depends on the number of vehicles used as well as the total travel distance.
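The plan-selection objective described above can be written, under hypothetical notation, as a two-stage stochastic program:

```latex
\min_{x \in X_t} \; c(x) + \mathbb{E}_{\xi}\!\left[ Q(x, \xi) \right]
```

where $x$ is the plan for the next time interval, chosen from the set $X_t$ of plans feasible over the remaining horizon, $c(x)$ is its deterministic cost (a weighted sum of vehicles used and distance travelled), $\xi$ collects the stochastic customers realized later in the day, and $Q(x, \xi)$ is the optimal recourse cost of serving the remaining customers given that realization. The symbols are assumptions for illustration; the text above does not fix a notation.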
The Simulated Annealing (SA) algorithm, based on an analogy with the annealing process of metals, was applied to the PFSP by Osman and Potts. They developed a set of four SA-based heuristic algorithms for the PFSP and showed that their algorithms offer improved results compared to the NEH heuristic. Rajesh Gangadharan and Chandrasekharan Rajendran proposed simulated annealing for the permutation flow shop scheduling problem (PFSP) with the objectives of minimizing makespan and total flow time. They recommended two new heuristics to provide the seed sequences for the SA heuristic. An additional simulated annealing based heuristic with bi-criteria minimization of makespan and maximum tardiness was developed by Chakravarthy and Rajendran. Chen et al. extended a simple Genetic Algorithm (GA) for the PFSP with a variety of improvements. The initial population was generated with the CDS and RA heuristics. Only the crossover operator was used, with no mutation, and the crossover used was partially mapped crossover (PMX). Reeves also developed a GA in which the offspring generated replace not their parents but individuals with below-average fitness. He used a crossover called C1, or one-point order crossover, and a shift mutation. Two new hybrid genetic algorithms with minimization of makespan as the objective have also been proposed by Ruiz et al. for the PFSP. Their algorithms use new genetic operators, advanced techniques such as hybridization with a local search, an efficient population initialization, and a new generational scheme. Nowicki et al. proposed a fast tabu search algorithm for minimizing makespan that uses a modified NEH algorithm to obtain the initial solution. Their algorithm is based on the tabu search method with a specific neighborhood definition that exploits a 'block of jobs' notion. Computational experiments with up to 500 jobs and 20 machines show its outstanding numerical properties.
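The NEH heuristic, used above both as a benchmark and to seed initial solutions, can be sketched as follows; the toy instance data is illustrative.

```python
def makespan(seq, p):
    """Completion time of the last job on the last machine for a
    permutation flow shop. p[j][m] = time of job j on machine m."""
    m = len(p[0])
    c = [0] * m
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def neh(p):
    """NEH: order jobs by decreasing total processing time, then insert
    each job at the position of the partial sequence that minimizes
    the partial makespan."""
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [jobs[0]]
    for j in jobs[1:]:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq

# Toy instance: 4 jobs, 3 machines.
p = [[3, 4, 6], [4, 5, 5], [8, 7, 2], [5, 3, 7]]
print(neh(p), makespan(neh(p), p))
```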
Nagano et al. (2017) proposed an evolutionary clustering search (ECS) for the total tardiness blocking flow shop problem. The proposed ECS uses an NEH-based procedure to generate an initial solution, a genetic algorithm to generate solutions, and a variable neighborhood search (VNS) to improve them. The method was compared to the iterated local search (ILS) of Ribas et al. (2013). Computational tests show the superiority of the new method on the set of problems evaluated. In addition, 67 of the best-known values in the total tardiness table presented by Ribas, Companys and Tort-Martorell (2013) were updated. The first study found in the literature to address the flow shop problem with limited buffers and sequence- and machine-dependent setup times is Norman (1999). The evaluation criterion used in this study is the minimum makespan. A tabu search and two adapted constructive heuristics (NEH and PF) were presented to solve the problem, with a greedy improvement procedure added to the constructive heuristics. Nine hundred problems with varying setup times, buffer sizes, and numbers of jobs and machines were generated to evaluate the proposed methods.
Caricl et al. (2007) proposed a framework model for solving large and complex vehicle routing problems, using a script-based modelling language. The performance measures and the algorithm are explained clearly, together with a detailed description of how the framework helps in solving vehicle routing problems with time windows. The modelling language is similar to other programming languages, with its own syntax, data types, Boolean values, and control structures such as loops. The list of customers to be served at the different nodes and the list of available vehicles with their capacities are stored in a database, and attributes are assigned to them. The authors were, however, not able to solve the VRP with time windows, since in a practical, real-time setting the calculations become much more complex; due to time and other constraints, they applied the framework to the capacitated vehicle routing problem and evaluated the results. The main objective of the paper is to find the best routes by limiting the number of vehicles used and allocating the nodes to the vehicles effectively, thereby reducing transportation and other costs.
from the LP relaxation directly, without any additional effort. The algorithm is compared with the Senju-Toyoda method and KOCH. KOCH starts with a feasible solution and stops when it reaches infeasibility, while the other two start with an infeasible solution and try to reach feasibility. KOCH performs better when the constraints get tighter, in contrast to the other two. The Senju-Toyoda method is better, in terms of solution quality, when the constraints are loose. For large problems, Multi-Knap almost matches KOCH in performance and reduces the computational time significantly. In contrast to the dual gradient method, Toyoda developed a primal gradient method, which is an enhanced version of KOCH. By combining the primal gradient method of Toyoda with the greedy method for solving the standard knapsack problem, Loulou and Michaelides developed a greedy-like heuristic method. This method expands the feasible solution by including the variable with the maximum pseudo-utility instead of the effective gradient used by Toyoda. The pseudo-utility of a variable represents the profit per unit of resource consumed by this variable. Computational results show that the greedy-like heuristic method performs better than the primal dual method in terms of solution quality, although the CPU time consumed is slightly higher.
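The pseudo-utility rule described above (profit per unit of resource consumed) can be sketched for the 0-1 multidimensional knapsack. The aggregation of resource consumption into a single denominator and the instance data below are simplifying assumptions, not the exact rule of Loulou and Michaelides.

```python
def greedy_pseudo_utility(profits, weights, capacities):
    """Greedy for the 0-1 multidimensional knapsack: consider items in
    decreasing pseudo-utility (profit per unit of total resource used)
    and add each item that still fits every resource constraint.
    weights[i][k] = consumption of resource k by item i."""
    n, m = len(profits), len(capacities)
    used = [0] * m
    chosen = []
    order = sorted(range(n),
                   key=lambda i: profits[i] / max(sum(weights[i]), 1e-9),
                   reverse=True)
    for i in order:
        if all(used[k] + weights[i][k] <= capacities[k] for k in range(m)):
            chosen.append(i)
            for k in range(m):
                used[k] += weights[i][k]
    return chosen, sum(profits[i] for i in chosen)

profits = [10, 7, 6, 4]
weights = [[4, 5], [3, 2], [2, 3], [1, 1]]
capacities = [6, 6]
print(greedy_pseudo_utility(profits, weights, capacities))
```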
establish a classification model that is easy to understand. In addition, those algorithms show competitive performance compared to traditional classification techniques and outperform them in some application domains. Nevertheless, these algorithms also have several issues and opportunities. Those issues include the local optimization problem, parameter setting, computationally expensive rule pruning, specific data requirements (i.e., low-quality datasets and attributes) and, finally, the need for open-source implementations and real-life research applications. Consequently, those issues may reduce the classification accuracy and increase the computational time of the algorithm. This study has presented various enhancement possibilities and provided promising research directions for future studies by considering the existing issues and challenges. First, the right balance between exploration and exploitation should be determined. Second, the ACO parameters should be set during the learning process. Third, ACO requires a preprocessing step to improve the quality of the dataset and discretize all continuous attributes. Furthermore, although the pruning procedure is compulsory to avoid overfitting, its computational cost is the highest in ACO-based classification algorithms. Consequently, the learning time of existing ACO-based classification algorithms is considerably longer than that of other rule-based classification algorithms, such as C4.5 and RIPPER. Real-life applications, as well as open-source implementations similar to RapidMiner or Weka, are necessary to facilitate experimental design. Finally, ACO algorithm variants similar to those for the TSP have not been fully applied in the classification rule discovery context; using them could produce robust classification results.
Lower Bound, for the undirected CARP. However, the CARP is an NP-hard problem (Golden & Wong, 1981), and these exact methods are not able to solve large-scale instances in polynomial time. Therefore, due to the computational complexity of the problem, there have been remarkable attempts by researchers to develop heuristic and metaheuristic algorithms to solve it. Tabu search was the first metaheuristic, proposed by Hertz et al. (2000). Here, solutions violating vehicle capacity are accepted but penalized. Three improvement procedures (Shorten, Drop, Add), initially described by Hertz et al. (1999), and four new ones (Paste, Cut, Switch, and Postopt) are used. Lacomme et al. (2004a) proposed a memetic algorithm to solve an extended version of the CARP; each required edge is represented by two directions. The chromosomes are encoded as large tours. Each chromosome is evaluated optimally using a splitting procedure, which partitions the large tour into feasible trips. The ant colony system (Lacomme et al., 2004b) is another metaheuristic, in which two types of ants are used: elitist ants that make the solutions converge towards a minimum cost solution, and non-elitist ants that guarantee diversification to prevent being trapped in a local minimum. Besides metaheuristics, heuristics are attractive for their short CPU time requirements. Furthermore, they are easier to implement and provide a good initial solution with which to start many metaheuristics, although metaheuristics give higher-quality solutions. Augment-Merge (Golden & Wong, 1981), Path-scanning (Golden et al., 1983), the Double Outer Scan heuristic (Wøhlk, 2005), Ulusoy's heuristic (Ulusoy, 1985), the Ellipse Rule based Path-scanning heuristic (Santos et al., 2009), and Construct-Strike (Pearn, 1989) are some of the known heuristics for the CARP. For a detailed overview of the main characteristics of the heuristics in the literature, the reader may refer to Wøhlk (2008).
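The splitting procedure mentioned above, which optimally partitions a giant tour into capacity-feasible trips, can be sketched as a shortest path on an auxiliary graph. The cost model below (one fixed cost per trip plus per-task service costs) is a simplified assumption, not the exact CARP cost structure of Lacomme et al.

```python
def split(tour, demand, cost, capacity):
    """Partition a giant tour of required tasks into capacity-feasible
    trips of minimum total cost. best[i] is the cheapest way to serve
    the first i tasks; each arc (i, j+1) is a trip serving tour[i..j]."""
    FIXED = 5                        # assumed fixed cost per trip
    n = len(tour)
    best = [float('inf')] * (n + 1)
    best[0] = 0
    pred = [0] * (n + 1)
    for i in range(n):
        load, trip = 0, FIXED
        for j in range(i, n):        # extend the trip task by task
            load += demand[tour[j]]
            trip += cost[tour[j]]
            if load > capacity:
                break
            if best[i] + trip < best[j + 1]:
                best[j + 1] = best[i] + trip
                pred[j + 1] = i
    trips, j = [], n                 # recover the trips backwards
    while j > 0:
        i = pred[j]
        trips.append(tour[i:j])
        j = i
    return list(reversed(trips)), best[n]

tour = [0, 1, 2, 3]
demand = [4, 3, 5, 2]
cost = [6, 4, 7, 3]
print(split(tour, demand, cost, capacity=8))
```

Because the auxiliary graph is acyclic and built in tour order, the double loop computes the optimal partition of the given tour in O(n²).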
Yet, the development of enhanced heuristics is an important research area for the CARP.
To the best of our knowledge, this problem has little or no previously published research. Modified but similar versions of the problem do, however, sometimes appear as sub-problems in methodologies for vehicle routing problems. For example, related problems are solved using branch and bound and constraint programming algorithms in Bent and Van Hentenryck (2006) and Shaw (1998). Ropke and Pisinger (2006) also use a heuristic method to solve a version of the sub-problem for pickup and delivery problems with time windows.
This paper has investigated the job shop scheduling problem with the objective of minimizing the makespan. The main purpose was to produce reasonable schedules very quickly. A simple and easily extendable heuristic based on a constructive procedure has been presented. The algorithm has been tested on benchmark instances from the literature in order to evaluate its performance. The computational results have shown that even the relatively straightforward implementation of the approach presented here can yield good quality solutions with very little computational effort. Since the proposed method is a heuristic, its results cannot be compared in a meaningful way with those of the methods evaluated, as they are metaheuristic-based algorithms. However, the computational times demonstrate the appeal of the heuristic, since it produces very good solutions in a fraction of a second on average. Although the solutions produced by this simple heuristic are weakly dominated by the solutions of the metaheuristic methods evaluated, the procedure is useful in applications that deal with real-time systems and that involve the generation of initial schedules for local search and metaheuristic algorithms. Further research needs to be conducted on applying other criteria in the TC in order to improve the solution quality, and on adapting the approach to the flexible job shop scheduling problem.
The structure of the neighborhood is by far the most important feature that affects the quality of the local optima obtained via neighborhood search algorithms for combinatorial optimization problems (also referred to in the literature as local search algorithms). It is generally desirable to have large neighborhoods so as to improve the chance of finding good quality local optima. On the other hand, a large amount of time may have to be spent searching a large neighborhood in every run. One generally wishes to perform several runs of an algorithm with different starting points so as to increase the chance of finding a global optimum, and long execution times per run lead to fewer runs. Thus a large neighborhood does not necessarily produce an effective heuristic unless we can search the neighborhood efficiently.
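As a concrete illustration of this size/time trade-off, consider a 2-opt neighborhood for a tour of n cities: it already contains O(n²) moves, each a segment reversal. A minimal first-improvement search over it, on a hypothetical toy instance, looks like:

```python
import itertools
import random

def tour_length(tour, d):
    """Length of a closed tour under distance matrix d."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, d):
    """First-improvement local search over the O(n^2) 2-opt
    neighborhood: reverse a segment whenever that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if tour_length(cand, d) < tour_length(tour, d):
                tour, improved = cand, True
                break
    return tour

random.seed(0)
n = 8
d = [[abs(i - j) for j in range(n)] for i in range(n)]  # toy distances
start = random.sample(range(n), n)
best = two_opt(start, d)
print(tour_length(best, d))
```

Each pass scans up to n(n-1)/2 candidate moves, so a larger neighborhood (e.g. 3-opt, with O(n³) moves) multiplies the per-run cost exactly as the paragraph above describes.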
Soft computing based approaches are increasingly being used to solve different NP-complete problems, and the development of efficient parallel algorithms for digital circuit partitioning, circuit testing, logic minimization, simulation, etc. is currently a field of increasing research activity. This paper describes an evolutionary approach to the circuit partitioning problem: dividing a circuit into non-overlapping subcircuits while minimizing the number of cuts after the division and balancing the load associated with each subcircuit. The paper shows how effective partitioning helps achieve peak chip performance and reduces the cost and time of the design and manufacturing process.
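The objective just described, minimizing cuts while balancing the load of the subcircuits, can be evaluated as a single fitness value for an evolutionary search. The notation, the balance weight `alpha`, and the toy circuit below are illustrative assumptions.

```python
def partition_cost(nets, part, alpha=1.0):
    """Fitness of a circuit bipartition: number of cut nets plus a
    load-imbalance penalty. nets is a list of tuples of gate indices;
    part[g] in {0, 1} gives the subcircuit of gate g."""
    cuts = sum(1 for net in nets if len({part[g] for g in net}) > 1)
    load = [part.count(0), part.count(1)]
    return cuts + alpha * abs(load[0] - load[1])

# Hypothetical 6-gate circuit with 4 nets.
nets = [(0, 1), (1, 2, 3), (3, 4), (4, 5)]
part = [0, 0, 0, 1, 1, 1]
print(partition_cost(nets, part))
```

An evolutionary algorithm would evolve the `part` vector directly, using this cost as the minimization target.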
Panneerselvam Senthilkumar and Sockalingam Narayanan have done a comprehensive review of the literature on the single machine scheduling problem with uniform parallel machines, in which 17 classifications were discussed. Prabuddha De and Thomas E. Morton have developed a new heuristic to schedule jobs on uniform parallel processors to minimize makespan. It was tested on a large number of problems for both uniform and identical processors. They found that the solutions given by the heuristic for uniform parallel machine scheduling are within 5% of the solutions given by the branch and bound algorithm. Bulfin and Parker have considered the problem of scheduling tasks on a system consisting of two parallel processors such that the makespan is minimized. In particular, they treated a variety of modifications of this basic theme, including the cases of identical processors, proportional (uniform) processors and unrelated processors. In addition, they suggested a heuristic scheme for when precedence constraints exist.
Hyper-heuristics are high level search methodologies and can be broadly classified into selection hyper-heuristics, also known as 'heuristics to choose heuristics', and generation hyper-heuristics (Burke et al. 2013). The solution method used in this work is based on selection hyper-heuristics, which control a set of pre-defined low level heuristics (LLHs) under an iterative framework. The proposed approach aims to exploit several low level heuristics, each of which attempts to enhance some aspect of the quality of the current solution during the optimisation process. Traditionally, selection hyper-heuristics comprise two main consecutive stages: a selection stage, which selects a suitable heuristic and applies it to the candidate solution, and a move acceptance stage, which decides whether to accept or reject the newly generated solution. The sequence-based selection hyper-heuristic replaces the first stage of the traditional selection hyper-heuristic framework in order to select sequences of heuristics instead of a single heuristic; these are then applied sequentially to the current solution. The proposed method has been successfully applied to an inventory routing problem, the subject of the ROADEF/EURO 2016 challenge, and the results demonstrate the effectiveness of the method: it won the challenge against 41 teams from 16 different countries, producing the best solutions across all of the released problem instances.
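The two-stage select-then-accept loop described above can be sketched minimally as follows. The low level heuristics, the roulette-wheel scoring rule, and the "accept if not worse" criterion are illustrative assumptions, not the challenge-winning sequence-based method.

```python
import random

def selection_hyper_heuristic(solution, cost, llhs, iters=1000, seed=0):
    """Minimal selection hyper-heuristic: pick a low level heuristic by
    roulette wheel over its past improvement score (selection stage),
    apply it, and keep the result only if the cost does not worsen
    (move acceptance stage)."""
    rng = random.Random(seed)
    scores = [1.0] * len(llhs)
    best = solution
    for _ in range(iters):
        i = rng.choices(range(len(llhs)), weights=scores)[0]  # selection
        cand = llhs[i](best, rng)
        if cost(cand) <= cost(best):                          # acceptance
            best = cand
            scores[i] += 1.0                # reward a useful heuristic
    return best

# Toy demo: minimise the sum of a vector with two hypothetical LLHs.
def perturb(x, rng):
    j = rng.randrange(len(x))
    return x[:j] + [x[j] + rng.choice([-1, 1])] + x[j + 1:]

def decrease_all(x, rng):
    return [max(v - 1, 0) for v in x]

result = selection_hyper_heuristic([5, 3, 4], sum, [perturb, decrease_all])
print(result)
```

A sequence-based variant would replace the single `llhs[i]` application with a learned sequence of LLH applications before the acceptance test.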
The Travelling Salesman Problem (TSP) is one of the most famous combinatorial problems of all time. A salesman visits N cities at given positions and finally returns to his city of origin. Each city is to be visited only once, and the problem is to find the shortest possible route. Three main versions of the TSP are studied; the type of problem depends on how the input is given: Euclidean, symmetric or asymmetric, or as a random distance matrix. The term Euclidean refers to the representation of each city as a point and the distances of the edges between them. The Travelling Salesman Problem has long been one of the most interesting and challenging problems in the literature.
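For small N, the shortest closed route in the Euclidean version can be found by exhaustive enumeration, which also shows why the problem is hard: fixing the start city still leaves (N-1)! tours to check. A minimal sketch on four illustrative cities:

```python
from itertools import permutations
from math import dist

def brute_force_tsp(cities):
    """Try every tour that starts at city 0 and return the shortest."""
    n = len(cities)
    def length(tour):
        return sum(dist(cities[tour[i]], cities[tour[(i + 1) % n]])
                   for i in range(n))
    return min(((0,) + p for p in permutations(range(1, n))), key=length)

# Four cities on a unit square: the optimal tour walks the perimeter.
cities = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(brute_force_tsp(cities))
```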
In this subsection, we restrict the clusters to remain assigned to their nearest facility while we solve the allocation problem. Note that since the total demand of the customers in a given cluster is relatively small compared to the capacity of the facility, as given by the value of b, the assignment of a given cluster to its nearest facility is feasible. In the case where the facilities happen to have different capacities, the proposed relaxation scheme needs to be modified to cater for such a situation. In this scheme we temporarily omit the customers of the clusters while we are solving the TP. By doing this, we force the customers to remain served by their nearest facilities until the locations of the facilities remain unchanged from one iteration to the next. However, if assigning a cluster point to its nearest facility violates the supply constraint of that facility, the point is omitted from the cluster. This is repeated for all the clusters obtained. The main steps are summarised in Fig. 5.
A greedy algorithm is an algorithm that follows the problem solving heuristic of making the locally optimal choice at each stage in the hope of finding a global optimum. For many problems, a greedy strategy does not in general produce an optimal solution, but a greedy heuristic may nonetheless yield locally optimal solutions that approximate a global optimum in reasonable time. We design three greedy algorithms. For these greedy algorithms, the efficiency of the bins is important besides their demand and revenue. For bin B_i the efficiency is e_i = r_i / d_i. The greedy algorithms apply two priority queues
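The efficiency-driven selection described above can be sketched with a single priority queue ordered by e_i = r_i / d_i; the capacity model and data are illustrative, and the paper's variants use two queues rather than the one shown here.

```python
import heapq

def greedy_by_efficiency(revenue, demand, capacity):
    """Greedy bin selection: take bins in decreasing efficiency
    e_i = r_i / d_i while the remaining capacity allows.
    (Single max-heap sketch of the efficiency ordering.)"""
    # Max-heap via negated efficiency.
    heap = [(-r / d, i) for i, (r, d) in enumerate(zip(revenue, demand))]
    heapq.heapify(heap)
    chosen, total = [], 0
    while heap:
        _, i = heapq.heappop(heap)
        if demand[i] <= capacity:
            chosen.append(i)
            capacity -= demand[i]
            total += revenue[i]
    return chosen, total

revenue = [10, 9, 6]
demand = [5, 3, 2]
print(greedy_by_efficiency(revenue, demand, capacity=6))
```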
In this work we have shown that genetic programming can be used as a hyper-heuristic to generate reusable constructive heuristics for the multidimensional 0-1 knapsack problem. Our method is classified as a hyper-heuristic approach as it operates on a search space of heuristics rather than a search space of solutions. To the authors' knowledge, this is the first time in the literature that a GP hyper-heuristic has been used to solve the multidimensional 0-1 knapsack problem. The method has shown that automatically generated heuristics can be competitive with human-designed heuristics from the literature. Many methods make use of an add and/or drop phase to construct, improve or repair solutions. As future work, the rankings derived from the evolved heuristics can be used to define the order in which items are considered to be added or dropped. Many of the best results in the literature rely on knowledge gained from the LP-relaxed version of the multidimensional 0-1 knapsack problem. We intend to incorporate the optimal LP-relaxed results as part of a more comprehensive function set to attempt to improve the heuristics generated.
Abstract—Recently, several attempts have been made to create 3D Geographical Information Systems in Web environments from DEM terrain data. However, most of them face the terrain splitting and mapping problem, which is mainly caused by the terrains' sizes and their query capabilities. This problem is a challenge when we want to build 'truly' 3D WebGIS systems equivalent to what has been achieved in 2D ones. An algorithm named SESA has been presented by Le Hoang Son et al. for DEM terrain splitting with minimal memory space in each processor of a computing system. However, this algorithm has some limitations in computing time and in its strategies for finding solutions. In this paper, we propose two novel algorithms for this problem, based on the Genetic Algorithm and on Particle Swarm Optimization. The proposed algorithms are evaluated and compared with the SESA algorithm to demonstrate their efficiency.
Junginer, who proposed a set of logic problems to solve multi-index transportation problems, also conducted a detailed investigation of the characteristics of the multi-index transportation problem model. Rautman et al. used the multi-index transportation problem model to solve a shipping scheduling problem, proposing that its employment not only improves transportation efficiency but also optimizes the system as a whole. Hinojos investigated a multi-period two-echelon multi-commodity capacitated plant location problem, which deals with a facility location problem where one wishes to establish facilities at two different distribution levels by selecting the time periods. Linda and Steef studied a multi-product lot-sizing model where, in any period, any portion of a reserved transportation capacity can be used in exchange for a guaranteed price. Purusotham and Murthy presented a multi-product bulk transportation problem to minimize the total cost of bulk transportation. Latha presented a three-dimensional time minimization bulk transportation problem to minimize the total time of goods transportation. Guravaraju et al., and Ellwein and Gray, also studied different models on bulk transportation.