Ant Colony Optimization (ACO) is a well-established model-based search technique. ACO is a metaheuristic inspired by the foraging behavior of ants. Ants build their path to the goal through probabilistic choices of which neighbor node to move to; the choice is based on the pheromone deposited by other ants and on a heuristic function. The ants then move backward in a deterministic way, depositing pheromone on the graph, where the amount of pheromone depends on the quality of the solution. Thus, artificial ants play two important roles: generating solutions and updating the parameters of the model. Ant System (AS) was the first ACO algorithm applied to the traveling salesman problem (TSP). As AS did not perform well compared to state-of-the-art algorithms for the TSP, improvements of AS, such as the MAX-MIN Ant System (MMAS), were proposed. MMAS introduces four main modifications to AS: (i) it exploits the best tours found, in that only the ant that finds the global-best solution or the iteration-best solution is allowed to deposit pheromone; (ii) it limits the possible range of pheromone trail values to an interval; (iii) the pheromone trails are initialized to the upper pheromone trail limit in order to increase the exploration of tours at the start of the search; (iv) the pheromone trails are reinitialized each time the system approaches stagnation.
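The MMAS update step described in points (i) and (ii) can be sketched as follows; this is a minimal illustration, not any specific published implementation, and the parameter values (evaporation rate, trail limits) are assumptions:

```python
def mmas_update(tau, best_tour, best_len, rho=0.02, tau_min=0.01, tau_max=1.0):
    """One MMAS pheromone update: evaporate everywhere, deposit only on the
    best tour, then clamp every trail to the interval [tau_min, tau_max].
    tau maps directed edges (i, j) to pheromone values."""
    for edge in tau:
        tau[edge] *= (1.0 - rho)                 # evaporation on all edges
    n = len(best_tour)
    for i in range(n):
        edge = (best_tour[i], best_tour[(i + 1) % n])
        tau[edge] += 1.0 / best_len              # deposit proportional to tour quality
    for edge in tau:
        tau[edge] = min(tau_max, max(tau_min, tau[edge]))  # enforce trail limits
    return tau
```

Initializing every trail to tau_max, as in point (iii), then corresponds to building the tau dictionary with that value everywhere.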
Constructive heuristics for the TSP build tours from scratch by adding an unvisited city at each step according to the path cost. The major drawback of local search heuristics, however, is that they easily become trapped in local optima. Most recent research on the TSP focuses on advanced metaheuristics such as Simulated Annealing, Tabu Search, Genetic Algorithms, Ant Colony Optimization (ACO), Particle Swarm Optimization, Neural Networks, and the Water Flow-Like Algorithm. For a long time, the GA and MMAS have been applied successfully in various domains, including the TSP. In particular, these algorithms have been combined with other methods, such as hybrid schemes and other heuristics.
In this research, the ant colony optimization (ACO) algorithm is applied to find the shortest collision-free route in a grid network for robot path planning. Obstacles of various shapes and sizes are considered to simulate a dynamic environment. Computer simulation results demonstrate that the ACO algorithm can successfully re-route the optimal path in the new network after obstacles are added. Future work may include investigating other ACO algorithms, such as the Elitist Ant System (EAS), the Rank-Based Ant System, and the MAX-MIN Ant System (MMAS). Further simulations can also be performed with more complicated networks and obstacles.
Fig. 3. Pheromone build-up allows ants to reestablish the shortest path. Since ants are more inclined to choose a path with higher pheromone levels, they rapidly converge on the stronger pheromone trail, which diverts more and more ants along the shorter path. This particular behavior of ant colonies inspired the Ant Colony Optimization algorithm, in which a set of artificial ants cooperate to find solutions to a given optimization problem by depositing pheromone trails throughout the search space. Existing implementations of the algorithm deal exclusively with discrete search spaces and have been demonstrated to reliably and efficiently solve a variety of combinatorial optimization problems. Table 1 gives a brief overview of the three most successful algorithms, listed in the historical order in which they were introduced: Ant System (Dorigo 1992, Dorigo et al. 1991, 1996), Ant Colony System (ACS) (Dorigo & Gambardella 1997), and MAX-MIN Ant System (MMAS) (Stützle & Hoos 2000).
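The construction step common to these algorithms — an ant probabilistically choosing its next node from pheromone and heuristic information — can be sketched as follows. This is a minimal illustration of the standard random-proportional rule; the alpha and beta values are assumed, not taken from any of the cited papers:

```python
import random

def choose_next(current, unvisited, tau, eta, alpha=1.0, beta=2.0, rng=random):
    """Roulette-wheel choice of the next node: the weight of each candidate j
    combines pheromone tau[(current, j)] and heuristic desirability
    eta[(current, j)], raised to the powers alpha and beta respectively."""
    weights = [tau[(current, j)] ** alpha * eta[(current, j)] ** beta
               for j in unvisited]
    r = rng.random() * sum(weights)
    acc = 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]   # numerical safety fallback
```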
The MMAS model presented here employs an exponential pheromone deposition approach to improve the performance of the classical Ant System algorithm, which uses a uniform deposition rule. A simplified analysis using differential equations is carried out to study the stability of the basic Ant System dynamics under both exponential and constant deposition rules. A road map of connected cities, in which the shortest path between two specified cities is to be found, is taken as a platform for comparing the MAX-MIN Ant System model (an improved and popular variant of the Ant System algorithm) with exponential and constant deposition rules. Extensive simulations are performed to find the best parameter settings for the non-uniform deposition approach, and experiments with these settings reveal that it outperforms the traditional approach by a wide margin in terms of both solution quality and convergence time.
In this paper we investigate the application of the MMAS algorithm to the exploratory path planning task in dynamic environments, named MAX-MIN Ant System for Dynamic Path Planning (MMAS-DPP). The main contribution of this work is that the robots have no information about the environment beforehand; they collect information as they explore it. Furthermore, we analyze the cost of obtaining a solution in terms of the distance traveled by the ants, i.e., the total distance traveled by all ants while exploring the environment before a solution is found. The current study extends a previous investigation, in which the MMAS algorithm was applied to the exploratory path planning problem in static environments, by considering a dynamic environment where routes become blocked or free over time and the goal position changes. The results show that the proposed MMAS-DPP algorithm is robust in coping with these dynamic situations.
higher chance of selection in future iterations. This pheromone updating rule is of a highly explorative nature. Exploitation, on the other hand, is only reflected in Equation 4, where the pheromone changes caused by better solutions are calculated to be higher than those of other solutions. Experience shows, however, that the exploitation introduced by Equation 4 is not enough to balance the exploration present in the algorithm. This is usually reflected in slower convergence of the method, or in convergence to sub-optimal solutions, depending on the value of the evaporation factor used. Different methods have been suggested to regulate the trade-off between exploiting the best solutions (iteration-best and global-best) and further exploring the solution space. Dorigo and Gambardella presented the Ant Colony System (ACS), which includes additional rules that probabilistically determine whether an ant acts in an exploitative or explorative manner at each decision point. Another mechanism used within ACS is the local updating of the pheromone on an ant's selected options immediately after it has generated its solution, such that reselection of those options within an iteration is discouraged, leading to further exploration. The global updating rule in ACS is similar to that in AS, but in ACS only the path of the global-best solution receives additional pheromone. This updating rule clearly encourages exploitation, as only the best solution is reinforced with additional pheromone. To exploit information about the global-best solution, Dorigo et al. proposed an algorithm known as the Elitist Ant System (AS_elite). The updating rule in AS_elite is the same as
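The two ACS mechanisms described above — the probabilistic exploit-or-explore decision at each step and the local pheromone update on the chosen edge — can be sketched together as follows. The parameter values q0, xi, and tau0 are typical defaults, not values taken from this text:

```python
import random

def acs_step(current, unvisited, tau, eta, beta=2.0, q0=0.9,
             xi=0.1, tau0=0.01, rng=random):
    """One ACS decision plus local update. With probability q0 the ant
    exploits the best-looking edge (argmax of tau * eta^beta); otherwise it
    explores with the usual roulette-wheel rule. The chosen edge's pheromone
    is then pulled toward tau0 to discourage reselection within the iteration."""
    score = lambda j: tau[(current, j)] * eta[(current, j)] ** beta
    if rng.random() < q0:                        # exploitation
        nxt = max(unvisited, key=score)
    else:                                        # biased exploration
        weights = [score(j) for j in unvisited]
        r = rng.random() * sum(weights)
        acc, nxt = 0.0, unvisited[-1]
        for j, w in zip(unvisited, weights):
            acc += w
            if acc >= r:
                nxt = j
                break
    edge = (current, nxt)
    tau[edge] = (1.0 - xi) * tau[edge] + xi * tau0   # local pheromone update
    return nxt
```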
The MAX-MIN Ant System is the successor to Ant System, the original ACO-based algorithm. Several modifications are made in MMAS with respect to AS. First, MMAS exploits only the best tour found in an iteration or the best-so-far tour; that is, only the ant on the best-so-far or iteration-best tour is allowed to update the pheromone. This may lead to stagnation: the ants may follow the same path in every iteration and converge to a suboptimal solution, because of the excessive pheromone deposited on the best tour found so far, which may not be optimal. To overcome this, a second modification limits the pheromone content to the range [τ_min, τ_max]. The third modification is that the pheromone trails are initialized to the upper limit τ_max, which increases the exploration of tours at the start of the search.
Ant Systems (AS) were first proposed by Dorigo as an attempt to use ant foraging behavior as a source of inspiration for the development of new search and optimization techniques. By using the pheromone trail as a reinforcement signal for the choice of which path to follow, ants tend to find "minimal" routes from the nest to the food source. The system is based on the fact that ants, while foraging, deposit a chemical substance, known as pheromone, on the path they use to go from the food source to the nest. The standard system was later extended by Stützle and Hoos into the so-called MAX-MIN Ant System (MMAS). The main purpose of the max-min version is to improve the search capability of the standard algorithm by combining exploitation with exploration of the search space, and by imposing limits on the pheromone trail values.
This paper deals with software project planning, especially the scheduling problem. Based on the work of Broderick Crawford and R.F. Tavares Neto, we present an optimized resolution that consists in improving the MAX-MIN Ant System algorithm. We use this algorithm to solve the Software Project Scheduling Problem and conduct experiments showing how to build the project's Critical Path Method (CPM) graph, Gantt chart, and cost baseline.
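As background for the CPM graph mentioned above, the forward and backward passes that yield earliest/latest start times and the critical path can be sketched as follows. This is a minimal sketch under the assumption that tasks are given in topological order; the input format is our own:

```python
def cpm(tasks):
    """Critical Path Method forward/backward pass.
    tasks: {name: (duration, [predecessor names])}, listed in topological order.
    Returns earliest starts, latest starts, and the critical task list."""
    es, ef = {}, {}
    for t, (d, preds) in tasks.items():          # forward pass
        es[t] = max((ef[p] for p in preds), default=0)
        ef[t] = es[t] + d
    makespan = max(ef.values())
    ls, lf = {}, {}
    for t, (d, preds) in reversed(list(tasks.items())):  # backward pass
        succs = [s for s, (_, ps) in tasks.items() if t in ps]
        lf[t] = min((ls[s] for s in succs), default=makespan)
        ls[t] = lf[t] - d
    critical = [t for t in tasks if es[t] == ls[t]]      # zero total float
    return es, ls, critical
```

Tasks on the critical path have zero float (earliest start equals latest start), which is exactly what a Gantt chart highlights.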
Through extensive research and study, biologists have found that ants leave a chemical substance, called pheromone, on the paths they traverse, and that the quantity of pheromone is inversely proportional to the length of the route. Dorigo's research in 1992 also noted that ants can perceive pheromone as they pass along a path, and that their actions are influenced by its concentration. The amount of pheromone left may also vary depending on the quality and quantity of the food. Other ants can then choose the route with denser pheromone (which can be treated as a better route), which guides them to the food source. This behavior in turn causes the formerly better path to become even more attractive as it accumulates more pheromone. This behavior of the ants is depicted in Figure 2.2.
one queue per session, and using a round-robin scheduler. However, no max-min fair rate was explicitly calculated. When ATM networks appeared, several distributed algorithms were proposed to calculate virtual-circuit max-min fair rates in the Available Bit Rate (ABR) traffic mode. These algorithms calculate the max-min fair rates using the special ATM Resource Management (RM) cells, and so the router links are in charge of executing the max-min fair algorithm. Charny et al. seem to have been the first to analytically prove the correctness of their proposed algorithm. Hou et al. generalized this algorithm to extend the max-min fairness criterion with minimum rate requests and peak rate constraints. A problem of that algorithm (when pseudo-saturated links appear) was identified and documented by Tsai and Kim. It is worth noting that all the distributed algorithms mentioned above need per-session state information at the routers. A distributed algorithm that uses only constant state information in each router to exactly compute the max-min fair rates has been proposed, but it requires strongly synchronous behavior. Several max-min fair algorithms have been proposed that compute an approximation of the rates. Unfortunately, max-min fair rates are sensitive to small changes and, hence, an approximation with a small difference from the optimal allocation in one session can be drastically amplified at another session.
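For concreteness, the centralized max-min fair allocation that these distributed algorithms compute or approximate can be obtained by progressive filling: repeatedly saturate the most constrained link and fix the rates of the sessions crossing it. A minimal sketch, with a data layout of our own choosing:

```python
def max_min_fair(link_capacity, sessions):
    """Centralized max-min fair rates by progressive filling.
    link_capacity: {link: capacity}; sessions: {sid: [links on its path]}."""
    rates = {}
    cap = dict(link_capacity)
    active = {s: set(path) for s, path in sessions.items()}
    while active:
        # fair share on each link among the sessions whose rate is not yet fixed
        share = {l: cap[l] / sum(1 for p in active.values() if l in p)
                 for l in cap if any(l in p for p in active.values())}
        bottleneck = min(share, key=share.get)
        r = share[bottleneck]
        for s in [s for s, p in active.items() if bottleneck in p]:
            rates[s] = r                     # session is bottlenecked here
            for l in active[s]:
                cap[l] -= r                  # consume capacity on its path
            del active[s]
    return rates
```

Each iteration fixes at least one session, so the loop terminates after at most one pass per link.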
The first set of experiments simulates the relationship between the total cost of the network and the number of network nodes (networks with more than 10 nodes are considered in the simulation), as shown in Figure 2. Because the capacity and cost values are randomly generated, the total cost of the network does not necessarily grow as the number of nodes increases. By comparing the improved algorithm with the Ford-Fulkerson algorithm, however, we can see that the improved algorithm incurs less total cost than the Ford-Fulkerson algorithm at maximum network flow. This is because the Ford-Fulkerson algorithm performs redundant work when computing the maximum flow, so the maximum flow it finds is not necessarily achieved at minimum cost.
Future broadband wireless systems will support a wide range of multimedia applications for mobile users. To maximize the user experience, however, bandwidth provisioning is critical. In this paper, a novel bandwidth provisioning scheme for broadband wireless networks is proposed. The proposed scheme allows for prioritized bandwidth provisioning to different classes of traffic, supporting multiple connections with different bandwidth requirements. It also incorporates a unique opportunity-cost function to bound the cost of allocating bandwidth to different classes, so as to maintain certain revenue levels for the service provider. Simulation results reveal that the presented 2tMMFS-DBA algorithm efficiently allocates bandwidth to urgent multimedia traffic and provides higher QoS guarantees in IEEE 802.16e networks, with enhanced overall system throughput.
Cloud computing is the use of computing resources (hardware and software) delivered as a service over a network (typically the Internet). It supplies high-performance computing based on protocols that allow shared computation and storage over long distances. In cloud computing, many tasks must be executed on the available resources to achieve the best performance, minimal total completion time, shortest response time, high resource utilization, and so on. To meet these objectives, a scheduling algorithm is needed that produces an appropriate mapping of tasks to resources. A unique modification of the Improved Max-min task scheduling algorithm is proposed, built on a comprehensive study of the impact of Improved Max-min scheduling in cloud computing. Improved Max-min uses the expected execution time, instead of the completion time, as its selection basis. The Enhanced (proposed) Max-min also uses the expected execution time instead of the completion time as its selection basis; the only difference is that Improved Max-min assigns the task with the maximum execution time (the largest task) to the resource that produces the minimum completion time (the slowest resource), while Enhanced Max-min assigns the task with the average execution time (the average task, or the nearest one greater than the average) to the resource that produces the minimum completion time (the slowest resource).
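The selection basis that distinguishes Enhanced Max-min, as described above, can be sketched as follows. This is a minimal reading of the rule, and the function and its input format are our own assumptions, not the authors' implementation:

```python
def enhanced_max_min_pick(exec_times):
    """Enhanced Max-min task selection: pick the task whose expected
    execution time equals the average, or the nearest one greater than
    the average. exec_times: {task: expected execution time}."""
    avg = sum(exec_times.values()) / len(exec_times)
    at_or_above = {t: e for t, e in exec_times.items() if e >= avg}
    return min(at_or_above, key=at_or_above.get)   # nearest value >= average
```

Under Improved Max-min the same step would instead be `max(exec_times, key=exec_times.get)`; the chosen task is then assigned to the resource yielding the minimum completion time.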
This work presents a new proposal to solve the path planning problem for mobile robots, based on TWPSS-ACOA, to find the best route according to a certain cost function. Path planning methods involve two core steps: environment modeling and the planning algorithm. We therefore first build the robot's environment model with EVB-CT--GM, since this method has been certified as an effective way to overcome the environment-trap problem, and then the previously proposed TWPSS-ACOA is used to develop the mutual cooperation ability between ants. However, since TWPSS-ACOA has the defect of losing some feasible paths, and even optimal paths, a new ant-meeting judgment method is proposed, which determines whether ants meet according to the kind of pheromone. A new method that rationally distributes the ants' initial pheromone is then given to speed up convergence in the initial stages of the ACO algorithm. Manuscript received October 14, 2011.
reduce the cost of execution while satisfying the user-defined deadline objective. This algorithm schedules the partial critical path recursively. (Abrishami, S. et al., 2013) present two workflow scheduling algorithms: the first, called IaaS Cloud Partial Critical Paths (IC-PCP), is a one-phase algorithm, and the second is called IaaS Cloud Partial Critical Paths with Deadline Distribution (IC-PCPD2). These algorithms consider the main features of current commercial clouds, such as on-demand resource provisioning and the pay-per-use model. (Yu, J. et al., 2008) proposed a genetic workflow scheduling algorithm that considers two QoS constraints, budget and deadline. The proposed genetic algorithm schedules the workflow application either to minimize the makespan while meeting a user-imposed deadline constraint, or to minimize the execution cost while meeting a user-imposed budget constraint; its performance is compared with existing genetic algorithms. The algorithm mainly considers a heterogeneous environment and provides a solution to the deadline and budget optimization problem. (Jayadivya et al., 2012) proposed a workflow scheduling algorithm that provides fault tolerance, achieved through resubmission and replication of tasks. Resubmission and replication depend on a metric derived from the resubmission factor and the replication factor, and task priority is defined on the basis of task criticality, which is calculated from the resubmission impact factor and the earliest deadline. (Bittencourt et al., 2011) proposed a scheduling algorithm for hybrid clouds that optimizes the cost of resources. This algorithm chooses which resources should be leased from the public cloud and aggregated to the private cloud to provide enough processing power to execute a workflow within a given execution time.
The designed algorithm can reduce costs while achieving the specified desired execution time. (Van den Bossche et al., 2010) consider the problem of outsourcing tasks from an internal data center to a cloud provider at times of heavy load. The main objectives of this algorithm are to increase the utilization of the internal data center and to minimize the cost of running the outsourced tasks in the cloud while satisfying the required QoS constraints. They consider this optimization problem in a multi-provider hybrid cloud setting with deadline-constrained, preemptible, but non-provider-migratable workloads that are characterized by memory, CPU, and data-transmission requirements.
the generation of cluster heads and the number of cluster heads in the protocol. The nodes in the network are randomly elected as cluster heads, and the energy consumption of the nodes is not considered; cluster heads far away from the base station die early because of premature energy depletion. In view of these shortcomings, Ma et al. (2013) proposed LEACH-C, which makes several improvements to LEACH. The cluster structure of the LEACH-C protocol is generated by the base station through a simulated annealing algorithm. Since the cluster heads selected by the LEACH-C protocol send information to the base station over one or more hops, the cluster heads near the base station receive and forward too much data, which consumes more energy and shortens the network lifetime. The hybrid, energy-efficient distributed clustering protocol determines the radius of each cluster. Its primary basis for selecting the cluster-head set is the residual energy of the nodes, and its secondary basis is node density: under the primary basis, a node with more remaining energy is more likely to be elected cluster head for the round; the secondary basis is the degree of proximity between nodes, i.e., the density of nodes in the network. Cluster heads in the same cluster coverage area have equal rights to run as final cluster heads, decided through the average minimum reachability power (AMRP); for cluster members that lie within several cluster coverage areas, different strategies are applied, and the sub-parameter AMRP is consulted when joining the final cluster. The LEACH protocol has a further disadvantage: it neither calculates nor considers the residual energy of each sensor node when the cluster-head set is elected.
If nodes with little remaining energy are selected as cluster heads, energy consumption increases greatly, their energy is exhausted too early, and the operation of the whole network is severely affected. In view of this defect, Sun and Jian-Zhong (2013) took the remaining energy of the node into consideration when selecting the cluster head, thereby resolving this shortcoming of the LEACH protocol.
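For reference, the classic LEACH election threshold, together with an illustrative residual-energy weighting in the spirit of the improvement above, can be sketched as follows. The exact weighting used by Sun and Jian-Zhong is not specified here, so the energy factor below is an assumption of our own:

```python
import random

def leach_threshold(p, r):
    """Classic LEACH threshold for a node that has not yet served as cluster
    head in the current epoch: T(n) = p / (1 - p * (r mod 1/p)),
    where p is the desired fraction of heads and r the current round."""
    return p / (1.0 - p * (r % round(1.0 / p)))

def elect_heads(nodes_energy, p, r, rng=random):
    """Energy-aware election sketch: scale the LEACH threshold by each node's
    residual energy relative to the network maximum (illustrative weighting)."""
    e_max = max(nodes_energy.values())
    t = leach_threshold(p, r)
    return [n for n, e in nodes_energy.items()
            if rng.random() < t * (e / e_max)]
```

With this weighting, a node at full residual energy keeps the original LEACH election probability, while a nearly depleted node almost never volunteers as head.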
cycles are completed nearly instantly, and interactions of Probe and Response packets with packets from other sessions occur only when a large number of sessions are present in the network. Secondly, in what we call the WAN scenario, all links except host-to-router links have been assigned a propagation time generated uniformly at random in the range of 1 to 10 milliseconds, while all links between hosts and routers are assigned 1 microsecond of propagation time. This second scenario resembles an Internet topology in which the propagation times of the internal network links are in the range of a typical WAN link. In such networks, Probe cycles complete more slowly, and potentially more interactions with packets from other sessions occur than in the LAN scenario. In the experiments, sessions have been created by choosing a source and a destination node uniformly at random among all the network hosts; a session's path is a shortest path from its source to its destination node. In order to check the correctness of the results obtained when executing B-Neck, we have programmed Centralized B-Neck (Figure 1), and every B-Neck execution result (the max-min fair rate assignment to each session) has been successfully validated against the result obtained by executing the centralized version with the same input data. We have designed three different experiments, which we call Experiment 1, Experiment 2, and Experiment 3.
natorial oracle takes most of the total run-time, while the dual oracle runs in less than a tenth of a second for all instance sizes. This is because, instead of the quadratic problem, CPLEX here has to solve a linear problem, which can be done more efficiently. For comparison, we also performed experiments using the formulation of Theorem 3.15. It turned out, however, that CPLEX was not able to solve the corresponding problem within hours, even for n = 20. To obtain some insight into the typical structure of an optimal solution computed by Algorithm 2, we picked one instance with ellipsoidal uncertainty and counted in how many of the computed solutions a given object is used. The result is shown in Figure 6.1, where objects are sorted (from left to right) by the number of appearances. It turns out that more than half of the objects are never used, about one fifth are used in every computed solution, and only the remaining objects are used in a non-empty proper subset of the solutions. Similar pictures are obtained for other instances.