Using swarm intelligence for distributed job scheduling on the grid

• Exhibits flexibility and robustness in response to challenges. Among all the characteristics discussed, two aspects are of particular interest. First, they are robust. They function smoothly even though the colony may be continuously growing, may suffer a sudden traumatic reduction in numbers because of an accident, predation or experimental manipulation, or may spontaneously split into two distinct colonies of half the size [30]. They routinely cope with gross and minor disturbances of habitat and with seasonal variations in food supply [31]. Second, they are tiny insects with little or no memory and computational ability, yet they survive in our complex real world because of their huge numbers and their adaptability to the environment. Borrowing this real-world robustness provides novel ideas for artificial life. For example, it gives us the ability to deal with the dynamic topology of today's networks, where nodes may come and go arbitrarily, and the simplicity of the individuals provides the efficiency needed for large-scale systems. The application of swarm intelligence to network problems arises when a group of autonomous programs (agents) work together; this is referred to as Ant Colony Optimization (ACO) or a multi-agent system. Each individual program or autonomous module can be represented as an agent, and these agents can be used for network applications such as finding the shortest path, routing, load balancing, management, etc. [32].
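The mapping from ant behavior to a network algorithm can be made concrete with a small sketch. The following is a minimal, illustrative ACO-style shortest-path search in the spirit of the paragraph above; the function name, the graph representation, and the parameters (alpha, beta, rho, q) are assumptions chosen for the example, not taken from the cited works.

```python
import random

def aco_shortest_path(graph, source, target, n_ants=20, n_iters=50,
                      alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    """Toy ACO: ants walk from source to target, depositing pheromone
    on the edges of short walks; pheromone evaporates each iteration.
    graph is a dict of dicts: graph[u][v] = edge weight."""
    pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_len = None, float("inf")

    for _ in range(n_iters):
        walks = []
        for _ in range(n_ants):
            node, path, visited = source, [source], {source}
            while node != target:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:          # dead end: abandon this ant
                    path = None
                    break
                weights = [pheromone[(node, v)] ** alpha *
                           (1.0 / graph[node][v]) ** beta for v in choices]
                node = random.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if path:
                length = sum(graph[a][b] for a, b in zip(path, path[1:]))
                walks.append((path, length))
                if length < best_len:
                    best_path, best_len = path, length

        # evaporation, then reinforcement proportional to walk quality
        for edge in pheromone:
            pheromone[edge] *= (1.0 - rho)
        for path, length in walks:
            for a, b in zip(path, path[1:]):
                pheromone[(a, b)] += q / length

    return best_path, best_len
```

The ant agents keep no global state; the only shared memory is the pheromone table, which is what makes this style of algorithm tolerant of nodes joining or leaving the network.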

Modified particle swarm optimization for day-ahead distributed energy resources scheduling including vehicle-to-grid

The introduction of V2G resources into the optimization problem places new demands on computational power. The meta-heuristic PSO was modified to better suit the problem of optimal Distributed Energy Resources (DER) scheduling. A classic method, namely Mixed Integer Non-Linear Programming (MINLP), was used for comparison purposes. The performance of the modified PSO was compared with MINLP using a case study with a 33-bus distribution network and 2,000 gridable EVs. The modified PSO was roughly 2,600 times faster than MINLP: 35 seconds for PSO against 91,018 seconds (more than 25 hours) for MINLP. Compared to MINLP, the modified PSO produced only slightly worse solutions (a residual difference of at most 0.55% over 100 trials). Compared with other variants on the same case study, the modified PSO still achieved better execution times and better solutions. It is reasonable to conclude that the development of an application-specific PSO for day-ahead DER scheduling proved successful in the comparison case studies.

Evaluation of Particle Swarm Optimization Applied to Grid Scheduling

To assess MPSO performance further, four more input instances were generated according to the ETC matrix model. These instances are described in Table VII. The same combinations of job and machine heterogeneity were used, but with larger values of m and n representing higher-dimensional (and harder) problems. The results of executing the MPSO, GA, min-min, and APSO algorithms on these new instances can be seen in Table VIII. Note that even though the MPSO algorithm has much better results than the GA and APSO, the improvement over the min-min heuristic is small.
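For context, the min-min heuristic referenced here admits a very compact description. Below is a minimal sketch, assuming an ETC matrix given as a list of lists where `etc[j][m]` is the expected time to compute job j on machine m; this is the generic formulation of min-min, not the exact code evaluated in the paper.

```python
def min_min(etc):
    """Min-min heuristic: repeatedly pick the unassigned job whose earliest
    possible completion time is smallest and assign it to that machine.
    etc[j][m] = expected time to compute job j on machine m."""
    n_jobs, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines            # machine-ready times
    unassigned = set(range(n_jobs))
    schedule = {}

    while unassigned:
        best = None                       # (completion_time, job, machine)
        for j in unassigned:
            for m in range(n_machines):
                ct = ready[m] + etc[j][m]
                if best is None or ct < best[0]:
                    best = (ct, j, m)
        ct, j, m = best
        schedule[j] = m
        ready[m] = ct
        unassigned.remove(j)

    return schedule, max(ready)           # assignment and resulting makespan
```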


Grid Job Scheduling - A Detailed Study

CASA is a decentralized, dynamic, heuristic meta-scheduling algorithm in which jobs can be rescheduled. To overcome stagnation, a probabilistic approach is used to assign jobs so that they are evenly distributed across all resources. CASA is a two-phase algorithm [20]. The first phase is the job submission phase, in which each node receives the jobs submitted by its local users. Consider a node A: when it receives a job, it acts as the initiator node and contacts all other nodes with a REQUEST message. The nodes willing to take the job reply with an ACCEPT message. Node A then evaluates the participating nodes using historical data, selects the most appropriate one, and submits the job to it. The second phase is the dynamic rescheduling phase: the node that received the job looks in its local job queue for a job that has waited long enough and has not been selected recently, and reschedules that job to other nodes. Five algorithms are discussed in CASA. They are:
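Independently of that list, the two-phase flow described above (REQUEST/ACCEPT exchange, selection from historical data, and rescheduling of long-waiting jobs) can be sketched schematically. The node and job attributes used below (`handle_request`, `enqueue`, `name`, `queue`, `submitted_at`, `recently_selected`) are hypothetical placeholders, not CASA's actual implementation.

```python
import time

def submit_job(job, nodes, history):
    """Phase 1: the initiator broadcasts a REQUEST; nodes willing to take
    the job answer with an ACCEPT, and one of them is chosen using
    historical performance data."""
    accepting = [n for n in nodes if n.handle_request(job)]   # ACCEPT replies
    if not accepting:
        return None
    chosen = min(accepting, key=lambda n: history.get(n.name, float("inf")))
    chosen.enqueue(job)
    return chosen

def reschedule(node, nodes, history, wait_threshold):
    """Phase 2: dynamic rescheduling -- find a queued job that has waited
    long enough and was not selected recently, and resubmit it elsewhere."""
    now = time.time()
    for job in list(node.queue):
        if now - job.submitted_at > wait_threshold and not job.recently_selected:
            node.queue.remove(job)
            submit_job(job, [n for n in nodes if n is not node], history)
```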

Dynamic Cluster Scoring Job Scheduling algorithm for grid computing

Abstract— A grid is a parallel and distributed system that shares resources among multiple administrative domains. Job scheduling is one of the major issues in the grid environment. An effective job scheduling algorithm is proposed to overcome these scheduling problems, decreasing the overall execution time and achieving efficient utilization of grid resources. Effective scheduling in a grid environment can reduce the amount of data transferred between nodes by submitting a job to a node where most of the requested data files are already available. In this paper we propose the Cluster Scoring Job Scheduling (CSJS) algorithm for the grid. It decreases the overall makespan and increases the utilization of idle resources.
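The data-locality idea in the abstract can be illustrated with a small sketch: submit the job to the node that already holds the most of its requested files. The scoring rule and tie-break below are assumptions for illustration, not the CSJS scoring formula.

```python
def pick_node(job_files, nodes):
    """Score each node by how many of the job's requested files it already
    holds, breaking ties by lighter current load (illustrative scoring only)."""
    def score(node):
        local = len(set(job_files) & set(node["files"]))
        return (local, -node["load"])      # more local files first, then lower load
    return max(nodes, key=score)

# Example: the job needs files a, b and c
nodes = [
    {"name": "n1", "files": {"a", "b"}, "load": 0.7},
    {"name": "n2", "files": {"a", "b", "c"}, "load": 0.9},
    {"name": "n3", "files": {"c"}, "load": 0.1},
]
print(pick_node(["a", "b", "c"], nodes)["name"])   # -> n2
```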

SURVEY ON JOB SCHEDULING MECHANISMS IN GRID ENVIRONMENT

Grid computing (GC) is a new trend in distributed computing systems (DCS) that enables the management of heterogeneous, geographically distributed and dynamically available resources in an efficient way (Kant Soni, et al., 2010). It expands the boundaries of what we perceive as distributed computing and supercomputing. Resource management and job scheduling are the most fundamental concerns when deploying grid infrastructure. Jobs can be defined as packages that are executed using appropriate computing elements (CE) at a point on the grid. Jobs may evaluate an expression, run single or multiple commands to perform a given task, analyze data, or control scientific equipment. Terms such as transactions, work entities, or submissions are used in the grid industry to mean the same thing as jobs. In whatever form, these jobs need to be scheduled onto the grid environment prior to execution. However, the scheduling mechanisms typically deployed along with proprietary Grid Management Software (GMS) have limitations when the number of grid jobs is large (Jacob, et al., 2005). In recent years, researchers have proposed several efficient scheduling algorithms for allocating grid resources, with a special emphasis on job scheduling (Abba, et al., 2012; Abdurrab and Xie, 2010; Anikode and Tang, 2011; Aparnaa and Kousalya, 2014; Chang, et al., 2008; Chen, et al., 2009; Coutinho, et al., 2014; Dang, et al., 2007; Elghirani, et al., 2007; Maheshbhai, 2011;

Job Scheduling in Grid Computing with Cuckoo Optimization Algorithm

In order to test our proposed algorithm, we vary the number of jobs submitted to the grid. Grid performance is shown in Table 1. To analyze the performance of the job scheduling algorithm, we ran an experiment with a small job scheduling problem. Figure 3 shows how the algorithm's execution time grows as the number of jobs in the grid increases. Each experiment was repeated 00 times. Our proposed method has been compared with existing algorithms such as the Genetic Algorithm [17] and Particle Swarm Optimization [18]. Table 2 compares the best execution times obtained.

Benefits of Global Grid Computing for Job Scheduling

The last configuration consists of all original machines, so this is the most realistic case, as the simulation indicates how the scheduling result improves depending on the time zone. For this last resource configuration, three different assumptions were used in the simulation. The first variant produces the scheduling result if all machines are in the same time zone. The second simulation assumes that all machines are equally distributed across time zones. The last simulation uses the original time zones. Note that for the third configuration all workloads are cut to span only 11 months, dictated by the smallest workload trace. This is necessary to ensure that each site creates sufficient workload for the whole simulation. In addition, for each configuration scenario a simulation without job sharing has been executed ("Trace single"), which is used as a reference. All considered simulation scenarios are summarized in Table III.

Evaluation of Job-Scheduling Strategies for Grid Computing

2.3 Decentralized Scheduling

In decentralized systems, distributed schedulers interact with each other and commit jobs to remote systems. No central instance is responsible for the job scheduling. Therefore, information about the state of all systems is not collected at a single point. Thus, the communication bottleneck of centralized scheduling is prevented, which makes the system more scalable. Also, the failure of a single component will not affect the whole metasystem. This provides better fault-tolerance and reliability than available for centralized systems without fall-back or high-availability solutions.

Comparative Analysis Of Swarm Intelligence Optimization Techniques For Cloud Scheduling

Cloud computing is the development and commercial implementation of distributed computing. Workflow scheduling, which is an NP-hard optimization problem, is one of the crucial tasks in a cloud environment. Many meta-heuristic algorithms have been proposed to schedule workflows in the cloud. A good workflow scheduling strategy should adapt to the dynamic environment. Cloud computing focuses on user applications rather than academic use, and hence it is promoted by the business industry. Virtualized and elastic resources are offered to the end users. It has the potential to support the full realization of 'computing as a utility' in the near future [1]. With the support of virtualization technology [2, 3], cloud platforms enable enterprises to lease computing power in the form of virtual machines to users. Because these users may use hundreds of thousands of virtual machines (VMs) [4], it is difficult to manually assign tasks to computing resources in clouds [5, 6]. So an efficient algorithm is needed for scheduling workflows in the cloud environment, and a dynamic task scheduling algorithm, such as Ant Colony Optimization (ACO) [8, 9], is appropriate for clouds.

An Effective Approach to Job Scheduling in Decentralized Grid Environment

A computational grid is emerging as a wide-scale distributed computing infrastructure where user jobs can be executed on either intra-cluster or inter-cluster computer systems [7,9]. Computational grids have the ability to solve large-scale scientific problems using heterogeneous and geographically distributed resources dynamically at run time, depending on their availability, capability, performance, cost and quality-of-service requirements [6,8]. Resource management and job scheduling in a cluster-based grid environment is one of the challenging tasks [5]. Scheduling is an important problem in computational grids [14]. The grid environment is dynamic in nature; in other words, the number of resources and jobs to be scheduled is usually variable. This feature makes the scheduling problem a complex optimization problem [15]. Effective utilization of the grid therefore depends on efficient scheduling of jobs to the available resources.

Design and Evaluation of Job Scheduling Strategies for Grid Computing

Heterogeneous Substrate: The resources typically have their own local management software with different features. Hence, we have to cope with the limitations of the local management. For instance, some scheduling systems are non-deterministic in the sense that they cannot provide any information about the expected completion time of a job. Unfortunately, this kind of information is important in distributed metasystems for planning future allocations. That means the grid scheduling system should utilize such features if supported, but also cooperate with systems that do not provide them. Nevertheless, the efficiency of a scheduling system depends highly on the features of the lower-level scheduler. If a resource does not provide a requested feature, such as a guaranteed completion time, it may not be suitable for some job requests.

Flexible Resource Description: As job requirements and resources in a metasystem may vary according to type and application, there is a need for the ability to describe complex job requests. A request may be very specific if necessary, or very broad. For example, one user may deliberately avoid a very detailed request because he wants to receive as many resource offers as possible; more restrictive requirements would only reduce the possible resource sets for the job. Another user is looking for very specific resources. He may have access to an alternative set of local resources for executing his job and is therefore only interested in a better resource allocation. Consequently, he formulates very detailed requirements and preferences. The grid scheduler should support both approaches, and the individual user should be able to influence the resource selection and the scheduling to get the best results.
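One way to realize a flexible resource description of the kind argued for above is a request object in which unset fields mean "no constraint", so the same structure covers both broad and very specific requests. The field names below are illustrative assumptions, not a format taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class JobRequest:
    """Illustrative job request: unset fields mean 'any resource is fine',
    so a broad request simply leaves most constraints empty."""
    cpus: int = 1
    min_memory_gb: Optional[float] = None
    architecture: Optional[str] = None          # e.g. "x86_64"
    deadline: Optional[float] = None            # requested completion time
    preferences: dict = field(default_factory=dict)

def matches(request: JobRequest, resource: dict) -> bool:
    """A resource qualifies if it satisfies every constraint the user set."""
    if resource["cpus"] < request.cpus:
        return False
    if request.min_memory_gb is not None and resource["memory_gb"] < request.min_memory_gb:
        return False
    if request.architecture is not None and resource["arch"] != request.architecture:
        return False
    if request.deadline is not None and not resource.get("supports_deadline", False):
        return False
    return True
```

A broad request such as `JobRequest(cpus=2)` matches many resources, while filling in more fields narrows the candidate set, which mirrors the trade-off described above.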

Cooling-Efficient Job Scheduling in a Heterogeneous Grid Environment

The master-slave clustered architecture in Figure 1 was used for this experiment with the modified LSTRF-RRLCM scheduling algorithm. The master takes processes as input, sorts processes with heavy burst times into the night-time queue, and allocates processes with small burst times to the day-time queue. During the day, the master distributes the processes in the day-time queue to the cluster processors using an allocation strategy for parallel computation; during the night, the processes in the night-time queue are distributed to the cluster processors. Each queue is divided by the total number of processors (slaves), and the resulting numbers of jobs are then distributed to each slave, where the scheduling algorithm is executed for computation.
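A minimal sketch of the queue split described above follows, assuming a simple burst-time threshold and a round-robin distribution over the slaves; both choices are illustrative assumptions rather than the exact LSTRF-RRLCM rules.

```python
def split_queues(processes, burst_threshold):
    """Heavy-burst processes go to the night-time queue, the rest to the
    day-time queue (the threshold is an illustrative parameter)."""
    day_q = [p for p in processes if p["burst"] <= burst_threshold]
    night_q = [p for p in processes if p["burst"] > burst_threshold]
    return day_q, night_q

def distribute(queue, n_slaves):
    """Divide the active queue evenly across the slave processors."""
    buckets = [[] for _ in range(n_slaves)]
    for i, proc in enumerate(queue):
        buckets[i % n_slaves].append(proc)
    return buckets

def dispatch(processes, n_slaves, is_day_time, burst_threshold=50):
    """Pick the queue that matches the time of day and spread it over slaves."""
    day_q, night_q = split_queues(processes, burst_threshold)
    active = day_q if is_day_time else night_q
    return distribute(active, n_slaves)
```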

GLOA: A New Job Scheduling Algorithm for Grid Computing

With more applications looking for faster performance, makespan is the most important measurement that scheduling algorithms attempt to optimize. Makespan is the resource consumption time between the beginning of the first task and the completion of the last task in a job. The algorithm presented in this paper seeks to optimize makespan. Given the complexity and magnitude of the problem space, grid job scheduling is an NP-complete problem. Therefore, deterministic methods are not suitable for solving this problem. Although several deterministic algorithms such as min-min and max-min [4] have been proposed for grid job scheduling, it has been shown that heuristic algorithms provide better solutions. These algorithms include particle swarm
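For a given assignment of tasks to machines, the makespan that these algorithms try to minimize can be computed directly. The sketch below is an illustrative calculation under the simplifying assumption that tasks assigned to a machine run back to back from time zero; it is not part of GLOA itself.

```python
def makespan(assignment, runtimes):
    """assignment[task] = machine, runtimes[task] = execution time.
    Makespan = finishing time of the busiest machine, assuming tasks on a
    machine run back to back starting at time zero."""
    finish = {}
    for task, machine in assignment.items():
        finish[machine] = finish.get(machine, 0.0) + runtimes[task]
    return max(finish.values())

# Example: three tasks on two machines
print(makespan({"t1": "m1", "t2": "m1", "t3": "m2"},
               {"t1": 4.0, "t2": 3.0, "t3": 5.0}))   # -> 7.0
```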

SURVEY OF JOB GROUPING BASED SCHEDULING IN GRID COMPUTING

Abstract: Grid computing is a form of distributed computing that provides a platform for executing large-scale, resource-intensive applications on a number of heterogeneous computing systems across multiple administrative domains. Grid platforms therefore enable sharing, exchange, discovery, selection, and aggregation of distributed heterogeneous resources such as computers, databases and visualization devices. Job and resource scheduling is one of the key research areas in grid computing. Job scheduling is used to schedule user jobs to appropriate resources in the grid environment. The goal of scheduling is to achieve the highest possible system throughput and to match the application's needs with the available computing resources. In this paper, we review the definition of grid computing, types of grids, the architecture of grid computing, characteristics of computational grids, and job grouping. A grid is a system in which machines are distributed across various organizations. It involves sharing resources that are heterogeneous and geographically distributed to solve various complex problems and develop large-scale applications. Grid computing is broad in its domain of application and raises research questions that span many areas of distributed computing and of computer science in general. In this paper, we explain job grouping and resource scheduling algorithms, which will help researchers carry out further work in this area of research.

Keywords: Grid computing, Job grouping, Job Scheduling

Memoir: A History based Prediction for Job Scheduling in Grid Computing

PUNCH [7] is a demand-based grid computing system that allows end users to transparently access and use globally distributed hardware and software resources. The resource management system in PUNCH has a pipelined architecture, with individual components in the pipeline replicated and geographically distributed for scalability and reliability. PUNCH employs a non-preemptive, decentralized, sender-initiated resource management framework. Scheduling is performed in a decentralized manner by resource and pool managers. A query manager parses the resource request, transforms it into an internal representation, and forwards it to a resource manager based on some criteria. Individual resource managers try to map the request to a pool manager from the list of pool managers stored in their local database. If a request cannot be mapped to any of the pools, a new pool is created. When a new pool manager is created, it queries a resource information database for information on all the resources that satisfy the pool's criteria. The scheduling policy is extensible: each pool manager has one or more scheduling processes associated with it. The function of these processes is to sort the machines in the pool's cache according to some specified criteria (average load, available memory) and to process queries sent by resource managers. Pool managers can be configured to use different scheduling policies.
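The request-to-pool mapping described above can be sketched schematically. The data structures and functions below are hypothetical and greatly simplified; they are not PUNCH's actual code, only an illustration of the "match an existing pool or create a new one" step and the pool-level sorting of machines.

```python
def route_request(request, resource_manager):
    """Map a parsed request to a pool manager; if no existing pool satisfies
    the request's criteria, create a new pool from the resource information
    database (illustrative only, with memory as the sole criterion)."""
    for pool in resource_manager["pools"]:
        if pool["criteria"](request):
            return pool
    min_mem = request.get("min_memory_gb", 0)
    new_pool = {
        "criteria": lambda req: req.get("min_memory_gb", 0) <= min_mem,
        "machines": [m for m in resource_manager["resource_db"]
                     if m["memory_gb"] >= min_mem],
    }
    resource_manager["pools"].append(new_pool)
    return new_pool

def best_machine(pool, key="load"):
    """A pool's scheduling process sorts its machines by a configured
    criterion (e.g. average load or available memory) and offers the best."""
    return min(pool["machines"], key=lambda m: m[key]) if pool["machines"] else None
```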

Job Scheduling in Grid Computing using User Deadline

Grid technologies promise to tackle complex computational issues. Grid computing permits the virtualization of distributed computing and data resources, such as processing, network bandwidth and storage capacity, to form a single system image, granting users and applications seamless access to large IT capabilities. At its core, grid computing relies on an open set of standards and protocols, e.g., the Open Grid Services Architecture (OGSA), that enable communication across heterogeneous, geographically distributed environments. When you deploy a grid, it will be to meet a set of customer requirements. To better match grid computing capabilities to those requirements, it is useful to keep in mind the reasons for using grid computing. This section describes the most important capabilities of grid computing.

Job Scheduling in Grid Computing with Fast Artificial Fish Swarm Algorithm

where θ ∈ (0, 1] is the crowd parameter, m is the number of individuals in the population, and the remaining quantity is the number of individuals in the "visual scope". In the searching behavior phase, an individual is randomly chosen in the "visual scope" of x_i, and a movement towards it is carried out if it improves the current location; otherwise, the individual x_i moves randomly. The swarming behavior is characterized by a movement towards the central point of the "visual scope" of x_i. The swarming behavior is a progressive stage that is activated only if the central point has a better function value than the current x_i; otherwise, the point follows the searching behavior. The chasing behavior is a movement towards the point that has the best function value. The swarming and chasing behaviors can be considered local search. The leaping behavior handles the case where the best objective function value in the population does not change for a certain number of iterations; in this case, the algorithm selects a random individual from the population. This process helps the algorithm obtain better results when solving numerous problems.
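A minimal sketch of these behaviors, for a one-dimensional minimization problem, is given below. The crowding test, step sizes and fall-back order (swarm, then chase, then search) are assumptions chosen for illustration, not the exact rules of the fast AFSA variant in the paper.

```python
import random

def visual_scope(x_i, population, visual):
    """Neighbours within the 'visual scope' of x_i (1-D distance)."""
    return [x for x in population if x is not x_i and abs(x - x_i) <= visual]

def afsa_move(x_i, population, f, theta=0.5, visual=1.0, step=0.3):
    """One move of individual x_i (minimization): swarm towards the centre
    if the scope is not crowded and the centre is better, otherwise chase
    the best neighbour, otherwise fall back to random searching."""
    scope = visual_scope(x_i, population, visual)
    if scope:
        crowded = len(scope) / len(population) > theta   # crowd test with theta
        centre = sum(scope) / len(scope)
        if not crowded and f(centre) < f(x_i):           # swarming behaviour
            return x_i + step * (centre - x_i)
        best = min(scope, key=f)
        if f(best) < f(x_i):                             # chasing behaviour
            return x_i + step * (best - x_i)
    # searching behaviour: random trial move, kept only if it improves f,
    # otherwise the individual simply moves randomly
    trial = x_i + random.uniform(-step, step)
    return trial if f(trial) < f(x_i) else x_i + random.uniform(-step, step)
```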

Swarm intelligence for scheduling: a review

Pan et al. [29] propose an ABC algorithm for the flow-shop scheduling problem that improves on the original ABC. In this work, the food sources are considered not as continuous solutions but as discrete job permutations, with different neighbouring-solution generation schemes. Huang and Lin [49] present work on the open-shop scheduling problem "with an idle-time-based filtering scheme", a system that can automatically adapt its behaviour by stopping the search on solutions with insufficient fitness, decreasing the "time-cost for the remaining partial solution".
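The discrete representation attributed to Pan et al. can be illustrated briefly: a food source is a job permutation, and a neighbouring solution is produced by a small perturbation of it. The insert-move neighbourhood and the greedy acceptance below are assumptions made for the example, not the exact operators of [29].

```python
import random

def random_food_source(n_jobs):
    """A food source represented as a discrete job permutation."""
    perm = list(range(n_jobs))
    random.shuffle(perm)
    return perm

def neighbour(perm):
    """Generate a neighbouring permutation with an insert move:
    remove one job and reinsert it at another position."""
    new = perm[:]
    i, j = random.sample(range(len(new)), 2)
    job = new.pop(i)
    new.insert(j, job)
    return new

def employed_bee_step(perm, makespan):
    """Keep the neighbouring permutation only if it has a better
    (lower) makespan according to the supplied evaluation function."""
    cand = neighbour(perm)
    return cand if makespan(cand) < makespan(perm) else perm
```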
