5.3 Future Work
In the future, we plan to validate the approach after adding the capability to consider a larger number of priority factors simultaneously. A considerable part of adding more priority factors lies in the formula, which also has the potential to be modified in the future. We plan to extend the formula so that it supports more priority factors than the ones we proposed at this time. Increasing the number of priority factors and refining the formula yields more accurate priority scores for the tasks and, therefore, more effective priority scheduling and more efficient resource utilization. Adding more priority factors will also allow cloud service providers to further customize the proposed scheduling approach to better fit their needs. Priority factors that we did not consider at this time but might consider in the future include user preference on the geolocation of execution and user task submission history.
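One common way such a multi-factor priority formula could be extended is a normalized weighted sum. The sketch below is purely illustrative: the factor names and weights are hypothetical and are not taken from the proposed formula.

```python
# Illustrative weighted-sum priority score. Factor names and weights are
# hypothetical, not the paper's actual formula.
def priority_score(factors, weights):
    """Combine normalized priority factors (each in [0, 1]) into one score."""
    assert set(factors) == set(weights)
    total_w = sum(weights.values())
    return sum(weights[k] * factors[k] for k in factors) / total_w

# Example task: urgent deadline, mid-tier user, small task.
task = {"deadline_urgency": 0.9, "user_tier": 0.6, "task_size": 0.3}
weights = {"deadline_urgency": 3.0, "user_tier": 2.0, "task_size": 1.0}
score = priority_score(task, weights)  # higher score -> scheduled earlier
```

New factors (e.g. geolocation preference or submission history) would then be added as extra keys with their own weights, without restructuring the formula.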
The rapid growth in demand for computational power driven by modern service applications, combined with the shift to the cloud computing model, has led to the establishment of large-scale virtualized data centers. Such data centers consume enormous amounts of electrical energy, resulting in high operating costs and carbon dioxide emissions. Dynamic consolidation of virtual machines (VMs) using live migration, together with switching idle nodes to sleep mode, allows cloud providers to optimize resource usage and reduce energy consumption. However, the obligation to provide high quality of service to customers creates an energy-performance trade-off, as aggressive consolidation may lead to performance degradation. Due to the variability of workloads experienced by modern applications, VM placement should be optimized continuously in an online manner. An efficient task scheduling mechanism can meet users' requirements and improve resource utilization, thereby enhancing the overall performance of the cloud computing environment. A scheduling algorithm that maintains the quality of service offered to users while reducing energy consumption and increasing resource utilization is proposed and evaluated in a simulation environment.
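The consolidation idea can be illustrated with a simple bin-packing heuristic: pack VMs onto as few hosts as possible so the remaining hosts can sleep. This first-fit-decreasing sketch shows the general principle only and is not the paper's algorithm.

```python
# Minimal sketch of VM consolidation: first-fit decreasing packs VM loads
# onto as few hosts as possible so idle hosts can switch to sleep mode.
# Illustrates the general idea, not the paper's consolidation algorithm.
def consolidate(vm_loads, host_capacity):
    """Return a list of hosts, each a list of VM loads, packed first-fit."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)
                break
        else:
            hosts.append([load])  # no host fits: power on a new one
    return hosts

hosts = consolidate([0.5, 0.2, 0.7, 0.1, 0.4], host_capacity=1.0)
active = len(hosts)  # hosts beyond this count can sleep, saving energy
```

A real consolidation manager would additionally bound migration cost and leave headroom on each host to avoid the performance degradation mentioned above.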
Load balancing is a technique that distributes the excess dynamic local workload evenly across all the nodes. It is used to achieve better service provisioning and a higher resource utilization ratio, hence improving the overall performance of the system. Incoming tasks arriving from different locations are received by the load balancer and then distributed to the data center for proper load distribution. The aims of our project are as follows: to increase the availability of services, to increase user satisfaction, to maximize resource utilization, to reduce the execution time and waiting time of tasks coming from different locations, to improve performance, to maintain system stability, to build a fault-tolerant system, to accommodate future modification, and to avoid overloading of virtual machines. With growing demand in the cloud computing industry, cloud service providers compete to attract customers.
Cloud computing is regarded as the latest development in IT that can accommodate these needs. The rationales for using cloud computing are maximum efficiency and minimum cost. Among cloud computing challenges, server cost in relation to virtual machines can be considered a significant issue.
In this paper, server cost has been examined as the research problem; more precisely, scheduling virtual machines in cloud computing has been investigated. The advantage of common scheduling methods is that significant parameters such as energy consumption optimization, migration time minimization, response time reduction, resource utilization enhancement and system efficiency improvement have been addressed and enhanced. These parameters have been extensively studied and optimized in the literature related to cloud computing. However, host machine cost as it affects service providers is another important parameter that has been largely ignored and under-researched. To address this gap, the researchers propose a novel scheduling algorithm for cloud computing settings that is based on server cost. The merits of the scheduling algorithm proposed here are enhanced income and resource utilization. The implementation results revealed that the proposed scheduling algorithm improved income and resource utilization by 8% and 3%, respectively, when compared with the basic method. Nevertheless, a minor reduction in resource utilization in comparison with the rotational shift method is a partial drawback of the proposed method that should be addressed in future studies.
Department of Computer Science & Engineering Surbhi College of Engineering & Technology, Bhopal, India
Abstract:- Cloud computing has evolved as a new technology in the field of IT and is growing very fast due to attractive features such as ease of use, dynamic allocation and reallocation of resources, and low cost. It provides on-demand resources to clients on a rental basis. Cloud computing supports the utility model, so a user has to pay only for the resources used. Since it provides resources to users, and demand for resources has been increasing very fast over the past few decades, load balancing is a main requirement of the cloud system. But load balancing in the cloud is more difficult than in other technologies because the cloud is very large and user requirements can change dynamically. It helps in optimizing resource utilization, hence enhancing system performance. The prime goal of any load balancing approach is to maximize resource utilization and reduce the number of active servers, which further reduces energy consumption and carbon emissions. Over the past decades, several load balancing approaches have been proposed. The main objective of these approaches is to increase system performance by reducing the number of migrations, but they do not focus on resource wastage. This paper proposes a load balancing approach that also accounts for resource wastage.
Cloud computing provides computing resources on demand, and the concept is "pay per use".
Cloud computing mainly focuses on optimal resource utilization at low cost. Nowadays, cloud computing technology is utilized by most IT companies and business organizations. This increases the number of cloud users as well as computing resources, which creates challenges for cloud service providers in maintaining optimum utilization of computing resources. Task scheduling methods play an important role in cloud computing. A scheduling mechanism helps allocate a virtual machine to a user task and maintain the balance between machine capacity and total task load. Different task scheduling methods have been suggested by cloud researchers. In this research work, we present a hybrid ACHBDF (Ant Colony, Honey Bee with Dynamic Feedback) load balancing method for optimum resource utilization in cloud computing. The proposed ACHBDF method combines two dynamic scheduling methods with a dynamic time-step feedback method. ACHBDF utilizes the qualities of the ant colony method and the honey bee method in efficient task scheduling. Here, the feedback strategy helps to check system load after each scheduling event in a dynamic feedback table, which allows tasks to be migrated more efficiently in less time. An experimental analysis of existing ant colony optimization, the honey bee method and the proposed ACHBDF clearly shows that ACHBDF outperforms the existing methods.
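The feedback step described above can be sketched as a simple check-and-migrate loop over a load table: after each scheduling round, the busiest and idlest VMs are compared and a task is moved if the gap is too large. The threshold and data layout here are illustrative assumptions, not the actual ACHBDF method.

```python
# Simplified sketch of the dynamic-feedback idea: after each scheduling
# round, inspect per-VM loads and migrate one task from the busiest VM to
# the idlest. Threshold and structure are assumptions, not actual ACHBDF.
def rebalance(vm_tasks, threshold=0.25):
    """vm_tasks: dict vm_id -> list of task loads. Migrates one task if the
    load gap between busiest and idlest VM exceeds threshold."""
    loads = {vm: sum(t) for vm, t in vm_tasks.items()}
    busiest = max(loads, key=loads.get)
    idlest = min(loads, key=loads.get)
    if loads[busiest] - loads[idlest] > threshold and vm_tasks[busiest]:
        task = min(vm_tasks[busiest])      # move the smallest task
        vm_tasks[busiest].remove(task)
        vm_tasks[idlest].append(task)
        return (busiest, idlest, task)
    return None                            # system considered balanced

vms = {"vm1": [0.4, 0.3], "vm2": [0.1]}
migration = rebalance(vms)                 # moves the 0.3 task to vm2
```

In the full method, the candidate VM would be chosen via ant colony pheromone trails and honey bee foraging rather than by a direct min/max scan.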
To reduce the request rejection rate between consumer and provider, and to increase resource utilization on the cloud provider side, the Starvation-Removal and AR-to-BE Conversion algorithms are necessary. The proposed Starvation-Removal algorithm bounds how long a lease can be suspended, considering the flexibility of its constraints, to maximize the chances of acceptance. Using the proposed algorithms, consumers get a suitable lease, with suspension allowed according to their needs. The AR-to-BE Conversion algorithm reduces consumers' effort in waiting for the exact time of lease execution and checking whether the lease has been provisioned or rejected at all. These algorithms do not handle situations where the system has multiple requests for the same type of lease in a single slot; in that case they simply follow a first-in-first-out queue, as proposed in Haizea.
the latter being controllable through parameter configuration. The analysis of a production cloud computing system demonstrates that it is possible to convert real task patterns into boxes applicable for theoretical scheduling. Specifically, we are capable of automatically capturing deviation in task execution within multi-tenant environments and the introduction of new tasks. However, it is worth highlighting that although resource utilization patterns change dynamically over a month-long period, task patterns are observed to be relatively stable per individual task as analyzed in , and the CPU utilization is a five-minute aggregate, resulting in increased algorithm accuracy. As a result, it is necessary to study resource boxing at a much higher fidelity of resource utilization patterns.
It is the process of reassigning the total load to the individual nodes of the collective system to make resource utilization effective and to improve the response time of jobs, while removing the condition in which some nodes are overloaded and others are underloaded. A load balancing algorithm that is dynamic in nature does not consider the previous state or behavior of the system; that is, it depends only on the present behavior of the system. The important things to consider while developing such an algorithm are: estimation of load, comparison of load, stability of different systems, performance of the system, interaction between the nodes, the nature of the work to be transferred, and selection of nodes, among others. The load considered can be in terms of CPU load, amount of memory used, delay, or network load.
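Load estimation over several such metrics is often reduced to a single index, e.g. a weighted combination. The weights below are hypothetical and would be tuned per system; this is a sketch of the estimation step only, not a prescribed formula.

```python
# Illustrative composite load index over CPU, memory and network metrics.
# The weights are hypothetical tuning parameters, not prescribed values.
def node_load(cpu, mem, net, w_cpu=0.5, w_mem=0.3, w_net=0.2):
    """Each metric is normalized to [0, 1]; returns a weighted load index."""
    return w_cpu * cpu + w_mem * mem + w_net * net

metrics = {"n1": (0.9, 0.8, 0.4), "n2": (0.2, 0.3, 0.1)}
loads = {n: node_load(*m) for n, m in metrics.items()}
overloaded = max(loads, key=loads.get)   # candidate to shed work
underloaded = min(loads, key=loads.get)  # candidate to receive work
```

Comparing the resulting indices is then the "comparison of load" step: work migrates from the node with the highest index toward the one with the lowest.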
Cloud-based applications suffer from the latency problem existing in the network. The current technology is based on live migration of virtual machines for proper resource allocation to jobs, but the migrations add overhead to the network. The common approach to this problem is to develop several possible schedules and choose the most suitable one, minimizing migration and maximizing resource utilization. Yet another requirement is that jobs must be completed within user-specified deadlines. Thus, the present study focuses on developing task scheduling strategies that ensure completion of tasks within deadlines and optimal resource utilization through load balancing, in order to minimize migration of tasks.
Cloud computing provides a pay-as-you-go model in which the user pays for the services used. However, one of the major challenges in cloud computing is optimizing the resources being allocated. Because of the uniqueness of the model, resource allocation should be performed with the objective of minimizing the associated costs. This optimized use of the cloud can only be achieved by an efficient and effective algorithm for selecting the best resources. In this paper, Task-Based Allocation (TBA) of resources is used to minimize the makespan of the cloud system and to increase resource utilization. The simulation is done using CloudSim, and results show that the TBA algorithm reduces makespan, execution time and cost compared with the Random and FCFS algorithms.
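A makespan-reducing allocation can be sketched with a greedy earliest-completion-time rule: each task is placed on the VM where it would finish soonest. This is a generic illustration of how makespan-aware placement beats random or FCFS placement, not necessarily the paper's TBA algorithm.

```python
# Greedy earliest-completion-time assignment: each task goes to the VM
# where it would finish soonest, reducing makespan versus random or FCFS
# placement. A generic sketch, not necessarily the paper's TBA algorithm.
def assign_tasks(task_lengths, vm_speeds):
    finish = [0.0] * len(vm_speeds)        # current finish time per VM
    placement = []
    for length in sorted(task_lengths, reverse=True):
        # completion time if this task is appended to each VM's queue
        best = min(range(len(vm_speeds)),
                   key=lambda v: finish[v] + length / vm_speeds[v])
        finish[best] += length / vm_speeds[best]
        placement.append(best)
    return placement, max(finish)          # makespan = latest finish time

placement, makespan = assign_tasks([8, 4, 6, 2], vm_speeds=[2.0, 1.0])
```

Sorting tasks longest-first before placing them is a standard refinement that tightens the makespan bound of the greedy rule.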
Bangalore Institute of Technology, Bengaluru, Karnataka, India.
In cloud computing, users' tasks come with varied resource demands, but the resource planned is always higher than the actual requirement for the successful execution of a task. The majority of tasks may not consume the entire amount of resources allocated for their execution, leading to improper resource utilization and load imbalance, and thus high cloud maintenance costs. One way to address this issue is to have prior knowledge of resource requirements and to characterize incoming tasks based on their resource requirements for efficient use of resources. Hence, a task classification model is proposed, which analyses incoming tasks and categorizes them into different clusters based on workload using a fuzzy clustering algorithm. Furthermore, depending on the tasks' CPU and memory requirements, the clustered tasks are buffered as light, heavy, compute-intensive, and memory-intensive, which benefits the scheduling and allocation process. The result of the clustering is used in task scheduling and in estimating the actual resources required for successful task execution. The experimental results are compared with existing clustering algorithms, and the proposed method proves to achieve increased resource savings.
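The buffering step can be sketched as a categorization over CPU and memory demand. The thresholds below are hypothetical, and this rule-based sketch deliberately omits the fuzzy clustering stage that the paper uses to derive the categories.

```python
# Sketch of the buffering step: bucket tasks by CPU and memory demand into
# the four classes named above. Thresholds are hypothetical; the paper
# derives clusters via fuzzy clustering, which this sketch does not do.
def categorize(cpu, mem, cpu_hi=0.5, mem_hi=0.5):
    """cpu/mem are normalized demands in [0, 1]."""
    if cpu >= cpu_hi and mem >= mem_hi:
        return "heavy"
    if cpu >= cpu_hi:
        return "compute-intensive"
    if mem >= mem_hi:
        return "memory-intensive"
    return "light"

labels = [categorize(c, m) for c, m in [(0.8, 0.7), (0.8, 0.2),
                                        (0.1, 0.9), (0.2, 0.1)]]
```

The scheduler can then draw from these buffers, e.g. pairing compute-intensive tasks with CPU-rich VMs and memory-intensive tasks with memory-rich VMs.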
where E_avg, E_max, and E_min represent the average, maximum, and minimum execution times of the n virtual machines, respectively.
Load_j is inversely proportional to E_j and decreases as E_j increases. When E_j < E_avg, the value of Load_j is greater than 1, indicating that the resources on virtual machine VM_j are free, so tasks may be preferentially allocated to VM_j to improve resource utilization. When E_j > E_avg, the value of Load_j is less than 1, indicating that the virtual machine is overloaded; continuing to allocate tasks to VM_j may cause load imbalance and reduce system operating efficiency.
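The described behavior (inverse proportionality, and Load_j crossing 1 exactly at E_j = E_avg) is consistent with the simple form Load_j = E_avg / E_j. Since the equation itself is not reproduced here, this reconstruction is an assumption used only to illustrate the allocation rule.

```python
# Assumed reconstruction: Load_j = E_avg / E_j, which matches the stated
# properties (inversely proportional to E_j, equal to 1 when E_j = E_avg).
# The original equation is not shown, so this form is an assumption.
def load_factors(exec_times):
    e_avg = sum(exec_times) / len(exec_times)
    return [e_avg / e for e in exec_times]

loads = load_factors([2.0, 4.0, 6.0])   # E_avg = 4.0
# VM with E_j = 2.0 gets Load_j = 2.0 (> 1: free, prefer for new tasks);
# VM with E_j = 6.0 gets Load_j < 1 (overloaded: avoid allocating to it).
```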
Index Terms - Meta-task set, Min-Min algorithm, Priority, Resource allocation, Cost, Makespan.
Computing in the cloud is characterized by a collection of resources, used for computation and communication, sited in distributed data centres and shared among various clients. Scheduling is the main challenge in the cloud environment. Several parameters such as makespan, resource utilization, fault tolerance, load balancing, energy efficiency, cost, deadline and priority are used in task scheduling.
Cloud computing is a rapidly emerging paradigm in this new era of technology. Basically, the cloud is a cluster of distributed and interlinked servers providing on-demand services to customers. Broadly, it offers software-as-a-service (SaaS), platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS). Here we focus on an IaaS cloud system, which offers computational resources to remote customers in the form of leases, and we define real-time, online optimized scheduling of requests for various resources arriving simultaneously at the data center of an IaaS cloud service provider. Being more practical, our algorithm provides better resource utilization and better results in terms of execution time compared with the DFPT algorithm for task scheduling in cloud computing, which does not consider dependencies between tasks (requests). Our algorithm performs dynamic task allocation on IaaS clouds and resource scheduling by utilizing the updated status of the various virtual machines available in real time. We have simulated this experiment using the CloudSim toolkit. There is a clear improvement in results compared with default FCFS scheduling and other available scheduling algorithms.
International Journal of Innovative Technology and Exploring Engineering (IJITEE) ISSN: 2278-3075, Volume-9 Issue-2, December 2019
Abstract: Cloud computing provides sharing ability and access to available cloud hosts across various distributed environments, involving load balancing (LB), virtualization technologies and scheduling techniques. The satisfaction of both users and cloud providers is the major issue for effective LB and task scheduling algorithms in cloud resource management, where the requirements are high resource utilization, low monetary cost and minimum makespan. Many researchers have tried to develop various heuristic and meta-heuristic algorithms to attain these user requirements. But when the number of tasks grows exponentially, these algorithms fail to achieve LB and low running time, and they face high time complexity. In this research work, a KD-Tree algorithm is developed to address the issues of heuristic algorithms and provide efficient LB by partitioning the environment into several tasks. According to the deadline of task execution, the remaining tasks are adjusted dynamically by the proposed KD-Tree algorithm in the virtual environment. Experiments are conducted to evaluate the efficiency of the KD-Tree algorithm against existing heuristic techniques using makespan, energy consumption and task migrations. When the number of tasks is 20, the proposed KD-Tree algorithm achieved 71.33% makespan and 5% task migrations.
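The data structure at the core of this approach can be sketched as a standard 2-d k-d tree: tasks described by two attributes are recursively split along alternating axes, giving a balanced partition of the task set. The (cpu, deadline) attributes below are illustrative assumptions; the sketch shows the structure, not the paper's full scheduling algorithm.

```python
# Minimal 2-d k-d tree over task points, here assumed to be
# (cpu_demand, deadline) pairs, showing how a task set is recursively
# partitioned. A generic structure sketch, not the paper's full algorithm.
def build_kdtree(points, depth=0):
    if not points:
        return None
    axis = depth % 2                      # alternate between the two axes
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                # median split keeps tree balanced
    return {"point": points[mid],
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def tree_size(node):
    if node is None:
        return 0
    return 1 + tree_size(node["left"]) + tree_size(node["right"])

tasks = [(0.2, 10), (0.7, 5), (0.4, 8), (0.9, 3), (0.1, 12)]
tree = build_kdtree(tasks)
```

The median splits are what give the claimed efficiency over flat heuristic scans: locating tasks near a deadline or load boundary becomes a logarithmic-depth walk instead of a full pass.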
Keywords: Cloud Computing, Load Balancing, Raven Roosting Optimization Algorithm (RRO), Task Scheduling.
Dynamic extensibility, on-demand provisioning, and remote access to services have put cloud computing on the fast track of today's technology. For best performance, we require effective scheduling of jobs; thus, task scheduling is performed in cloud computing. Efficient task scheduling is the fulfillment of an objective function, such as completing tasks with less response time, less makespan and less cost, and with high resource utilization, subject to the constraints that must be satisfied when solving scheduling problems such as load balancing and energy efficiency.
dynamic types, each used based on requirements. Static algorithms are most useful when the same type of request arrives every day; if resources are required at run time based on need, dynamic algorithms are used. According to the literature review, various authors have done research based on task scheduling, response time, etc. Model-based methods predict and calculate the resource requirement of each virtual machine, but these algorithms consider only two parameters, CPU and memory. We research a predicted load balancing technique for resource scheduling in the cloud that considers CPU, memory, disk I/O, VPC and region, so we will obtain a more accurate result.
ABSTRACT: Workflow scheduling is a challenging field in computing in which tasks are scheduled according to user requirements, and it becomes costly due to the quality of service demanded by the user. A cloud environment has been deployed for this work so as to reduce the overall cost. To maintain and utilize resources in cloud computing, a scheduling mechanism is needed. Many algorithms and protocols are used to manage parallel jobs and resources and to enhance CPU performance in the cloud environment. Particle Swarm Optimization (PSO) and Grey Wolf Optimization (GWO) are used for effective scheduling. This work is based on the optimization of total execution time (TET) and total execution cost (TEC). The results of the proposed approach are found to be effective compared with existing methods. The particle swarm optimization is initialized using a Pareto distribution. TET and TEC illustrate the minimized cost and time obtained by using GWO to converge the virtual machine decision. Thus, the work concludes that GWO performs better compared with the existing BAT algorithm.
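The core GWO update can be illustrated on a toy one-dimensional cost function: each wolf moves toward positions guided by the three best solutions (alpha, beta, delta), with an exploration coefficient that decays over iterations. This is a generic GWO sketch on an assumed quadratic cost, not the paper's multi-objective TET/TEC workflow-scheduling formulation.

```python
import random

# Toy Grey Wolf Optimizer minimizing a 1-D cost function, illustrating the
# alpha/beta/delta-guided position update. The paper applies GWO to
# multi-objective workflow scheduling (TET/TEC), which this sketch omits.
def gwo_minimize(cost, lo, hi, wolves=10, iters=60, seed=1):
    rnd = random.Random(seed)
    pack = [rnd.uniform(lo, hi) for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=cost)
        alpha, beta, delta = pack[0], pack[1], pack[2]   # three leaders
        a = 2.0 * (1 - t / iters)          # coefficient decays 2 -> 0
        new_pack = []
        for x in pack:
            guided = []
            for leader in (alpha, beta, delta):
                A = 2 * a * rnd.random() - a
                C = 2 * rnd.random()
                guided.append(leader - A * abs(C * leader - x))
            x_new = sum(guided) / 3        # average of the three pulls
            new_pack.append(min(max(x_new, lo), hi))
        pack = new_pack
    return min(pack, key=cost)

best = gwo_minimize(lambda x: (x - 3.0) ** 2, lo=-10, hi=10)
```

In a scheduling setting, the position would instead encode a task-to-VM mapping and the cost would be a weighted combination of TET and TEC.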