Abstract: Cloud computing is a developing area of computing that provides Internet-based services such as shared resources, information and software packages as per client requirements at a specific time. Cloud operations are delivered through three service models: Software as a Service, Platform as a Service and Infrastructure as a Service. Clouds are deployed in four models: public, private, hybrid and community. Cloud computing has two main fields, load balancing and scheduling, in which there is vast scope for research. As the number of users increases day by day, tasks need to be scheduled efficiently as per user requirements. In this paper, our focus is on task scheduling. Some of the basic task scheduling algorithms are FCFS, RR, Priority-based, Min-Min and Max-Min.
It is the responsibility of the cloud resource manager to dispatch tasks optimally to cloud resources. Various scheduling algorithms are available for the cloud environment. The main task of cloud scheduling algorithms is to minimize the total completion time of tasks by finding the most suitable resources to allocate to them. However, minimizing the overall completion time of tasks does not necessarily minimize the execution time of each individual task. The main objective of this paper is to review various scheduling algorithms in the cloud environment.
In 2014, A. O. Joseph et al. carried out research on cloud security techniques, which is significant for data protection in the cloud environment. They provided a review of cloud security techniques for data security, considering the different protection techniques used in enterprises. The paper discusses some general protection techniques, including authentication and authorization, and also considers encryption and access control mechanisms for private data.
Abstract: Cloud computing is a pay-per-use model which provides services and various resources to users in an efficient way over the Internet without much economic investment. Due to this low cost and simple implementation, cloud computing has found applications in many areas. It is enriched with various features: it is heterogeneous, flexible, distributed and location independent, and it offers on-demand self-service and universal network access. It is gaining popularity because it is operable regardless of the underlying physical infrastructure, i.e. it does not require any special infrastructure to use the services and resources of the cloud. Cloud computing works on virtualisation: the user has no information about the physical infrastructure of the service provider, such as the location or the platform where a function is running. Therefore, proper arrangement of operations is required for load balancing of such systems, and scheduling exists for this purpose. Scheduling in cloud computing is performed at two levels: 1. Host level (VM scheduling: allocation of PEs to hosts); 2. User level (cloudlet scheduling: allocation of cloudlets to VMs for execution). In this paper we propose a new algorithm for allocation of PEs to VMs, and we improve the output of the algorithm by using an optimisation technique, Ant Colony Optimisation.
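The cited work's exact ACO formulation for PE allocation is not reproduced here; as a rough illustration of the general technique, the following minimal sketch uses a pheromone table over task-to-VM assignments and reinforces the best mapping found so far (all parameter values and the makespan objective are assumptions, not taken from the paper):

```python
import random

def aco_assign(task_lens, vm_speeds, n_ants=20, n_iters=50,
               alpha=1.0, rho=0.1, q=1.0, seed=0):
    """Minimal ant colony sketch: map each task to a VM, minimising makespan."""
    rng = random.Random(seed)
    n_tasks, n_vms = len(task_lens), len(vm_speeds)
    # pheromone[t][v]: learned desirability of running task t on VM v
    pher = [[1.0] * n_vms for _ in range(n_tasks)]
    best_map, best_span = None, float("inf")

    def makespan(mapping):
        loads = [0.0] * n_vms
        for t, v in enumerate(mapping):
            loads[v] += task_lens[t] / vm_speeds[v]
        return max(loads)

    for _ in range(n_iters):
        for _ in range(n_ants):
            # each ant builds a complete task->VM mapping probabilistically,
            # biased by the pheromone levels
            mapping = []
            for t in range(n_tasks):
                weights = [pher[t][v] ** alpha for v in range(n_vms)]
                mapping.append(rng.choices(range(n_vms), weights)[0])
            span = makespan(mapping)
            if span < best_span:
                best_map, best_span = mapping, span
        # evaporate everywhere, then deposit along the best mapping so far
        for t in range(n_tasks):
            for v in range(n_vms):
                pher[t][v] *= (1 - rho)
        for t, v in enumerate(best_map):
            pher[t][v] += q / best_span
    return best_map, best_span
```

Evaporation keeps the search from locking onto an early mediocre mapping, while the deposit step concentrates later ants around the best assignment seen so far.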
Praveen Gupta et al. (2010) described cloud computing as an interesting and beneficial way of changing the whole computing world. In this paper, we deal with the various methodologies adopted to handle all the processes and jobs concurrently executing and waiting in a web application and web server housed in the same system or in different systems. These different methods are then compared, taking into account the same number of jobs but varied environmental conditions, and the results are formulated. Various issues such as virtual resources, queuing strategies and resource managers have been discussed here apart from the main coverage points.
Sushil Kumar Saroj and Aravendra Kumar Sharma (2016) described that "CPU scheduling has a significant contribution in efficient utilization of computer resources and increases system performance by switching the CPU among the various processes. However, it also introduces some problems such as starvation, large average waiting time, large turnaround time and difficulties in practical implementation. Many CPU scheduling algorithms have been given to resolve these problems, but they fall short in some ways. Most of the given algorithms tried to resolve one problem but led to others. To remove these problems, we introduce an approach that uses both an average and a variable time quantum. In this approach, some processes are served with the average time quantum and others with a variable time quantum. This approach not only provides the minimum average waiting time and turnaround time but also tries to prevent the starvation problem".
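The cited paper's exact rule for splitting processes between the average and the variable quantum is not given here; one plausible illustration of the general idea is a round robin whose quantum for each pass is the mean of the remaining bursts, so the quantum adapts as work drains:

```python
from collections import deque

def rr_average_quantum(bursts):
    """Round robin in which each pass uses the mean remaining burst as the
    quantum. All processes are assumed to arrive at t=0; returns per-process
    waiting times (completion time minus burst)."""
    remaining = dict(enumerate(bursts))
    queue = deque(remaining)
    waiting = {}
    t = 0.0
    while queue:
        # quantum for this pass: mean of the bursts still outstanding
        quantum = sum(remaining.values()) / len(remaining)
        for _ in range(len(queue)):
            p = queue.popleft()
            run = min(remaining[p], quantum)
            t += run
            remaining[p] -= run
            if remaining[p] > 1e-9:
                queue.append(p)        # not finished: back for the next pass
            else:
                waiting[p] = t - bursts[p]
                del remaining[p]
    return waiting
```

Because the quantum tracks the mean burst, short processes tend to finish within their first pass, which is how an adaptive quantum can lower average waiting time relative to a fixed one.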
Swachil Patel and Upendra Bhoi (2013) described that "in cloud computing, there are many jobs that require to be executed by the available resources to achieve the best performance, minimal total completion time, shortest response time, efficient utilization of resources, etc. Because of these different objectives and the high performance of the computing environment, we need to design and propose a scheduling algorithm that produces an appropriate allocation map of jobs under the different factors. In job scheduling, priority is the biggest issue because some jobs need to be scheduled first while other jobs can wait for a long time. In this paper, a systematic review of various priority-based job scheduling algorithms is presented. These algorithms have different perspectives and working principles. This study concludes that all the existing techniques mainly focus on the priority of jobs, reducing service response time and improving performance. There are many parameters that can be considered as factors of the scheduling problem, such as load balancing, system throughput, service reliability, service cost, service utilization and so forth".
Akilandeswari P. and H. Srimathi (2016) described that "cloud computing is a utility-based environment with a pay-per-use model, achieved by parallel, distributed and cluster computing accessed through the Internet. Key advantages of cloud computing are on-demand self-service, scalability and elasticity. In on-demand self-service, cloud users can request and deploy their own software, customize it and pay for their own services. Scalability is achieved through virtualization. Being elastic in nature, the cloud service gives seemingly infinite computing resources (CPU, memory, storage). In the cloud environment, many scheduling algorithms are available to achieve quality of service, but as the scale of task execution increases, scheduling becomes more complex. So there is a need for better scheduling. This paper deals with a survey of dynamic scheduling, its different classifications and the scheduling algorithms currently used by cloud providers".
Scheduling is one of the major issues in the management of application execution in a cloud environment. The various available scheduling algorithms in cloud computing were surveyed, and it was noticed that disk space management is critical in virtual environments. A heuristic-based strategy was used to schedule EMAN, a bio-imaging workflow application, yielding 1.5 to 2.2 times better optimization of makespan and load balance. A genetic algorithm was used to find a schedule for workflow applications that meets a user-defined budget and deadline. The multi-objective MGrid resource service composition and optimal-selection (MO-MRSCOS) problem was resolved by PSO, which minimizes execution time and cost while maximizing reliability. A PSO algorithm that assigned cloud resources to workflow applications considered both computation cost and data transmission cost when finding a schedule; PSO attained 3 times the cost saving compared with BRS (Best Resource Selection). Existing scheduling algorithms do not consider reliability and availability, so there is a need for a scheduling algorithm that can improve availability and reliability in the cloud environment.
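The PSO approach above can be sketched in miniature. In the sketch below, the combined computation-plus-transmission cost of each task on each VM is assumed to be precomputed into a matrix (the cited work's actual cost model is not reproduced here); particle positions are continuous vectors rounded to VM indices when evaluated:

```python
import random

def pso_schedule(cost, n_particles=15, n_iters=60,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal PSO sketch mapping tasks to VMs to minimise total cost.
    cost[t][v] is the assumed (computation + transfer) cost of task t on VM v."""
    rng = random.Random(seed)
    n_tasks, n_vms = len(cost), len(cost[0])

    def to_map(pos):
        # round a continuous position vector to a concrete task->VM mapping
        return [min(n_vms - 1, max(0, int(x))) for x in pos]

    def fitness(pos):
        return sum(cost[t][v] for t, v in enumerate(to_map(pos)))

    swarm = [[rng.uniform(0, n_vms) for _ in range(n_tasks)]
             for _ in range(n_particles)]
    vel = [[0.0] * n_tasks for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    pbest_f = [fitness(p) for p in swarm]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    for _ in range(n_iters):
        for i, p in enumerate(swarm):
            for d in range(n_tasks):
                # exploitation pulls toward personal/global bests,
                # inertia (w) keeps the particle exploring
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - p[d])
                             + c2 * rng.random() * (gbest[d] - p[d]))
                p[d] = min(n_vms - 1e-9, max(0.0, p[d] + vel[i][d]))
            f = fitness(p)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = p[:], f
                if f < gbest_f:
                    gbest, gbest_f = p[:], f
    return to_map(gbest), gbest_f
```

The two velocity terms are exactly the exploitation (pull toward known good positions) and exploration (inertia plus randomness) functions mentioned in the PSO literature surveyed here.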
This paper presents a review of various task scheduling algorithms in the cloud environment, including RR, Max-Min, Min-Min, FCFS, MCT, PSO and GA, with a case study on a modified round robin (MRR) algorithm. The MRR algorithm has been tested using the CloudSim toolkit. The results show that when the MRR algorithm is used to schedule a number of Cloudlets over a number of VMs, the average waiting time becomes less than when using RR in the same environment. Thus, it is advisable to use the proposed MRR for task scheduling in cloud computing, because it reduces the average waiting time while keeping the good features of RR: fairness, avoidance of starvation, reliance on a simple rule, dynamic adaptation to the cloud environment, and suitability for load balancing.
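The MRR modification itself is not detailed here; the plain-RR baseline it is compared against can be sketched as cyclic dispatch of cloudlets to VMs, with the average waiting time computed from each VM's queue (cloudlet lengths and VM MIPS ratings below are illustrative assumptions):

```python
def rr_dispatch(cloudlet_lens, vm_mips):
    """Plain round-robin baseline: cloudlet i goes to VM i mod n_vms.
    Returns the average waiting time, where a cloudlet waits for every
    cloudlet queued before it on the same VM."""
    n_vms = len(vm_mips)
    queues = [[] for _ in range(n_vms)]
    for i, length in enumerate(cloudlet_lens):
        queues[i % n_vms].append(length)
    total_wait, n = 0.0, len(cloudlet_lens)
    for v, q in enumerate(queues):
        elapsed = 0.0
        for length in q:
            total_wait += elapsed           # time spent waiting in this queue
            elapsed += length / vm_mips[v]  # then the cloudlet executes
    return total_wait / n
```

A CloudSim-style comparison of RR against MRR would measure exactly this quantity for both policies over the same Cloudlet set.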
As opposed to normal workflows, big data workflows possess certain special characteristics. Big data workloads are highly data intensive and compute intensive. The tasks in the workflow receive massive data input from various data sources and use several servers to process and store data. It is very difficult to predict beforehand the amount of incoming data and how many resources the workflow will use to execute a big data task. The format of the data is unpredictable, and thus the number and type of virtual data sources needed are highly dynamic. Big data analytics also requires parallel processing on many servers, and how many virtual servers are needed, of which type and when, is likewise dynamic. Resource allocation decisions therefore cannot be static and must be made dynamically. The execution of scientific workflows is challenging in terms of data scaling, computational complexity, dynamic resource allocation and collaboration issues in heterogeneous environments. Furthermore, the cloud provider should provide robust workflow application integration with its cloud environment to gain user satisfaction and cost benefit.
Abstract— Scheduling is a fundamental operating system function, since almost all computer resources are scheduled before use. The CPU is one of the primary computer resources, and Central Processing Unit (CPU) scheduling plays an important role by switching the CPU among various processes. Since the processor is the most important resource in a computer, the operating system must keep as many processes as possible running at all times in order to make the best use of the CPU. A highly efficient CPU scheduler depends on the design of high-quality scheduling algorithms which suit the scheduling goals. In this paper, we review various fundamental CPU scheduling algorithms for a single CPU and show which algorithm is best for a particular situation.
Various types of scheduling algorithms exist in distributed computing systems, and most of them can be applied in the cloud environment with suitable modifications. The main goal of a job scheduling algorithm is to achieve high-performance computing and the best system throughput. Traditional job scheduling algorithms are not able to provide scheduling in cloud environments. According to a simple classification, job scheduling algorithms in cloud computing can be categorized into two main groups: batch mode heuristic scheduling algorithms (BMHA) and online mode heuristic algorithms. In BMHA, jobs are queued and collected into a set when they arrive in the system, and the scheduling algorithm starts after a fixed period of time. The main examples of BMHA-based algorithms are the First Come First Served (FCFS) scheduling algorithm, the Round Robin (RR) scheduling algorithm, the Min-Min algorithm and the Max-Min algorithm. In online mode heuristic scheduling, jobs are scheduled as they arrive in the system. Since the cloud environment is a heterogeneous system and the speed of each processor varies quickly, online mode heuristic scheduling algorithms are more appropriate for a cloud environment. The most fit task scheduling algorithm (MFTF) is a suitable example of an online mode heuristic scheduling algorithm.
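As a concrete example of a batch-mode heuristic, Min-Min repeatedly picks, from the collected batch, the task whose minimum completion time over all resources is smallest, and assigns it to the resource that achieves it. A minimal sketch (task lengths and VM speeds are illustrative assumptions):

```python
def min_min(task_lens, vm_speeds):
    """Batch-mode Min-Min: repeatedly schedule the task with the smallest
    minimum completion time onto the VM that achieves it.
    Returns (task -> VM schedule, makespan)."""
    ready = [0.0] * len(vm_speeds)     # time at which each VM becomes free
    schedule = {}
    unscheduled = set(range(len(task_lens)))
    while unscheduled:
        best = None                     # (completion_time, task, vm)
        for t in unscheduled:
            for v, speed in enumerate(vm_speeds):
                ct = ready[v] + task_lens[t] / speed
                if best is None or ct < best[0]:
                    best = (ct, t, v)
        ct, t, v = best
        ready[v] = ct                   # VM v is busy until this task finishes
        schedule[t] = v
        unscheduled.remove(t)
    return schedule, max(ready)
```

Max-Min is the same loop with the outer selection inverted: among the per-task minimum completion times, it schedules the task whose minimum is largest, which favours long tasks early.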
Abstract— Cloud computing is a recent advancement in the Internet world, which has been revolutionized by this provision of shared resources. Cloud service providers compete to scale virtualized resources dynamically. The performance and efficiency of cloud computing services always depend upon the performance of the user tasks submitted to the cloud system, and this performance can be significantly improved by scheduling the user tasks. The costs arising from data transfers between resources, as well as execution costs, must also be taken into consideration when optimizing system efficiency in scheduling. Moving applications to a cloud computing environment triggers the need for scheduling, as it enables the use of various cloud services to facilitate execution. The service provider's goal is to utilize the assets effectively and increase benefit. This makes task scheduling, the process of mapping tasks to available resources, a core and challenging issue in cloud computing. This paper presents a detailed study of various task scheduling methods existing for the cloud environment.
Abrishami Saed et al. (2013) designed and studied the Partial Critical Path (PCP) algorithm, a two-phase scheduling algorithm which aims to minimize the cost of workflow execution while meeting a user-defined deadline. They observed that clouds differ from utility grids in three ways: on-demand resource provisioning, homogeneous networks, and the pay-as-you-go pricing model. In this paper, the researchers adapted the PCP algorithm for the cloud environment and proposed two workflow scheduling algorithms: IaaS Cloud Partial Critical Paths (IC-PCP), a one-phase algorithm, and IaaS Cloud Partial Critical Paths with Deadline Distribution (IC-PCPD2), a two-phase algorithm. Both algorithms have polynomial time complexity, making them suitable choices for scheduling large workflows. The simulation results demonstrated that both algorithms have promising performance, with IC-PCP performing better than IC-PCPD2 in most cases.
Helmy and Dekdauk introduced burst round-robin (RR), a proportional-share scheduling algorithm that attempts to combine the low scheduling overhead of the RR algorithm with support for short tasks. Mohanty et al. proposed the shortest remaining burst round-robin (SRBRR) scheduling algorithm, which has the processors use a dynamic time quantum so that processes with short remaining bursts are served in round-robin fashion. Yaashuwanth and Ramesh created another scheduling algorithm that uses intelligent time slices for round-robin scheduling of tasks in real-time frameworks. Mostafa et al. proposed finding a better time quantum for the RR CPU scheduling algorithm using integer programming. Yadav et al. proposed another algorithm combining RR and SJF; their investigation demonstrates that this hybrid is superior to plain RR. Panda and Bhoi suggested an effective round-robin algorithm using the min-max dispersion ratio of the remaining CPU burst times; this algorithm outperforms RR in terms of average turnaround time, average waiting time and the number of context switches. The weighted round-robin is another approach, introduced to settle all recurring tasks before deactivating the VM. The weighted round-robin algorithm was derived from the conventional round robin: it assigns requests to resources in round-robin style, but relies on the weight of the request demand rather than the current load of the virtual machines. Although limited parameters were used in the evaluation, the weighted round-robin showed one of the best time-related performances in the tested results.
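Among these variants, weighted round-robin is the simplest to sketch: requests are still dealt out cyclically, but each VM receives a number of consecutive requests proportional to its weight. A minimal illustration (the weights are assumptions, not values from the cited work):

```python
def weighted_rr(requests, weights):
    """Weighted round-robin: VM v receives weights[v] consecutive requests
    per cycle, so higher-weight VMs get proportionally more work."""
    assignment = [[] for _ in weights]
    v, used = 0, 0
    for req in requests:
        assignment[v].append(req)
        used += 1
        if used >= weights[v]:           # this VM's share of the cycle is done
            v, used = (v + 1) % len(weights), 0
    return assignment

# e.g. 6 requests over two VMs with weights 2 and 1:
# VM0 gets requests 0,1,3,4 and VM1 gets 2,5
```

Unlike load-aware policies, the split depends only on the static weights, which matches the observation above that this scheme ignores the VMs' current load.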
Cloud computing is a well-known capability that relies on centrally located remote servers and network-based applications to handle information. It gives consumers economical use of computing without installing applications locally, allowing them to access their personal files from any computer with Internet access. This technique thus provides reliable and consistent computing by centralizing data storage, management, task processing and bandwidth. The illustration in Fig. 1 depicts the architecture of cloud computing.
In this work, four CPU scheduling algorithms (FCFS, SJF, PS and RR) are discussed, and three scheduling parameter metrics (average waiting time, average response time, average turnaround time) are measured with respect to randomly generated arrival times or without arrival times. In order to determine which algorithm gives the best performance, we test different job sets (10-500 jobs/processes) in a uniprocessor environment and show the impact of scalability on the different scheduling algorithms. The burst time, priority and arrival time of each process are generated randomly using an exponential probability distribution for every algorithm.
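The measurement setup can be sketched for the FCFS case (the distribution mean below is an assumption, since the work does not state its parameters here):

```python
import random

def fcfs_metrics(bursts):
    """Run FCFS over jobs that all arrive at t=0 and return
    (avg_waiting, avg_turnaround, avg_response)."""
    t, wait, tat = 0.0, 0.0, 0.0
    for b in bursts:
        wait += t        # FCFS: a job waits for everything queued before it
        t += b
        tat += t         # turnaround = completion time, since arrival is 0
    n = len(bursts)
    # under non-preemptive FCFS, response time equals waiting time
    return wait / n, tat / n, wait / n

# burst times drawn from an exponential distribution, as in the experiment
rng = random.Random(0)
bursts = [rng.expovariate(1.0 / 10.0) for _ in range(100)]
avg_wait, avg_tat, avg_resp = fcfs_metrics(bursts)
```

Repeating this over job sets of 10 to 500 processes, and for SJF, PS and RR in place of FCFS, reproduces the scalability comparison described above.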
processes arrive. If multiple processes with the same priority are ready to execute, control of the CPU is assigned to them on the basis of FCFS. Priority scheduling can be pre-emptive or non-pre-emptive in nature. In pre-emptive priority scheduling, the algorithm pre-empts the CPU if the priority of a newly arrived process is higher than the priority of the currently running process. A non-pre-emptive priority scheduling algorithm simply puts the new process at the head of the ready queue. A major problem with priority scheduling algorithms is indefinite blocking, or starvation. A process that is ready to run but waiting for the CPU can be considered blocked, and a priority scheduling algorithm can leave some low-priority processes waiting indefinitely. In a heavily loaded computer system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU. Generally, one of two things will happen: either the process will eventually be run, or the computer system will eventually crash and lose all unfinished low-priority processes. One cited work describes a priority scheduling algorithm in which processes are scheduled based on their precedence rate and allocated to the processor, and compares it with the existing scheduling algorithm based on duration and resource usage. Some advantages of the pre-emptive approach are
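The non-pre-emptive behaviour described above can be sketched with a priority queue, where ties between equal priorities fall back to FCFS order (lower number = higher priority is an assumed convention):

```python
import heapq

def priority_schedule(jobs):
    """Non-pre-emptive priority scheduling over (priority, burst) jobs that
    all arrive at t=0. Lower priority number runs first; ties break FCFS
    (by submission order). Returns per-job waiting times in submission order."""
    # heap entries: (priority, arrival_order, burst) -- the arrival_order
    # component gives the FCFS tie-break described in the text
    heap = [(prio, i, burst) for i, (prio, burst) in enumerate(jobs)]
    heapq.heapify(heap)
    waiting = [0.0] * len(jobs)
    t = 0.0
    while heap:
        prio, i, burst = heapq.heappop(heap)
        waiting[i] = t        # time this job spent ready but not running
        t += burst
    return waiting

# jobs as (priority, burst): the priority-2 job waits for both priority-1 jobs
# priority_schedule([(2, 4), (1, 3), (1, 2)]) -> [5.0, 0.0, 3.0]
```

Starvation is visible directly in this model: if priority-1 jobs keep being pushed onto the heap, a priority-2 job's waiting time grows without bound, which is the problem that aging-style fixes address.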
Lin et al. design bandwidth-aware task scheduling (BATS), an innovative task scheduling algorithm that uses a non-linear programming method to solve the bandwidth-constrained multiport model problem. The algorithm allocates an appropriate number of tasks to VMs while taking CPU, energy, storage and network speed into account. Netjinda et al. emphasise situations that require static task scheduling and consider workflows that are executed intermittently; to efficiently determine optimal solutions, their PSO algorithm needs to perform two important functions, exploitation and exploration. Wang et al. recommend the least-job-time-consuming and Load Balancing Genetic Algorithm (JLGA) to find the optimal task distribution in a dynamic cloud environment; the proposed algorithm decreases the makespan of tasks by handling the workload of the complete system. Because the VM load stays in a realistic condition, it avoids wasted resources and extra overhead; in addition, the ACO-LB algorithm efficiently assembles the appropriate resources at job completion and assists resource allocation within a peer group. Abdulhamid et al. propose the Global LCA (GBLCA) algorithm for solving the non-deterministic problem of secure task scheduling by minimizing the makespan and response time. Abdulhamid et al. also use the Dynamic Clustering LCA (DCLCA) algorithm for fault-tolerance-aware task scheduling, reducing the makespan and failure rate in cloud computing.