Praveen Gupta et al. (2010) described: "Cloud computing has come out to be an interesting and beneficial way of changing the whole computing world. In this paper, we deal with the various methodologies adopted to handle all the processes and jobs concurrently executing and waiting in the web application and web server, housed in the same system or in different systems. These different methods are then compared using the same number of jobs but varied environmental conditions, and the results are formulated. Various issues such as virtual resources, queuing strategies and resource managers have been discussed here apart from the main coverage points. All these aspects are closely studied, observed and supported with proper explanations."
The authors design, develop and propose a scheduling algorithm that produces an appropriate allocation map of jobs under different factors. In job scheduling, priority is the biggest issue because some jobs need to be scheduled first while other jobs can wait for a long time. In this paper, a systematic review of various priority-based job scheduling algorithms is presented. These algorithms differ in perspective, working principles and so on. This study concludes that the existing techniques mainly focus on the priority of jobs, on reducing service response time and on improving performance. Many parameters can be considered as factors in the scheduling problem, such as load balancing, system throughput, service reliability, service cost and service utilization.
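As a minimal illustration of the priority-based scheduling these surveys describe (job names and priority values here are hypothetical, not taken from any of the reviewed papers), a priority queue can order jobs so that urgent ones run first:

```python
import heapq

def schedule_by_priority(jobs):
    """Return job names in execution order, lowest priority number first."""
    heap = [(priority, name) for name, priority in jobs]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

jobs = [("backup", 3), ("payment", 1), ("report", 2)]
print(schedule_by_priority(jobs))  # ['payment', 'report', 'backup']
```

A heap keeps insertion and extraction at O(log n), which is why priority schedulers typically use one rather than re-sorting the ready queue on every arrival.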
Vijindra and Sudhir Shenai (2012), in their paper, presented an algorithm for a cloud computing environment that can automatically allocate resources based on energy optimization methods, and then proved its effectiveness. In their experiments and results analysis, they find that in a practical cloud computing environment, using one whole cloud node to calculate a single task or job wastes a lot of energy, even though the structure of the cloud framework naturally supports parallel processing. An automatic process is needed to find the appropriate CPU frequency, main memory mode, or disk mode and speed. They also deployed scalable distributed monitoring software for the cloud clusters.
In (Paul and Sanyal, 2011), the authors discussed "the issue of how to utilize cloud computing resources proficiently and gain maximum profit with the job scheduling system. For this purpose, they proposed a credit-based scheduling algorithm to evaluate the entire group of tasks in the task queue and find the minimal completion time of all tasks. The proposed scheduling method treats the scheduling problem as an assignment problem in mathematics, where the cost matrix gives the cost of a task being assigned to a resource. However, the algorithm does not consider the processing time of a job; other issues are considered instead, such as the probability of a resource becoming free soon after executing a task so that it is available for the next waiting job".
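The credit-based method above treats scheduling as an assignment problem over a cost matrix. A brute-force sketch (the task/resource counts and cost values are invented for illustration; Paul and Sanyal's actual credit computation is not reproduced here) finds the minimum-cost one-to-one assignment:

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Exhaustively assign task i to resource perm[i], minimizing total cost.
    cost[i][j] = cost of running task i on resource j.
    O(n!) — fine for a demo, replaced by the Hungarian method in practice."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_cost, best_perm = total, perm
    return best_perm, best_cost

cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
print(min_cost_assignment(cost))  # minimum total cost is 12
```

For real workloads the O(n!) search is replaced by the Hungarian algorithm, which solves the same assignment problem in polynomial time.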
Bo Yin et al. discuss multi-dimensional resource allocation. In this paper, resource allocation is studied at the application level, instead of mapping physical resources to virtual resources, for better resource utilization in a cloud computing environment. A multi-dimensional resource allocation (MDRA) scheme for cloud computing is proposed that dynamically allocates virtual resources among cloud computing applications, reducing cost by using fewer nodes to process applications. In this model, a two-stage algorithm is adopted to solve the multi-constraint integer programming problem. The algorithm can dynamically reconfigure the virtual machines for cloud applications according to load changes by assigning new applications to already-working nodes instead of opening new ones. Experimental results show that this algorithm can save resources, increase resource utilization and consolidate working nodes. MDRA provides resources to handle requirements more steadily. Moreover, in the long run the proposed algorithm saves power efficiently when user demand decreases.
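The consolidation idea behind MDRA — place a new application on a node that is already working before opening a new one — resembles first-fit bin packing. This sketch is only an analogy (demand values and the single-dimension capacity are assumptions; the paper's actual scheme is multi-dimensional and two-stage):

```python
def place_applications(demands, capacity):
    """First-fit placement: put each application on an existing node
    if it fits, otherwise open a new node. Returns the node list."""
    nodes = []  # each node: remaining capacity and hosted app demands
    for d in demands:
        for node in nodes:
            if node["free"] >= d:
                node["free"] -= d
                node["apps"].append(d)
                break
        else:
            nodes.append({"free": capacity - d, "apps": [d]})
    return nodes

nodes = place_applications([4, 3, 2, 5, 1], capacity=8)
print(len(nodes))  # 2 nodes opened instead of 5
```

First-fit never opens a node while an existing one can absorb the demand, which is exactly the cost-saving behaviour the paper reports.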
Cloud computing is a technique in which computing is delivered as a service rather than a product, whereby shared resources, software and information are provided to consumers as a utility over networks. One of the main advantages of, and motivations behind, cloud computing is reducing the CAPEX (capital expenditure) of systems from the perspective of both cloud users and providers. To benefit from the cloud, a client needs only to connect to the web, after which the client can perform operations without owning intensive computing and storage resources. Cloud computing services are provided by a CSP (cloud service provider) according to client requirements. To satisfy the demands of diverse clients, CSPs provide different qualities of service. In conclusion, the cloud is an executable environment with dynamic behaviour of resources as well as clients, offering various services.
Cloud computing has become one of the most prominent terms in the IT world due to its design for providing computing service as a utility. The typical use of cloud computing as a resource has changed the landscape of computing. Increased flexibility, better reliability, great scalability and decreased costs have captivated businesses and individuals alike because of the pay-per-use form of the cloud environment. Cloud computing is a completely internet-dependent technology in which client data are stored and maintained in the data center of a cloud provider such as Google, Amazon, Apple Inc. or Microsoft. The Anomaly Detection System is one of the intrusion detection techniques; it is an area of the cloud environment being developed for the detection of unusual activities in cloud networks. Although a variety of intrusion detection techniques are available in the cloud environment, this review paper exposes and focuses on different IDSs in cloud networks through different categorizations and conducts a comparative study of the security measures of Dropbox, Google Drive and iCloud, to illuminate their strengths and weaknesses in terms of security.
1) Static Scheduling Methods: Static scheduling algorithms assume all tasks arrive at the same instant of time and are independent of the system resources' states and their availability. Basic scheduling policies such as First-Come-First-Serve (FCFS) and Round-Robin (RR) are implemented in static mode. FCFS receives the tasks and queues them until resources are available; once resources become available, the tasks are allotted to them in order of arrival time. No other criteria for scheduling are considered in this technique, which makes it less complex in nature. The RR scheduler, on the other hand, uses a similar technique but grants a resource to a task only for a particular time interval; tasks are then queued again for their next turn. Yet another heuristic method is Opportunistic Load Balancing (OLB), which schedules each task on the next available machine regardless of its expected completion time. It can bring about a poor makespan because it merely tries to keep all machines busy at the same time.
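Assuming, as static scheduling does, that all tasks arrive at time zero (the burst times below are hypothetical), the two basic policies can be contrasted by the waiting time each task incurs:

```python
def fcfs_waiting(bursts):
    """FCFS: each task waits for all earlier arrivals to finish."""
    wait, elapsed = [], 0
    for b in bursts:
        wait.append(elapsed)
        elapsed += b
    return wait

def rr_waiting(bursts, quantum):
    """Round-robin: each task runs at most `quantum` units per turn."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    t = 0
    while any(r > 0 for r in remaining):
        for i in range(n):
            if remaining[i] > 0:
                run = min(quantum, remaining[i])
                t += run
                remaining[i] -= run
                if remaining[i] == 0:
                    finish[i] = t
    return [finish[i] - bursts[i] for i in range(n)]

bursts = [5, 3, 1]
print(fcfs_waiting(bursts))     # [0, 5, 8]
print(rr_waiting(bursts, 2))    # [4, 5, 4]
```

Note how RR spreads the waiting more evenly: the short task no longer sits behind the long ones, at the cost of extra context switches.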
Durairaj M. and Kennan P. explain various types of virtualization and elaborate a brief comparison of open-source hypervisor-based virtualization. This can be used in the design of a strong framework for elastic resource management in the cloud.
Computing has shifted from the purchase of a product to a pay-as-you-go service delivered to users over the internet, with data stored and maintained in data centers, known as the cloud, by different cloud service providers such as Google and Salesforce. In a cloud computing environment there should be an effective and efficient way to access data with minimum time and limited resources, under proper security; for this purpose, allocation and scheduling can be used. To speed up operations, various task scheduling techniques and algorithms have been proposed so far, but most of them do not consider both Quality of Service (QoS) and the virtual machine optimization factor, which are of utmost importance for satisfying user needs and for effective utilization of resources.
Abstract—This paper deals with an optimized multi-class SVM classifier (OMSC) with named entity extraction in a cloud environment. The proposed OMSC handles workflow scheduling in cloud computing, where data and files are transferred between participants based on different sets of rules, with the additional advantage of a rule-formation capability that follows 22 rule templates. It has shown improved performance against the traditional multi-class SVM classifier: the average f-score for the tested data sets is 81.04%, significantly better than the existing classifier. The time complexity is decreased, and as far as scheduling is concerned, both execution time and response time are improved.
The performance of the CPU depends on the scheduling policies adopted. These policies provide a scheduling framework for the execution of processes waiting in the ready queue. Various scheduling algorithms such as FCFS, SJF, Priority and Round Robin already exist in the operating systems literature. In an interactive environment, these scheduling policies differ in their efficiency depending upon the characteristics and criteria considered. Fuzzy logic can be integrated to decide which task to execute next. In the present paper, we propose the integration of fuzzy logic with existing scheduling policies. Fuzzy logic is used to select among different values using reasoning that is approximate and vague; with it, the order in which user instructions should run can be decided on the basis of already known knowledge. The inference system of fuzzy logic enables the scheduler to determine the order of tasks at the outset rather than waiting on fixed high and low priorities. In this paper we have designed a C++ simulator to compare various CPU parameters. With linguistic variables introduced through fuzzy inference rules, the efficiency of different scheduling policies has been evaluated.
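To make the fuzzy-scheduling idea concrete, here is a toy sketch in Python rather than the paper's C++ simulator. The membership ramps and the single rule ("short burst OR high user priority implies urgent") are assumptions for illustration, not the paper's actual inference rules:

```python
def fuzzy_urgency(burst, user_priority):
    """Fuzzy OR of 'short burst' and 'high priority' memberships.
    Memberships are simple linear ramps over [0, 10] (an assumption)."""
    short = max(0.0, 1.0 - burst / 10.0)    # shorter burst -> closer to 1
    high = min(1.0, user_priority / 10.0)   # higher priority -> closer to 1
    return max(short, high)                 # Zadeh fuzzy OR (max)

# task -> (burst time, user priority), values invented for the demo
tasks = {"t1": (8, 9), "t2": (2, 3), "t3": (6, 5)}
order = sorted(tasks, key=lambda t: -fuzzy_urgency(*tasks[t]))
print(order)  # ['t1', 't2', 't3']
```

The point is that the crisp inputs are mapped to degrees of membership and combined by a fuzzy operator, so a task can rank first either because it is short or because the user marked it important.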
Abstract— Cloud computing is a new paradigm in which computing is delivered as a service rather than a product, whereby shared resources, software and information are provided to consumers as a utility over networks. Cloud computing is capable of providing massive computing or storage resources without the need to invest money or face the trouble of building or maintaining such huge resources. Consumers only need to pay for using the services, just as they do for other day-to-day utility services such as water, gas and electricity. Scheduling algorithms are used for dispatching user tasks or jobs to a particular resource. Scheduling is a challenging job in the cloud because the capability and availability of resources vary dynamically. In this paper we provide a review of various scheduling techniques used in the cloud computing environment.
Distributed systems have become the soul of today's computing world and take various forms, such as grid computing, ubiquitous computing and cloud computing. In the present competitive environment, efficient utilization of resources is important, which is possible through efficient task and resource scheduling. For this purpose, various task scheduling algorithms have been proposed by eminent scholars. Meta-heuristic algorithms are renowned for achieving near-optimum results in terms of execution time, load balancing and cost; the underlying scheduling problems are NP-hard. This paper performs a SWOT analysis of a few of the prominent meta-heuristic algorithms and optimization techniques, such as genetic algorithms, Tabu Search and simulated annealing. An extensive comparative study is performed in terms of the strengths, weaknesses, opportunities and threats of the already proposed algorithms to find the scope for further research in these prominent areas.
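As a sketch of one meta-heuristic named above, simulated annealing can search task-to-machine assignments for a low makespan. The burst times, cooling schedule and step count below are assumptions for the demo, not parameters from any surveyed paper:

```python
import math
import random

def makespan(assign, bursts, machines):
    """Longest machine load under the given task-to-machine assignment."""
    load = [0.0] * machines
    for task, m in enumerate(assign):
        load[m] += bursts[task]
    return max(load)

def anneal(bursts, machines, steps=5000, temp=10.0, cooling=0.999):
    random.seed(1)  # deterministic for the demo
    current = [random.randrange(machines) for _ in bursts]
    cur_cost = makespan(current, bursts, machines)
    best, best_cost = list(current), cur_cost
    for _ in range(steps):
        cand = list(current)  # neighbour: move one random task
        cand[random.randrange(len(bursts))] = random.randrange(machines)
        cost = makespan(cand, bursts, machines)
        # always accept improvements; accept worse moves with Boltzmann probability
        if cost <= cur_cost or random.random() < math.exp((cur_cost - cost) / temp):
            current, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand, cost
        temp *= cooling
    return best, best_cost

bursts = [5, 9, 3, 7, 4, 6]  # total 34 on 3 machines -> makespan lower bound 12
best, span = anneal(bursts, machines=3)
print(span)
```

The temperature lets the search escape local optima early on, then hardens into hill-climbing as it cools; this is the "threat" side of SA's SWOT profile too, since results depend on the cooling schedule.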
CPU scheduling is one of the most essential operations performed by an operating system (OS). Among the many scheduling algorithms, round-robin (RR) is the principal one for achieving near-optimum behaviour in time-shared environments. RR decreases starvation and can integrate priority improvements. In this paper, a new optimization of the round-robin algorithm is proposed to improve CPU scheduling: when tasks share the same quantum value, each task allocated to the CPU receives a new priority, with the lowest burst time taking the highest priority, and tasks are rescheduled after their burst times are computed. This reduces the average waiting time (AWT) and turnaround time (TAT) compared with the standard round-robin algorithm and other related work.
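A small sketch of the effect described above (burst times and quantum are hypothetical; the paper's exact re-prioritization rule is summarized, not reproduced): running round-robin over a burst-time-ordered queue instead of the arrival order lowers the average waiting time.

```python
def rr_avg_wait(bursts, quantum):
    """Average waiting time of round-robin over the given task order."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    t = 0
    while any(remaining):
        for i in range(n):
            if remaining[i]:
                run = min(quantum, remaining[i])
                t += run
                remaining[i] -= run
                if remaining[i] == 0:
                    finish[i] = t
    return sum(finish[i] - bursts[i] for i in range(n)) / n

bursts = [8, 4, 2, 6]
standard = rr_avg_wait(bursts, quantum=3)            # arrival order
prioritized = rr_avg_wait(sorted(bursts), quantum=3)  # lowest burst first
print(standard, prioritized)  # 10.25 7.25
```

Short tasks finish in their first turns instead of cycling behind long ones, which is the AWT/TAT gain the paper claims.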
Advancement in technological innovation, and how current societies and organizations adapt in this cloud era, are clear evidence that cloud computing is the future stage for both individual and organizational needs. Cloud computing has become one of the most appealing fields in both ICT (Information and Communication Technology) and scholarly research. The fundamental reason is that most associations, organizations and individuals cannot purchase such costly assets (hardware and software environments). Cloud computing is increasingly becoming the best platform for organizations and individuals; with this computing advancement, clients can access various facilities such as operating systems, virtual desktops, web services, databases, storage, networking and research-and-development platforms at minor expense. Clients additionally use specific applications on the "pay as you go" basis offered by the cloud computing environment. The many benefits of cloud computing environments include cost savings, adaptability, high availability, fast deployment, energy efficiency and scalability [4, 5, 6]. We omit debates about our definition and use the worldwide definition announced by NIST, according to which cloud computing is "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."
A university private cloud is a typical application of cloud computing in the higher-education environment. It can simulate an environment similar to commercial cloud computing, and an institution can build its own private cloud using an open-source cloud platform on its existing and future infrastructure. This private cloud can be deployed inside the firewall, which avoids some potential security exposure during the process of transferring data to a third-party data center. At the same time, the school can provide students with platform services, resource services and process services through the cloud environment. However, resource scheduling is always a key problem in building a private cloud, as it is directly related to the stability of the system, resource utilization and the users' Quality of Service. The resource scheduling systems and supporting methods that exist in present cloud computing are unfit for the private cloud environment. In this paper, a resource scheduling model and algorithm are proposed based on the characteristics of the university private cloud.
Cloud computing is an emerging technology based upon internet computing, in which resources (software and hardware) are shared on demand. Cloud computing relies on its important feature, virtualization, to access remote and geographically distributed resources. Depending upon the cloud service provider and user requirements, a number of virtual machines are used, so it is necessary to schedule VM requests. Most businesses are migrating from on-premise to the cloud, and millions of users access these services daily via the internet, so it is very important to apply appropriate scheduling techniques to process large amounts of data and to utilize resources more efficiently with better performance. Nowadays, task scheduling has become a challenge for researchers, and a number of algorithms are used to improve the efficiency of task and resource scheduling. In this paper, we discuss different scheduling algorithms such as FCFS (First Come First Serve), Max-Min and Min-Min.
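Of the heuristics just listed, Min-Min is the least obvious, so here is a minimal sketch (the expected-execution-time matrix is hypothetical): repeatedly pick the task whose best-case completion time is smallest and bind it to that machine.

```python
def min_min(etc):
    """Min-Min heuristic. etc[t][m] = expected execution time of
    task t on machine m. Returns (assignment dict, makespan)."""
    machines = len(etc[0])
    ready = [0.0] * machines            # machine availability times
    unassigned = set(range(len(etc)))
    assignment = {}
    while unassigned:
        # minimum completion time over all (task, machine) pairs
        t, m, finish = min(
            ((t, m, ready[m] + etc[t][m])
             for t in unassigned for m in range(machines)),
            key=lambda x: x[2],
        )
        assignment[t] = m
        ready[m] = finish
        unassigned.remove(t)
    return assignment, max(ready)

etc = [[6, 3], [2, 5], [4, 4]]
print(min_min(etc))  # makespan 6
```

Min-Min favours short tasks, which keeps machines turning over quickly but can starve long tasks; Max-Min is the mirror image, picking the task with the largest minimum completion time first.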
Abstract - Cloud computing is a popular distributed computing model based on a pay-as-per-use policy. It intends to share a pool of resources globally. Job scheduling is one of the active areas of research in the cloud environment. The main aim of job scheduling is to achieve high performance on various computing applications. A good job scheduling policy helps ensure proper utilization of resources on virtual machines (VMs). Job scheduling is an NP-complete problem, which makes it play an important role in the cloud environment. In this paper, different types of scheduling algorithms in different cloud environments are discussed.
In , a Pre-fetching based Dynamic Data Replication Algorithm (PDDRA) is presented. PDDRA pre-replicates files based on the file access history of the grid sites. This strategy improves job execution time, network usage, hit ratio and storage usage, but best-replica selection is not studied. In , the authors presented an algorithm which considers the number of file requests and the response time to place the replica in the best site within the cluster; in this way, mean job execution time is minimized. In , a Bandwidth Hierarchy based Replication (BHR) strategy is presented. The algorithm decreases data access time by maximizing network-level locality and avoiding network congestion; BHR performs well only when storage capacity is limited. In , Modified BHR is presented. This strategy replicates the file that has been accessed most often and is likely to be used in the near future.
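The core selection step shared by these history-based strategies can be sketched in a few lines (file names, access log and the cut-off k are invented for illustration; none of the surveyed papers' bandwidth or locality logic is reproduced here):

```python
from collections import Counter

def pick_replication_candidates(access_log, k=2):
    """Return the k most frequently accessed files. Modified-BHR-style
    strategies replicate such files, assuming past access frequency
    predicts near-future demand."""
    counts = Counter(access_log)
    return [name for name, _ in counts.most_common(k)]

log = ["a.dat", "b.dat", "a.dat", "c.dat", "a.dat", "b.dat"]
print(pick_replication_candidates(log))  # ['a.dat', 'b.dat']
```

The full strategies then decide where to place each candidate (e.g. BHR prefers sites reachable over high-bandwidth links), which is the part that differentiates the papers above.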