In both cases, image construction is required prior to the application deployment scenario, which is realised as a collection of Virtual Machines (VMs) containing application components. To the best of our knowledge, no current software solution provides capabilities to both generate base images that contain a functional operating system and install and configure a cloud application automatically. This therefore provides the opportunity to create multiple different configurations of an application, where these deployment configurations can be exploited by selecting the most appropriate one to serve the required system load while saving energy. For comparison, a tool such as Packer can be used to create golden images for multiple platforms from a single source configuration, but it does not support the automated installation of software into these images. Another tool, Vagrant, enables software development teams to create identical development environments, but it does not provide a mechanism to automate the deployment of software into these environments.
The specified system suggests that a simple task does not take long to run, so there is no compelling reason to offload it; complex jobs consume more energy than simple ones, so the system classifies tasks according to their complexity. This strategy allows a task to be executed in the cloud using offloading. Its benefit is that it measures the amount of consumed energy in both modes of execution, in the cloud and on the mobile device, and it uses compression in order to reduce the data volume. Its weaknesses are that the task scheduler is limited in some cases and that it requires each task's profile data to be selected beforehand. A related work provided a middleware application that can automatically distribute the different layers of an application between server and device while optimizing several parameters, for example delay, data transfer, and cost.
In order to achieve greener cloud computing, more active research needs to be done, with emphasis on practical experiments. The proposed work involves the design and development of unique techniques/algorithms that will not only reduce energy consumption and lower carbon emissions by considering scheduling and machine provisioning, but also ensure better quality of service by taking care of consolidation issues, satisfying clients and ensuring 24/7 service availability. The consolidation issue will be considered practically based on the type and number of VMs used. CPU utilization and primary memory will need to be allotted to consolidated VMs based on size, CPU speed, and RAM. A technique based on the consolidation problem will be designed to allow data of different capacities to be dynamically allotted to VMs with acceptable CPU and primary memory utilization. After tackling the consolidation problem, VM provisioning will be another issue; a technique/algorithm will be designed to ensure that virtual machines do not exceed the overload capacity of total CPU and primary memory usage. It will also ensure that a server does not stay underloaded, imbalanced, or idle: the technique will migrate VMs when a server is overloaded, underloaded, or idle. By designing this technique, better energy consumption will be achieved, ensuring better service to the clients. Another feature of the proposed method is a scheduling mechanism, designed to prevent physical and virtual machines being used unnecessarily. The system will be able to monitor unused servers (both physical and virtual machines), putting them in sleep mode and switching them to standby mode so that they are ready to take incoming tasks.
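The overload/underload/idle migration policy described above can be sketched as follows. The thresholds, the `Host` shape, and the smallest-VM-first choice are illustrative assumptions, not details fixed by the proposal.

```python
# Illustrative sketch: migrate VMs away from overloaded hosts and drain
# underloaded/idle hosts so they can be put to sleep. All thresholds and
# data shapes are hypothetical.

OVERLOAD_CPU = 0.85   # assumed upper utilization threshold
UNDERLOAD_CPU = 0.20  # assumed lower utilization threshold

class Host:
    def __init__(self, name, cpu_capacity):
        self.name = name
        self.cpu_capacity = cpu_capacity
        self.vms = []  # list of per-VM CPU demands

    def utilization(self):
        return sum(self.vms) / self.cpu_capacity

def classify(host):
    """Label a host so the scheduler knows what to do with it."""
    if not host.vms:
        return "idle"        # candidate for sleep mode
    u = host.utilization()
    if u > OVERLOAD_CPU:
        return "overloaded"  # migrate some VMs away
    if u < UNDERLOAD_CPU:
        return "underloaded" # migrate all VMs away, then sleep
    return "normal"

def migration_plan(hosts):
    """Return (vm_demand, source_host) pairs that should be migrated."""
    plan = []
    for h in hosts:
        state = classify(h)
        if state == "overloaded":
            # move smallest VMs first until back below the threshold
            for vm in sorted(h.vms):
                if h.utilization() <= OVERLOAD_CPU:
                    break
                h.vms.remove(vm)
                plan.append((vm, h.name))
        elif state == "underloaded":
            for vm in list(h.vms):
                h.vms.remove(vm)
                plan.append((vm, h.name))
    return plan
```

In a full system the migrated VMs would then be packed onto normally loaded hosts; this sketch only covers the source-side decision.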
The scheduler has been tested with different kinds of DAGs generated at random as well as with real COMPSs applications. We have evaluated which configuration of our proposed multi-heuristic resource allocation (MHRA) algorithm provides the better solution in terms of energy savings and execution time in each case, and its effect on cloud elasticity. Moreover, we have evaluated the introduced overhead by measuring the time needed to obtain scheduling solutions for different numbers of tasks, kinds of DAG, and resources, concluding that the approach is suitable for run-time scheduling.
With the increasing cost of electricity, cloud providers consider energy consumption one of the major cost factors to be maintained within their infrastructure. Consequently, various proactive and reactive management mechanisms are used to efficiently manage cloud resources and reduce energy consumption and cost. These mechanisms support energy-awareness at the level of Physical Machines (PMs) as well as Virtual Machines (VMs) in order to make corrective decisions. This paper introduces a novel cloud system architecture that facilitates an energy-aware and efficient cloud operation methodology, and presents a cost prediction framework to estimate the total cost of VMs based on their resource usage and power consumption. The evaluation on a cloud testbed shows that the proposed energy-aware cost prediction framework is capable of predicting the workload and power consumption and estimating the total cost of VMs with good prediction accuracy for various cloud application workload patterns. Furthermore, a set of energy-based pricing schemes is defined, intended to provide the necessary incentives to create an energy-efficient and economically sustainable ecosystem. Further evaluation results show that the adoption of energy-based pricing by cloud and application providers creates additional economic value for both under different market conditions.
Cloud computing is basically a combination of a diverse range of systems connected over either private or public networks to give an infrastructure of applications, file storage, and data that is both dynamic and scalable. Cloud computing has been utilized as a practical platform through which customers are able to realise direct cost advantages, as it has the ability to transform a data center from a capital-intensive setup into a variably priced environment. Cloud computing is founded on a number of ideologies, the foremost being the doctrine of reusing IT capability. Cloud computing brings about a difference that can only be compared to the traditional concepts of grid computing, distributed computing, utility computing, and autonomic computing, while expanding the boundaries of different organizations and corporations across the world. According to Forrester, cloud computing can be termed a pool of abstracted, highly scalable, and managed compute infrastructure with the capacity to host end-client applications, billed by consumption.
Anton Beloglazov, Jemal Abawajy, Rajkumar Buyya. Cloud computing offers utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting cloud applications consume huge amounts of electrical energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only minimize operational costs but also reduce the environmental impact. In this paper, we define an architectural framework and principles for energy-efficient cloud computing. Based on this architecture, we present our vision, open research challenges, and resource provisioning and allocation algorithms for energy-efficient management of cloud computing environments.
Cloud computing data centers consume huge amounts of energy, which has a high cost and a large environmental impact. There has been a significant amount of research on dynamic power management, which shuts down unutilized equipment in a data center to reduce energy consumption. The main consumers of power in a data center are the servers, the communications network, and the cooling system. Optimization of power in a data center is a difficult problem because of server resource constraints, network topology and bandwidth constraints, the cost of VM migration, and the heterogeneity of workloads and servers. The arrival of new jobs and the departure of completed jobs also create workload heterogeneity over time. As a result, most of the previous research has concentrated on partial optimization of power consumption, optimizing either server and/or network power consumption through the placement of VMs. Temporal load-aware optimization, i.e., minimization of power consumption as a function of time, has been widely studied. When the optimization also included migration, the solution was divided into two steps: in the first step, optimization of server and/or network power consumption is performed, and in the second step migration of VMs is taken care of, which is not an optimal solution. In this work, we develop a joint optimization of the power consumption of servers, network communications, and the cost of migration, with workload and server heterogeneity, subject to resource and bandwidth constraints, through VM placement. The optimization results in an integer quadratic program (IQP) with linear/quadratic constraints on the number of VMs assigned to a job on a server. The IQP can only be solved for very small systems; however, we have been able to decompose it into master and pricing sub-problems, which may be solved through a column generation technique for larger systems. We have then extended the optimization to manage temporal heterogeneity of the workload.
It is assumed that the time axis is slotted and that at the end of each slot jobs make probabilistic complete or partial release of the VMs that they are holding; there will also be new job arrivals according to a Poisson process. The system performs a re-optimization of power consumption at the end of each slot that also includes the cost of VM migration. In the re-optimization, VMs of unfinished jobs may experience migration while new jobs are assigned VMs. We have obtained numerical results for the optimal power consumption of the system as well as its power consumption under two heuristic VM assignment algorithms. The results show that optimization achieves significant power savings compared to the heuristic algorithms. We believe that our work advances the state of the art in dynamic power management of data centers and that the results will be helpful to cloud service providers in achieving energy savings.
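As a toy illustration of the joint optimization described above, the following sketch brute-forces VM placement to minimize server power plus a migration penalty. The power model and all numbers are invented; the actual work formulates the problem as an IQP solved via column generation, and brute force only underlines why small instance sizes are the limit of a direct approach.

```python
# Toy joint placement/migration optimization: exhaustively place VMs on
# heterogeneous servers to minimize idle + load-proportional server power
# plus a per-move migration cost. Power and cost figures are illustrative.
from itertools import product

def placement_cost(assign, vms, servers, previous, migration_cost):
    """Power of active servers plus a migration penalty for moved VMs."""
    load = [0.0] * len(servers)
    for vm_idx, srv in enumerate(assign):
        load[srv] += vms[vm_idx]
    cost = 0.0
    for srv, (idle_w, per_unit_w, capacity) in enumerate(servers):
        if load[srv] > capacity:
            return float("inf")          # resource constraint violated
        if load[srv] > 0:
            cost += idle_w + per_unit_w * load[srv]
    moved = sum(1 for a, b in zip(assign, previous) if a != b)
    return cost + migration_cost * moved

def best_placement(vms, servers, previous, migration_cost=5.0):
    """Brute-force search; only feasible for very small instances."""
    best, best_cost = None, float("inf")
    for assign in product(range(len(servers)), repeat=len(vms)):
        c = placement_cost(assign, vms, servers, previous, migration_cost)
        if c < best_cost:
            best, best_cost = assign, c
    return best, best_cost
```

Note how the migration penalty can make consolidation onto one server the optimum even when it means moving a running VM, which is the trade-off the joint formulation captures.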
Cloud computing provides a solution to resource sharing through the application model it provides. As more and more users interact with the cloud, the problem of reliability creeps in. Reliability enhancement can be ensured using fault tolerance mechanisms. Fault tolerance and energy efficiency have been researched but are issues still under consideration. In recent years, the growth of IT infrastructure has triggered the demand for computational power, leading to the creation of huge data centres and increasing energy demand. A solution to this problem is cloud computing. Cloud computing is among the most trending technologies on the Internet, fulfilling the computationally intensive demands of users. Cloud computing offers access to a shared pool of computing resources, which includes storage space, computation power, network, applications, and services, on an on-demand basis to users over the Internet. Cloud computing introduces the concept of Everything as a Service, mostly referred to as XaaS, where X is Software, Infrastructure, Hardware, Platform, Data, Business, etc.
The task scheduling process in cloud computing needs to be carefully planned in order to improve the efficiency of the whole cloud computing system. If user-submitted tasks are not scheduled to the proper Virtual Machines (VMs), performance is reduced in terms of the cloud provider's profit, making the system unable to meet clients' requirements. Thus, task scheduling plays a vital role in cloud computing, since it affects performance a great deal. In every cloud provider, a task scheduling algorithm is utilized to manage the incoming tasks, with a proper scheduling mechanism for finding the best suitable VM, so that it benefits both the cloud provider and the consumer. Moreover, these scheduling algorithms must adapt to the dynamic environment, where the incoming tasks keep changing based on parameters such as size, cost, deadlines, and other QoS metrics. In the same way, cloud providers have certain constraints, such as profit, resource utilization, and energy consumption, which are basic requirements of any cloud provider. Additionally, the cloud provider must have the capability to accommodate sudden increases in client requests, which lead to increases in resource utilization, cost, and energy consumption. In such a dynamic environment, static task scheduling algorithms are not suitable, since users may expect to utilize thousands of virtualized resources in the form of VMs at any time. This complexity can be tackled by virtualization if an efficient task scheduling mechanism is employed. Thus, scheduling the user task to the best VM, assigning tasks to resources (VMs) in an efficient manner, is considered a challenging issue in cloud computing. Currently, various types of scheduling algorithms have been presented by various researchers, including static scheduling, dynamic scheduling, heuristic scheduling, and various types of hybrid algorithms.
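A minimal sketch of dynamic task-to-VM scheduling in the spirit of the discussion above: each incoming task is placed on the VM with the earliest estimated finish time, and the energy of the schedule is accumulated as a byproduct. The VM speed/power model is a hypothetical simplification, not a specific algorithm from the literature surveyed here.

```python
# Greedy earliest-finish-time task scheduler. Each VM is (speed, power_watts);
# each task is a length in abstract instructions. Both models are invented
# for illustration.

def schedule(tasks, vms):
    """Return (assignment, makespan, energy) for a greedy schedule."""
    finish = [0.0] * len(vms)   # current finish time per VM
    energy = 0.0
    assignment = []
    for length in tasks:
        # pick the VM whose estimated finish time for this task is earliest
        best = min(range(len(vms)),
                   key=lambda i: finish[i] + length / vms[i][0])
        runtime = length / vms[best][0]
        finish[best] += runtime
        energy += runtime * vms[best][1]
        assignment.append(best)
    return assignment, max(finish), energy
```

Favouring the earliest finish time tends to reduce makespan (the user-side metric) but ignores energy; a provider-oriented variant could instead minimise `runtime * power` per task, which is exactly the kind of trade-off the surveyed algorithms tune.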
Most research works reveal that the task scheduling problem in cloud computing is NP-complete, for which heuristic algorithms are more suitable due to its dynamic and heterogeneous nature. There are a few parameters which play a major role in fine-tuning the performance of any task scheduling mechanism. These parameters greatly affect the efficiency of cloud computing from both the cloud provider's and the cloud consumer's perspectives. An efficient task scheduling mechanism must address performance metrics such as resource utilization, low computational cost, low energy consumption, and scalability as part of the cloud provider's benefit. As part of the cloud user's concern, the scheduling algorithm must provide minimum makespan, reduced response time, and low cost in getting the service. Moreover, load balancing
Cloud computing is one of the emerging trends in computing technologies, changing the way people use IT resources for business and other purposes. The demand for energy-efficient resource management techniques in cloud computing is increasing dramatically due to its growth in financial, business, healthcare, governance, social, and web applications. One of the main concerns in this technology is to reduce the cost of hardware, software, power, and maintenance. To fulfil the growing demand, service providers are building large-scale data centres that consume high volumes of electric power, which has a negative impact on the environment. In this paper, we present a comprehensive review of the energy-aware techniques available for data centres, covering optimal allocation of resources and selection algorithms for virtual machines in the cloud.
including the selection of the most suitable energy-efficient runtime environment (e.g. JVM), into one or more VM images ready for deployment. It is a conceptual component in the architecture with a set of sub-components that package the application and make it ready for deployment. The Programming Model Runtime (PMR) deals with the orchestration of task executions. The PMR component is in charge of detecting the dependencies among task invocations and managing their proper execution on the remote resources. The Programming Model Packager (PMP) component creates the bundles of a Programming Model application. More precisely, it packs method calls and tasks taking into account their requirements, or any other constraints pointed out by the developer. It also generates the Service Manifest of the PM application. The Application Descriptor Tool is a graphical tool that assists developers in creating a Service Manifest. It helps the user build an Open Virtualization Format (OVF) description document that describes the relation between the different VMs and the software installed in them, to be used later by the Application Packager to build the Service Manifest submitted to the PaaS layer. The Application Packager component is in charge of packaging non-PM applications. This component takes into account the template filled in by the user in OVF format to package the software with its different requirements. It also generates a Service Manifest to submit to the PaaS layer. The VM Image Constructor (VMIC) uses the application packages and the Service Manifest or application descriptor to create VM images that can be deployed in the PaaS layer. The Application Uploader interacts with the Application Manager to register the final VMs ready for deployment. It essentially serves the PM plug-in and the Application Packager once images have been completed.
This section presents some of the existing works related to resource augmentation by task offloading from a single user system onto a server. Nir et al. presented innovative research in the space of Cyber Foraging (CF), which permits user systems to offload substantial computations to resourceful computing nodes. They minimized the total energy consumption and used a centralized architecture to handle the task scheduling. This model focused on the optimization methodology, adding an economic element to it. Zhang et al. provided a characterization of both workload and machine heterogeneity in compute clusters. A heterogeneity-aware framework was designed to regulate the number of machines, with a balance between energy savings and
Organizations that wish to build local clouds do so using commodity hardware. This may mean that the cloud is made up of several different hardware setups. Even when a cloud is initially built using one type of hardware, the nature of a cloud often means it will be expanded by adding new and different hardware throughout the course of its lifetime. In terms of scalability, the number of compute nodes will generally increase rapidly over time. Given this heterogeneous nature, the nodes used will have different energy consumption footprints. The administrative cloud components (cloud, cluster, and storage controllers) need to be continuously operating for users to access the resources provided by the compute nodes. This is not true of the compute nodes: depending on the volume of requests made by users, it is not necessary to have all compute nodes operating at any given time.
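The observation that only the administrative components must stay powered can be sketched as a simple policy: keep the most energy-efficient compute nodes on until the current demand is covered, and let the rest sleep. The capacity and wattage figures below are invented to reflect the heterogeneous hardware mentioned above.

```python
# Decide which heterogeneous compute nodes to keep powered on for a given
# demand level; the controllers are assumed to be always on and are not
# modelled here. All numbers are illustrative.

def nodes_to_power_on(demand, nodes):
    """nodes: list of (capacity, watts). Greedily keep on the nodes with the
    best capacity-per-watt ratio until demand is covered."""
    by_efficiency = sorted(nodes, key=lambda n: n[0] / n[1], reverse=True)
    active, covered = [], 0.0
    for cap, watts in by_efficiency:
        if covered >= demand:
            break
        active.append((cap, watts))
        covered += cap
    return active

nodes = [(8.0, 200.0), (16.0, 300.0), (4.0, 150.0)]  # (capacity, watts)
active = nodes_to_power_on(10.0, nodes)
# the 16-unit node alone covers the demand; the other two can sleep
```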
One of the key challenges faced in this field is how to reduce the massive energy consumption of cloud computing data centers. To address this issue, several power-aware virtual machine (VM) allocation and consolidation approaches have been proposed to reduce energy consumption efficiently. However, most of these existing energy-efficient cloud solutions save energy at the cost of significant performance degradation. The authors propose a novel technique for genetic-algorithm-based dynamic consolidation of VMs using adaptive utilization thresholds, which ensures a high level of compliance with Service Level Agreements (SLAs). They evaluated the proposed algorithms through extensive simulations on a large-scale experimental setup using workload traces. The experiments show that the proposed strategy achieves better performance than other methods, not only in high QoS but also in lower energy consumption. In addition, its reduction in the number of active hosts is far more obvious, particularly under extreme workloads.
An Artificial Bee Colony (ABC) algorithm is used for VM allocation and for reducing energy consumption, and is compared with a genetic algorithm for power-aware allocation in the cloud. ABC is inspired by the foraging behaviour of honey bees. There are two types of bees, employed and unemployed: employed bees are expert at exploiting a food source and share its location through a waggle dance, while unemployed bees calculate fitness values and find new food sources by observing that dance. In the VM allocation technique, VMs are allocated to physical machines in the data center, and the paper concludes that the ABC algorithm works better than the existing GAPA algorithm.
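A heavily simplified sketch of ABC-style VM allocation follows, assuming a hypothetical idle-plus-proportional host power model. Real ABC also includes scout bees and abandonment limits, omitted here for brevity.

```python
# Simplified Artificial Bee Colony search for a VM-to-host allocation that
# minimizes total host power. Employed bees each hold a candidate allocation
# (a "food source") and explore neighbours; onlooker bees bias the search
# toward the best source, mimicking the waggle dance. Power model is invented.
import random

def power(alloc, vms, hosts):
    """Idle power for each active host plus load-proportional power."""
    load = [0] * len(hosts)
    for v, h in enumerate(alloc):
        load[h] += vms[v]
    total = 0
    for h, (idle_w, per_unit_w, cap) in enumerate(hosts):
        if load[h] > cap:
            return float("inf")   # capacity constraint violated
        if load[h] > 0:
            total += idle_w + per_unit_w * load[h]
    return total

def neighbour(alloc, n_hosts, rng):
    """Move one random VM to a random host."""
    new = list(alloc)
    new[rng.randrange(len(alloc))] = rng.randrange(n_hosts)
    return new

def abc_allocate(vms, hosts, n_bees=10, iters=200, seed=1):
    rng = random.Random(seed)
    sources = [[rng.randrange(len(hosts)) for _ in vms] for _ in range(n_bees)]
    for _ in range(iters):
        # employed phase: local search around each food source
        for i, s in enumerate(sources):
            cand = neighbour(s, len(hosts), rng)
            if power(cand, vms, hosts) < power(s, vms, hosts):
                sources[i] = cand
        # onlooker phase: concentrate effort on the best source
        best = min(sources, key=lambda s: power(s, vms, hosts))
        cand = neighbour(best, len(hosts), rng)
        if power(cand, vms, hosts) < power(best, vms, hosts):
            sources[sources.index(best)] = cand
    return min(sources, key=lambda s: power(s, vms, hosts))
```

Because each active host pays an idle-power floor, the search naturally converges on consolidated allocations, which is the behaviour the surveyed paper exploits.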
In , an analysis of mobile cloud computing (its architecture, applications, and approaches) is proposed, focusing on providing a general understanding of the emerging concept of mobile cloud computing. In , mobile cloud computing: implications and challenges is presented; this paper discusses various aspects of mobile cloud computing, including legal issues. In , mobile storage expansion in mobile cloud computing (taxonomy, methods, and concerns) is presented. It was noted that data generation impacts storage and battery life span in mobile cloud computing; the paper focuses on mitigating this situation. In , context-aware computation offloading for mobile cloud computing, a study of requirements and a review and guide for design, is proposed. Computational offloading helps to address the issues of performance and security. The dynamic nature of the development of mobile cloud computing was discussed in this paper in relation to designs. In , a review of mobile cloud computing is presented. In , Mobile cloud computing: A survey is proposed; the challenges and likely solutions for the effective utilization of mobile cloud computing were also discussed. In , mobile cloud computing is examined: various issues were surveyed in relation to mobile cloud computing, and application areas were also discussed. In , cloud computing for the mobile world is presented. In , a security structure for mobile cloud applications is proposed; the paper briefly examines mobile cloud computing models, discusses security concerns, and thereafter proposes a security structure for mobile cloud computing. In , an assessment of mobile cloud computing application models is proposed. Smart phones have limits of power, storage, and energy that can be mitigated by cloud computing; the focus of the paper is on how constraints relating to mobile cloud application models can be resolved. In , mobile cloud computing: the forthcoming of cloud is proposed.
The paper examined mobile cloud computing architecture and also discussed some challenges and proffered solutions. In , resource usage optimization in mobile cloud computing is proposed. The main focus is on resource usage for virtual machines in relation to mobile cloud computing. Various architectures were designed and implemented to enhance optimal utilization of resources on the cloud. In , towards securing mobile cloud computing: a survey is presented. Mobile cloud computing is anticipated to grow quickly, but this is being hindered by security concerns. Mobile cloud computing architecture and application
The widespread diffusion of the Infrastructure-as-a-Service and cloud computing paradigms requires large-scale data centers with thousands of running nodes and high energy demands, thus causing relevant economic and environmental costs. In this perspective, the paper presents an energy-aware consolidation strategy based on predictive control, in which virtual machines are migrated among nodes to reduce the number of active units. To describe a general cloud infrastructure, a discrete-time dynamic model is presented together with constraints. The migration strategies for virtual machines are obtained by solving finite-horizon optimal control problems involving integer variables. To reduce the computational effort, approximate solutions are found via Monte Carlo optimization. Besides power savings, the proposed method allows one to reduce violations of the service level agreement and aggressive on/off cycles of nodes. To showcase the effectiveness of the proposed approach, preliminary simulation results are provided.
Having tools that help one understand how energy is consumed in a system is essential in order to enable software developers to make energy-aware programming decisions. Schubert et al. state that developers lack tools that indicate where the energy-hungry sections are located in their code and that help them optimize their code for lower energy consumption more accurately, instead of relying on their own intuitions. In their work, they proposed eprof, a software profiler that attributes energy consumption to code locations; it would therefore also help developers make better energy-aware decisions when they rewrite their code. For example, when storing data on a disk, software developers might choose between storing the data in an uncompressed format or a compressed format, the latter requiring more CPU resources. Compressing data has commonly been suggested as a way to reduce the amount of I/O that needs to be performed, and therefore the energy, based on the hypothesis that the CPU can perform the compression and decompression with less energy than is needed to transfer large data from and to the disk. However, that depends on the data being processed. In fact, some experiments conducted with the eprof profiling tool show that compressing and decompressing the data can consume significantly more energy than transferring a large amount of uncompressed data, because the former uses more CPU resources than the latter. So, it can be a controversial issue depending on the application domain. Thus, tools identifying where energy is consumed would help software developers make more energy-aware decisions.
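The compress-or-not trade-off can be made concrete with a back-of-envelope energy model. The CPU power, disk power, and bandwidth constants below are invented for illustration; a real study would measure with a profiler such as eprof rather than estimate.

```python
# Back-of-envelope comparison of compressed vs. uncompressed storage energy:
# CPU energy for (optional) compression plus disk energy for writing the
# payload. All power/bandwidth constants are assumptions, not measurements.
import time
import zlib

CPU_WATTS = 35.0    # assumed CPU package power while compressing
DISK_WATTS = 6.0    # assumed disk power while writing
DISK_BW = 200e6     # assumed write bandwidth, bytes/sec

def storage_energy_joules(data, compress):
    start = time.perf_counter()
    payload = zlib.compress(data) if compress else data
    cpu_time = time.perf_counter() - start
    io_time = len(payload) / DISK_BW
    return CPU_WATTS * cpu_time + DISK_WATTS * io_time

data = b"energy-aware cloud scheduling " * 100_000  # compresses very well
e_plain = storage_energy_joules(data, compress=False)
e_compressed = storage_energy_joules(data, compress=True)
# which estimate is smaller depends on the data and the hardware constants,
# which is exactly the ambiguity the eprof experiments exposed
```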
Cloud computing provides access to shared resources through the Internet. It provides facilities such as broad access, scalability, and cost savings for users. However, cloud data centers consume a significant amount of energy because of inefficient resource allocation. In this paper, a novel virtual machine consolidation technique is presented based on energy and temperature in order to improve QoS (Quality of Service). Two algorithms, one heuristic and one meta-heuristic, are provided, called HET-VC (Heuristic Energy and Temperature aware VM Consolidation) and FET-VC (FireFly Energy and Temperature aware VM Consolidation). Six parameters are investigated for the proposed algorithms: energy efficiency, number of migrations, SLA (Service Level Agreement) violation, ESV, and time and space complexities. Using the CloudSim simulator, it is found that energy consumption can be reduced by 42% and 54% under HET-VC and FET-VC, respectively. The number of VM migrations is reduced by 44% and 52% under HET-VC and FET-VC, respectively. HET-VC and FET-VC can improve SLA violation by 62% and 64%, respectively. The Energy and SLA Violations (ESV) metric is improved by 61% under HET-VC and by 76% under FET-VC.