the business side. And what you will see in the future is a move toward a situation in which the business consumes IT directly, because they will come to understand very well how IT works and how it is consumed as a service. At the same time, IT companies and cloud providers will increasingly be able to deliver their services directly, in a form the business can understand and consume, so that taking up a particular IT service no longer requires all kinds of specialized IT knowledge. Compare it to a car. A number of years ago people still understood cars quite well; if something was wrong with one, you could have a look at it yourself. Cars have now become fully electronic. Driving a car is now at quite a high level of maturity: you no longer have to go to a garage all the time, because how the car is doing is read out remotely. In fact, if you look at Tesla, the car is even tested remotely. It is simply becoming a computer. On that front, if you compare it with the automotive industry, you see that IT is increasingly
Abstract— The rapid advancement of cloud computing in recent years has convinced experts to consider it a suitable and favorable substitute for traditional computing methods. Nowadays, many companies have moved their physical IT architecture to a cloud computing platform for ease of managing and provisioning different resources. In this paper, a cloud computing environment is created using the VMware vSphere product suite, which is based on two main parts: the VMware ESXi hypervisor for virtualization and both the VMware vSphere Client and Virtual Center (vCenter) for environment management. The aim is to provide an efficient solution for designing and implementing a cloud computing architecture.
V. Priority-Based Dynamic Resource Allocation in Cloud Computing with Modified Waiting Queue (PBDRA): The priority-based algorithm is a dynamic resource allocation mechanism for preemptable jobs in the cloud. To attain the agreed SLA objective, the algorithm responds dynamically to fluctuating workload by preempting a currently executing low-priority task in favor of a high-priority task; if preemption is not possible because the priorities are equal, it creates a new VM from the globally available resources. If no global resources are available, the task is placed in a waiting queue. When an appropriate VM becomes free, the advance-reservation task is selected from the waiting queue and allocated to that VM for execution.
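As a rough illustration, the preempt-then-grow-then-queue decision described above can be sketched as follows. The class and method names, the priority convention (lower number = higher priority), and the fixed spare-VM pool are our own simplifications for the sketch, not the paper's implementation:

```python
import heapq
from itertools import count

class PBDRAScheduler:
    """Toy sketch of a priority-based allocator with preemption and a
    waiting queue. Lower numbers mean higher priority. All names here
    are illustrative, not taken from the PBDRA paper."""

    def __init__(self, vms=2, spare_vms=1):
        self.capacity = vms        # VMs currently provisioned
        self.spare = spare_vms     # extra VMs left in the global pool
        self.running = []          # list of (priority, task)
        self.waiting = []          # min-heap of (priority, seq, task)
        self._seq = count()        # tie-breaker for equal priorities

    def submit(self, task, priority):
        if len(self.running) < self.capacity:          # a VM is free
            self.running.append((priority, task))
            return "run"
        # try to preempt a strictly lower-priority running task
        victim = max(self.running)                     # lowest-priority task
        if victim[0] > priority:
            self.running.remove(victim)
            heapq.heappush(self.waiting, (victim[0], next(self._seq), victim[1]))
            self.running.append((priority, task))
            return "preempt"
        # equal priority: try to grow a new VM from the global pool
        if self.spare > 0:
            self.spare -= 1
            self.capacity += 1
            self.running.append((priority, task))
            return "new-vm"
        # no global resources left: park the task in the waiting queue
        heapq.heappush(self.waiting, (priority, next(self._seq), task))
        return "queued"

    def complete(self, task):
        """A finished task frees its VM for the best waiting task."""
        self.running = [(p, t) for p, t in self.running if t != task]
        if self.waiting:
            p, _, t = heapq.heappop(self.waiting)
            self.running.append((p, t))
```

A freed VM always goes to the highest-priority (lowest-number) waiting task, matching the selection step described in the text.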
A cloud introduces an IT environment designed for the purpose of remotely provisioning measured and scalable resources. The word "cloud" in cloud computing is often used as a synonym for "Internet," so cloud computing is also called Internet-based computing, in which many different services such as servers, storage, virtualization, and various applications are delivered to users and organizations over the Internet. Cloud computing follows a "pay-per-usage" model instead of the "own and use" approach of traditional computing. There are several issues in the cloud computing paradigm, but load balancing is a major challenge in the cloud environment. Load balancing is a methodology that provides methods to maximize throughput, resource utilization, and system performance. As part of its services, the cloud offers an easy and flexible way to keep data or files and make them available to a large number of users. To use resources most efficiently in a cloud system, there are several load balancing algorithms.
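To make the idea concrete, here are two minimal load-balancing policies in Python: a round-robin dispatcher and a least-loaded dispatcher. The names and interfaces are illustrative, not taken from any particular cloud system:

```python
from itertools import cycle

def round_robin(servers):
    """Hand requests to servers in strict rotation."""
    return cycle(servers)

class LeastLoaded:
    """Send each request to the server with the fewest active jobs."""
    def __init__(self, servers):
        self.load = {s: 0 for s in servers}

    def route(self):
        server = min(self.load, key=self.load.get)  # least-busy server
        self.load[server] += 1
        return server

    def done(self, server):
        self.load[server] -= 1  # request finished, release capacity
```

Round-robin maximizes fairness when requests are uniform; least-loaded adapts better when request costs vary, which is why real balancers often track live connection counts.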
as a service (PaaS), which enables users to deploy and dynamically scale Python- and Java-based Web applications. Finally, the top-most layer provides users with ready-to-use applications, also known as Software as a Service (SaaS). In addition, it is possible to observe significant interaction between the service models in cloud computing, which are Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), as shown in Figure 3. Each of these models provides a unique service to users in the cloud computing environment.
Cloud computing can be defined as a collection of concepts, technologies, and methodologies that enable hardware and software resources to be dynamically provisioned as services over the Internet on a pay-per-use model, with the objective of achieving high resource utilization in a scalable, cost-effective manner. Cloud deployment models are defined on the basis of the location, ownership, access, and management of cloud services. IaaS, PaaS, and SaaS are the major services delivered by the cloud. This paper outlines storage, another major service offered by the cloud, known as Storage as a Service. Cloud computing provides rich benefits to cloud clients, such as low-cost services, elasticity of resources, easy access through the Internet, etc. Storage as a Service is a model in which a large company rents space in its storage infrastructure to a small company or individual. Keywords- cloud computing; platform as a service; software as a service; infrastructure as a service; Representational State Transfer.
The successful implementation of any software project depends upon its requirements. A change in requirements at any stage of the software development life cycle is taken as a healthy process. However, carrying out such a change in a co-located environment is somewhat easier than in a distributed environment, where stakeholders are scattered across more than one location. This raises many challenges, i.e., coordination, communication and control, managing change effectively and efficiently, and managing a central repository. Thus, cloud computing can be applied to minimize these challenges among the stakeholders. We have used a case study to evaluate the framework using cloud computing.
Delivers computer infrastructure (typically a platform virtualization environment) as a service, along with raw storage and networking. Rather than purchasing servers, software, data center space, or network equipment, clients instead buy those resources as a fully outsourced service. With IaaS, many of the tasks related to managing and maintaining a physical data center and physical infrastructure (servers, disk storage, networking, and so forth) are abstracted and available as a collection of services that can be accessed and automated from code- and/or web-based management consoles. Developers still have to design and code entire applications and administrators still need to install, manage, and patch third-party solutions, but there is no physical infrastructure to manage anymore. Gone are the long procurement cycles where people would order physical hardware from vendors that would ship the hardware to the buyer who then had to unpackage, assemble, and install the hardware, which consumed space within a data center. With IaaS, the virtual infrastructure is available on demand and can be up and running in minutes by calling an application programming interface (API) or launching from a web-based management console. Like utilities such as electricity or water, virtual infrastructure is a metered service that costs money when it is powered on and in use, but stops accumulating costs when it is turned off. In summary, IaaS provides virtual data center capabilities so service consumers can focus more on building and managing applications and less on managing data centers and infrastructure.
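The utility-style metering described above can be sketched in a few lines. `MeteredInstance`, its hourly rate, and the hour-granularity clock are hypothetical simplifications (real providers meter per second or per hour through their billing APIs):

```python
class MeteredInstance:
    """Toy model of IaaS metering: cost accrues only while the
    instance is powered on. All names and the hourly granularity
    are illustrative, not any provider's API."""

    def __init__(self, hourly_rate):
        self.rate = hourly_rate
        self.running = False
        self.billed_hours = 0

    def start(self):
        self.running = True   # like a console/API "launch" call

    def stop(self):
        self.running = False  # billing stops when powered off

    def tick(self, hours=1):
        """Advance the clock; only running time is billed."""
        if self.running:
            self.billed_hours += hours

    @property
    def cost(self):
        return self.rate * self.billed_hours
```

Running the instance for five hours and then stopping it for three accrues cost only for the five powered-on hours, mirroring the electricity/water analogy in the text.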
B. Platform as a Service (PaaS) - In an e-commerce website, the shopping cart, checkout, and payment mechanisms running on the merchant's servers are examples of PaaS. This is a cloud-based environment that you use to develop, test, run, and manage your applications. The service includes web servers, development tools, an execution runtime, and an online database. Platform as a Service (PaaS) refers to cloud computing services that supply an on-demand environment for development. Its approach is to provide development environments according to your needs, without the complexity of purchasing, creating, or managing the basic infrastructure.
CORBA is a specification introduced by the Object Management Group (OMG) for providing cross-platform and cross-language interoperability among distributed components. The specification was originally designed to provide an interoperation standard that could be used effectively at the industrial level. The current release of the CORBA specification is version 3.0, and the technology is no longer very popular, mostly because the development phase is a considerably complex task and the interoperability among components developed in different languages has never reached the proposed level of transparency. A fundamental component in the CORBA architecture is the Object Request Broker (ORB), which acts as a central object bus. A CORBA object registers the interface it exposes with the ORB, and clients can obtain a reference to that interface and invoke methods on it. The ORB is responsible for returning the reference to the client and managing all the low-level operations required to perform the remote method invocation. To simplify cross-platform interoperability, interfaces are defined in the Interface Definition Language (IDL), which provides a platform-independent specification of a component. An IDL specification is then translated into a stub-skeleton pair by specific CORBA compilers that generate the required client (stub) and server (skeleton) components in a given programming language. These templates are completed with an appropriate implementation in the selected programming language. This allows CORBA components to be used across different runtime environments simply by using the stub and the skeleton that match the development language used. As a specification meant for use at the industry level, CORBA provides interoperability among different implementations of its runtime.
In particular, at the lowest level, ORB implementations communicate with each other using the Internet Inter-ORB Protocol (IIOP), which standardizes the interactions of different ORB implementations. Moreover, CORBA provides an additional level of abstraction and separates the ORB, which mostly deals with the networking among nodes, from the Portable Object Adapter (POA), which is the runtime environment in which the skeletons are hosted and managed. Again, the interface between these two layers is clearly defined, thus giving more freedom and allowing different implementations to work together seamlessly.
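The stub/skeleton pattern behind the ORB can be illustrated with a toy in-process "broker": servants register under an interface name, clients resolve a stub, and the stub forwards invocations through the broker. This is a conceptual sketch of the idea only, not the CORBA API or IIOP:

```python
class Broker:
    """Toy object bus in the spirit of an ORB (conceptual sketch)."""
    def __init__(self):
        self._servants = {}

    def register(self, name, servant):
        self._servants[name] = servant      # servant exposes an interface

    def resolve(self, name):
        return Stub(self, name)             # client obtains a reference

    def invoke(self, name, method, *args):
        # broker performs the "remote" dispatch to the servant
        return getattr(self._servants[name], method)(*args)

class Stub:
    """Client-side proxy: turns local calls into broker invocations."""
    def __init__(self, broker, name):
        self._broker, self._name = broker, name

    def __getattr__(self, method):
        return lambda *args: self._broker.invoke(self._name, method, *args)

class Calculator:
    """Servant implementation (the skeleton side in CORBA terms)."""
    def add(self, a, b):
        return a + b
```

In real CORBA, the stub and skeleton are generated from an IDL specification by a compiler, and the dispatch crosses process and network boundaries via IIOP rather than a local method call.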
technology enables a vendor's cloud software to automatically move data from a piece of hardware that goes bad or is pulled offline to a section of the system or hardware that is functioning, so the client gets seamless access to the data. Separate backup systems, with cloud disaster recovery strategies, provide another layer of dependability and reliability. Finally, cloud computing also promotes a green alternative to paper-intensive office functions, because it needs less computing hardware on premises: all computing-related tasks take place remotely, with minimal hardware requirements, with the help of technological innovations such as virtualization and multitenancy. Another viewpoint on the green aspect is that cloud computing can reduce the environmental impact of building, shipping, housing, and ultimately destroying (or recycling) computer equipment, as no one is going to own many such systems on their premises, and offices can be managed with fewer computers that consume comparatively less energy. A consolidated set of points summarizing the benefits of cloud computing is as follows: 1. Achieve economies of scale: We can increase the volume output or pro-
Organizations moving their services to the cloud must be aware of the challenges and risks they may face in securing sensitive data and trade secrets: managing confidentiality, integrity, and availability issues, data loss, and system outages due to attacks from hackers. Badamas (2012) emphasizes data security concerns, especially the confidentiality and integrity of data, which could derail the future of public cloud computing, and Kamara and Lauter (2010) identified processes that could be used to strengthen the confidence of existing cloud clients and attract potential users. Perceived security is associated with the risk that users are exposed to when using cloud computing services. The different types of risk include financial risk (monetary loss due to incorrect services), time risk (the amount of time lost while using cloud computing), psychological risk (feelings of frustration or anxiety in using cloud services), and privacy risk (when an organization's confidential information is transmitted to unintended destinations). Gilbert (2011) recommends that organizations focus on contractual and compliance matters while implementing cloud technologies and services.
Cloud services can be intimidating to set up and maintain, and multi-cloud services can be even more so. Providing security in cloud computing is a major issue, and a single cloud carries more security risk because it is more exposed to attacks. We describe the concept of multi-cloud, also called inter-cloud, to address these security problems: data can be stored in multiple clouds. Data storage in a multi-cloud is considered more secure than in a single cloud because of the lower risk of attack. Beyond that, there are benefits to using a multi-cloud strategy, whatever the approach within it might be. Disaster recovery becomes easier if important or sensitive data is kept redundantly across multiple servers. A multi-cloud strategy also means that when you need more resources during especially busy times, you have the ability to scale and offload processing; alternatively, you can route requests to different cloud servers that are optimized for specific tasks. Cloud computing faces several challenges that have peaked in recent years, such as lack of resources, security, managing cloud spend, compliance, governance and control, managing multi-cloud services, building a private cloud, and performance.
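A minimal sketch of the redundancy idea: every object is written to all providers, and reads fall back to a surviving provider when one is down. The class and provider names are invented for illustration:

```python
class MultiCloudStore:
    """Illustrative multi-cloud redundancy (names are made up).
    Writes replicate to every provider; reads skip failed ones."""

    def __init__(self, providers):
        self.stores = {p: {} for p in providers}  # one store per cloud
        self.down = set()                          # providers marked failed

    def put(self, key, value):
        for store in self.stores.values():         # replicate everywhere
            store[key] = value

    def get(self, key):
        for provider, store in self.stores.items():
            if provider not in self.down and key in store:
                return store[key]                  # first healthy replica
        raise KeyError(key)
```

Losing one provider does not lose the data, which is the disaster-recovery benefit the paragraph describes; the cost is the extra storage and write traffic of full replication.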
Resource allocation in the cloud environment is an important and challenging research topic. Verma et al.  formulate the problem of dynamic placement of applications in virtualized heterogeneous systems as a continuous optimization: the placement of VMs at each time frame is optimized to minimize resource consumption under certain performance requirements. Chaisiri et al.  study the trade-off between advance reservation and on-demand resource allocation, and propose a VM placement algorithm based on stochastic integer programming. The proposed algorithm minimizes the total cost of resource provisioning in infrastructure as a service (IaaS) clouds. Wang et al.  present a virtual appliance-based automatic resource provisioning framework for large virtualized data centers. Their framework can dynamically allocate resources to applications by adding or removing VMs on physical servers. Verma et al. , Chaisiri et al. , and Wang et al.  study cloud resource allocation from the VM placement perspective. Bacigalupo et al.  quantitatively compare the effectiveness of different techniques for response time prediction. They study cloud services with different priorities, including urgent cloud services that demand cloud resources at short notice and dynamic enterprise systems that need to adapt to frequent changes in workload. Based on these cloud services, the layered queuing network and historical performance models are quantitatively compared in terms of prediction accuracy. Song et al.  present a resource allocation approach based on application priorities in a multi-application virtualized cluster. This approach requires machine learning to obtain the utility functions for applications and defines the application priorities in advance. Lin and Qi  develop a self-organizing model to manage cloud resources in the absence of centralized management control. Nan et al. 
present optimal cloud resource allocation in a priority service scheme to minimize the resource cost. Appleby et al.  present a prototype infrastructure that can dynamically allocate cloud resources for an e-business computing utility. Xu et al.  propose a two-level resource management system with local controllers at the VM level and a global controller at the server level. However, they focus only on resource allocation among VMs within a single cloud server [19,20].
Several new AWS services were introduced in 2012; some of them are in a beta stage at the time of this writing. Among the new services we note: Route 53, a low-latency DNS service used to manage users' public DNS records; Elastic MapReduce (EMR), a service supporting processing of large amounts of data using a hosted Hadoop running on EC2 and based on the MapReduce paradigm discussed in Section 4.6; Simple Workflow Service (SWF), which supports workflow management (see Section 4.4) and allows scheduling, management of dependencies, and coordination of multiple EC2 instances; ElastiCache, a service enabling Web applications to retrieve data from a managed in-memory caching system rather than a much slower disk-based database; DynamoDB, a scalable and low-latency fully managed NoSQL database service; CloudFront, a Web service for content delivery; and Elastic Load Balancer, a cloud service to automatically distribute incoming requests across multiple instances of the application. Two new services, Elastic Beanstalk and CloudFormation, are discussed next.
Abstract The surging demand for inexpensive and scalable IT infrastructures has led to the widespread adoption of cloud computing architectures. These architectures have gained momentum due to their inherent capacity to simplify IT infrastructure building and maintenance, by making the related costs easily accountable and paid on a pay-per-use basis. Cloud providers strive to host as many service providers as possible to increase their economic income and, toward that goal, exploit virtualization techniques to enable the provisioning of multiple virtual machines (VMs), possibly belonging to different service providers, on the same host. At the same time, virtualization technologies enable runtime VM migration, which is very useful for dynamically managing Cloud resources. Leveraging these features, data center management infrastructures can allocate running VMs on as few hosts as possible, so as to reduce total power consumption by switching off servers that are not required. This chapter presents and discusses management infrastructures for power-efficient Cloud architectures. Power efficiency relates to the amount of power required to run a particular workload on the Cloud and pushes toward greedy consolidation of VMs. However, because Cloud providers offer Service-Level Agreements (SLAs) that need to be enforced to prevent unacceptable runtime performance, the design and implementation of a management infrastructure for power-efficient Cloud architectures are extremely complex tasks that have to deal with heterogeneous aspects, e.g., SLA representation and enforcement, runtime reconfigurations, and workload prediction. This chapter aims at presenting the current state of the art of power-efficient management infrastructures for the Cloud by carefully considering the main realization issues, design guidelines, and design choices. In addition, after an in-depth presentation of related work in this area, it presents some novel experimental results to better stress the complexities introduced by power-efficient management infrastructures for the Cloud.
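The greedy consolidation mentioned in the abstract is essentially bin packing: place VMs on as few hosts as possible so idle hosts can be powered off. A first-fit-decreasing sketch, ignoring SLA constraints and migration costs and using made-up integer load units, looks like this:

```python
def consolidate(vm_loads, host_capacity):
    """Greedy first-fit-decreasing VM placement (illustrative only).
    Returns (placement, hosts_on): each VM load is assigned a host
    index, and hosts_on is how many hosts must stay powered on."""
    hosts = []       # remaining capacity of each powered-on host
    placement = []   # (vm_load, host_index) pairs
    for load in sorted(vm_loads, reverse=True):  # biggest VMs first
        for i, free in enumerate(hosts):
            if free >= load:                     # first host that fits
                hosts[i] -= load
                placement.append((load, i))
                break
        else:
            hosts.append(host_capacity - load)   # power on a new host
            placement.append((load, len(hosts) - 1))
    return placement, len(hosts)
```

Five VMs totaling two hosts' worth of load end up on exactly two hosts, so the remaining servers can be switched off; a real power-aware manager would additionally check SLA headroom before packing this tightly.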
Apart from the vendor-specific migration methodologies and guidelines, there are also proposals independent from a specific cloud provider. Reddy and Kumar proposed a methodology for data migration that consists of the following phases: design, extraction, cleansing, import, and verification. Moreover, they categorized data migration into storage migration, database migration, application migration, business process migration, and digital data retention (Reddy and Kumar, 2011). In our proposal, we focus on the storage and database migration as we address the database layer. Morris specifies four golden rules of data migration with the conclusion that the IT staff does not often know about the semantics of the data to be migrated, which causes a lot of overhead effort (Morris, 2012). With our proposal of a step-by-step methodology, we provide detailed guidance and recommendations on both data migration and required application refactoring to minimize this overhead. Tran et al. adapted the function point method to estimate the costs of cloud migration projects and classified the applications potentially migrated to the cloud (Tran et al., 2011). As our assumption is that the decision to migrate to the cloud has already been taken, we do not consider aspects such as costs. We abstract from the classification of applications to define the cloud data migration scenarios and reuse distinctions, such as complete or partial migration, to refine a chosen migration scenario.
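Reddy and Kumar's extraction, cleansing, import, and verification phases (design happens up front) can be sketched as a tiny pipeline; the function names and the row-count verification check are our own illustrative choices, not from their paper:

```python
def migrate(source_rows, target, cleanse):
    """Toy data-migration pipeline (illustrative names).
    Phases: extraction -> cleansing -> import -> verification."""
    extracted = list(source_rows)                    # extraction
    cleansed = [cleanse(row) for row in extracted]   # cleansing
    target.extend(cleansed)                          # import
    verified = len(target) == len(cleansed)          # verification: counts match
    return verified
```

Real migrations verify far more than row counts (checksums, referential integrity, sampled value comparison), but the phase ordering is the part the methodology fixes.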
Several different surveys on cloud computing in the logistics sector have been conducted in the past few months and published as studies. One of them was an online survey conducted by the software provider INFORM GmbH, which showed that 68.3 % of the surveyed companies are ready right now to use cloud computing for logistics tasks, yet only 12.7 % have actually done it. The reasons for this are a lack of familiarity with the topic (29.5 %) and the security concerns mentioned by almost half of the surveyed companies. The possibility of having to rely on an external service provider was a barrier to using cloud technology for 13 % of the surveyed companies. The lack of industry-specific solutions was an obstacle for another 5 %. There seems to be a wide range of reasons. Flexible access (38 %), reduction in operating costs (25 %), faster implementation times for business processes (18 %), platform independence (12 %), and access to IT resources that would not be possible without cloud computing (7 %) were identified as the benefits. According to the respondents, cloud computing solutions can be used for communication between vendors and customers, controlling suppliers, and managing supply chain events.
Bob Evans, senior vice president at Oracle, further wrote some revealing facts in his blog post at Forbes, clearing away all doubts people may have had in their minds. You may like to consider some interesting facts in this regard. Almost eight years ago, when cloud terms were not yet established, Oracle started developing a new generation of application suites (called Fusion Applications) designed for all modes of cloud deployment. Oracle Database 12c, which was released just recently, supports the cloud deployment framework of major data centers today and is the outcome of development efforts of the past few years. Oracle's software as a service (SaaS) revenue has already exceeded the $1 billion mark, and it is the only company today to offer all levels of cloud services, such as SaaS, platform as a service (PaaS), and infrastructure as a service (IaaS). Oracle has helped over 10,000 customers to reap the benefits of the cloud infrastructure and now supports over 25,000 users globally. One may argue that this could not have been possible if Larry Ellison hadn't appreciated cloud computing. Sure, we may understand the dilemma he must have faced as an innovator when these emerging technologies were creating disruption in the business (www.forbes.com/sites/oracle/2013/01/18/oracle-cloud-10000-customers-and-25-million-users/).
Hadoop MapReduce and the LexisNexis HPCC platform are both scalable architectures directed towards data-intensive computing solutions. Each of these system platforms has strengths and weaknesses, and their overall effectiveness for any application problem or domain is subjective in nature and can only be determined through careful evaluation of application requirements versus the capabilities of the solution. Hadoop is an open source platform, which increases its flexibility and adaptability to many problem domains, since new capabilities can be readily added by users adopting this technology. However, as with other open source platforms, reliability and support can become issues when many different users are contributing new code and changes to the system. Hadoop has found favor with many large Web-oriented companies including Yahoo!, Facebook, and others where data-intensive computing capabilities are critical to the success of their business. Amazon has implemented new cloud computing services using Hadoop as part of its EC2 offering, called Amazon Elastic MapReduce. A company called Cloudera was recently formed to provide training, support, and consulting services to the Hadoop user community and to provide packaged and tested releases which can be used in the Amazon environment. Although many different application tools have been built on top of the Hadoop platform, like Pig, HBase, Hive, etc., these tools tend not to be well integrated, offering different command shells, languages, and operating characteristics that make it more difficult to combine capabilities in an effective manner.
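The MapReduce paradigm that both Hadoop and Amazon Elastic MapReduce implement can be illustrated with a minimal single-process skeleton. This shows the programming model only (map, shuffle by key, reduce), not Hadoop's distributed Java API:

```python
from collections import defaultdict

def mapreduce(documents, mapper, reducer):
    """Single-process MapReduce skeleton: map each input, shuffle the
    emitted (key, value) pairs by key, then reduce each key's values."""
    groups = defaultdict(list)
    for doc in documents:                 # map phase
        for key, value in mapper(doc):
            groups[key].append(value)     # shuffle: group values by key
    return {k: reducer(k, vs) for k, vs in groups.items()}  # reduce phase

def wc_map(doc):
    """Classic word-count mapper: emit (word, 1) per word."""
    for word in doc.split():
        yield word, 1

def wc_reduce(word, counts):
    """Word-count reducer: sum the partial counts for a word."""
    return sum(counts)
```

In a real cluster, the map and reduce phases run in parallel across machines and the shuffle moves data over the network, but the dataflow is the same.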