Scalability and flexibility are the most important features driving the emergence of cloud computing. Cloud services and computing platforms offered by computing clouds can be scaled across various dimensions, such as geographical location, hardware performance, and software configuration. The computing platforms must be flexible enough to adapt to the varied requirements of a potentially large number of users.
Small companies such as Cloud Retail have already understood the benefits of using cloud facilities for most non-core operations. Customers benefit from the economies of scale and the highly refined IT processes of cloud service providers. The chance to avoid capital costs and incur predictable expenditures that scale up and down with the current requirements of the business is very attractive. Customers with irregular or bursty usage see tremendous benefits, as they pay for resources only while consuming them. Customers with steady usage patterns also benefit, because buying services costs less than building them in-house. Unless IT is a core capability of the business, most customers will not be able to achieve the same competences more cheaply by doing it themselves. As one illustration, Google's business email solution is, on average, ten times less expensive than in-house email solutions.
The cloud computing concept came into the spotlight around 1950 with mainframe computers accessed via thin/static clients. Then in 1961, John McCarthy delivered a speech at MIT in which he suggested that computing could one day be sold like a utility, readily available on demand just as water, electricity, telephone, and gas are. In the 21st century, this model has been referred to as utility computing or cloud computing, and it lets users pay per use (pay-as-you-go). In 1999, Salesforce.com became the first company to enter the cloud arena, pioneering the delivery of enterprise-level applications to end users through the Internet. Then in 2002, Amazon launched Amazon Web Services, providing services such as computation, storage, and even human intelligence. In 2009, Google Apps, Hadoop, Salesforce.com, Manjrasoft Aneka, and Microsoft's Windows Azure also started to provide cloud computing enterprise applications. Other companies such as HP and Oracle joined the stream of cloud computing to fulfill the need for greater data storage.
One of the key features of cloud computing that makes it so attractive to service providers and business owners alike is its ability to allocate and deallocate resources in accordance with demand. An immediate question is how these resources may be managed cost-effectively. If resource allocation is static, it may result in immense wastage of resources. Even if the service provider is able to correctly forecast the peak load and provision capacity in advance to handle it, without dynamism and elasticity resources are wasted during off-peak times, as shown in Figure 1.
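The contrast between static peak provisioning and elastic allocation can be sketched as follows. This is a minimal illustration; the per-instance capacity, target utilization, and request rates are invented for the example and do not come from any specific provider:

```python
import math

def instances_needed(demand_rps, capacity_rps=100, target_util=0.7, floor=1):
    """Instances required so that average utilization stays near the
    target. All parameter names and values here are illustrative
    assumptions, not figures from a real cloud platform."""
    return max(floor, math.ceil(demand_rps / (capacity_rps * target_util)))

# Static provisioning sized for a 1000 req/s peak would run 15 instances
# around the clock; elastic allocation instead follows the load curve.
for demand in (100, 500, 1000, 200):
    print(demand, instances_needed(demand))
```

The wastage in Figure 1 corresponds to the gap between the fixed peak capacity (15 instances) and the 2 or 3 instances the off-peak load actually requires.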
For marketing purposes, cloud vendors always promise to make their services reliable and available to user companies. However, owing to a wide range of potential reasons (e.g. unexpected internet disruptions and inadequate system maintenance by cloud vendors), cloud applications may sometimes become temporarily out of service. Previous reports found this event to occur on a regular basis, even with leading cloud vendors (e.g. Google and Microsoft) in the industry. A significant number of the respondents confirmed that this risk event can have a relatively high probability and frequency of occurrence (Figure 6). On the other hand, in the complicated cloud environment, IT services provided by cloud vendors may often be associated with many hidden costs, e.g. disaster recovery costs, application configuration fees, and data loss insurance. These hidden costs may not always be made clear to user companies when they subscribe to the service. Moreover, in order to achieve higher profit levels, cloud vendors may gradually increase their service fees. Consequently, user companies may find that the actual costs of their subscribed cloud services are much higher than their original expectations. A significant number (over 56%) of the respondents perceived that this critical risk event has a high to medium probability and frequency of occurrence in current cloud practices.
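The effect of such hidden costs on the total bill can be made concrete with a toy calculation. Every figure below is hypothetical, chosen only to show the mechanism; none of these numbers come from the survey or from a real vendor:

```python
# Illustrative only: how hidden costs push the actual cost of a cloud
# subscription above the advertised fee. All amounts are invented.
advertised_monthly_fee = 1000
hidden_costs = {
    "disaster recovery": 250,
    "application configuration": 150,
    "data loss insurance": 100,
}

actual = advertised_monthly_fee + sum(hidden_costs.values())
overrun = (actual - advertised_monthly_fee) / advertised_monthly_fee
print(actual)            # 1500
print(f"{overrun:.0%}")  # 50%
```

With these assumed figures, the subscription costs 50% more than the advertised fee, which is exactly the gap between expectation and actual cost that the respondents reported.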
Enterprises often don’t have the required expertise to build cloud-based solutions. The average medium-to-large company that has been in business for more than a few years typically has a collection of applications and services spanning multiple eras of application architecture, from mainframe to client-server to commercial off-the-shelf and more. Most internal skills are specialized around these different architectures. Often the system administrators and security experts have spent a lifetime working on physical hardware or on-premises virtualization. Cloud architectures are loosely coupled and stateless, which is not how most legacy applications have been built over the years. Many cloud initiatives require integrating with multiple cloud-based solutions from other vendors, partners, and customers. The methods used to test and deploy cloud-based solutions may be radically different and more agile than what companies are accustomed to in their legacy environments. Companies making a move to the cloud should realize that there is more to it than simply deploying or paying for software from a cloud vendor. There are significant changes from an architectural, business process, and people perspective. Often, the skills required to do it right do not exist within the enterprise.
Resource allocation in the cloud environment is an important and challenging research topic. Verma et al. formulate the problem of dynamic placement of applications in virtualized heterogeneous systems as a continuous optimization: the placement of VMs at each time frame is optimized to minimize resource consumption under certain performance requirements. Chaisiri et al. study the trade-off between advance reservation and on-demand resource allocation, and propose a VM placement algorithm based on stochastic integer programming. The proposed algorithm minimizes the total cost of resource provisioning in infrastructure-as-a-service (IaaS) clouds. Wang et al. present a virtual appliance-based automatic resource provisioning framework for large virtualized data centers. Their framework can dynamically allocate resources to applications by adding or removing VMs on physical servers. Verma et al., Chaisiri et al., and Wang et al. study cloud resource allocation from the VM placement perspective. Bacigalupo et al. quantitatively compare the effectiveness of different techniques for response time prediction. They study cloud services with different priorities, including urgent cloud services that demand cloud resources at short notice and dynamic enterprise systems that need to adapt to frequent changes in workload. Based on these cloud services, the layered queuing network and historical performance models are quantitatively compared in terms of prediction accuracy. Song et al. present a resource allocation approach based on application priorities in a multi-application virtualized cluster. This approach requires machine learning to obtain the utility functions for applications and defines the application priorities in advance. Lin and Qi develop a self-organizing model to manage cloud resources in the absence of centralized management control. Nan et al. present optimal cloud resource allocation in a priority service scheme to minimize the resource cost. Appleby et al. present a prototype infrastructure that can dynamically allocate cloud resources for an e-business computing utility. Xu et al. propose a two-level resource management system with local controllers at the VM level and a global controller at the server level. However, they focus only on resource allocation among VMs within a cloud server [19,20].
Enterprises that move their IT to the cloud are likely to encounter challenges such as security, interoperability, and limits on their ability to tailor their ERP to their business processes. The cloud can be a revolutionary technology, especially for small start-ups, but the benefits wane for larger enterprises with more complex IT needs [10]. The cloud model can be truly disruptive if it can reduce the IT operational expenses of enterprises. Traditional utility services provide the same resource to all consumers. Perhaps the biggest difference between the cloud computing service and the traditional utility service models lies in the degree to which the cloud services are uniquely and dynamically configured for the needs of each application and class of users [12]. Cloud computing services are built from a common set of building blocks, equivalent to electricity provider turbines, transformers, and distribution cables. Cloud computing does, however, differ from traditional utilities in several critical respects. Cloud providers compete aggressively with differentiated service offerings, service levels, and technologies. Because traditional ERP is installed on your servers and you actually own the software, you can do with it as you please. You may decide to customize it, integrate it with other software, etc. Although any ERP software will allow you to configure and set up the software the way you would like, "Software as a Service" or "SaaS" is generally less flexible than traditional ERP in that you can't completely customize or rewrite the software. Conversely, since SaaS can't be customized, it reduces some of the technical difficulties associated with changing the software. Cloud services can be completely customized to the needs of the largest commercial users. Consequently, we have often referred to cloud computing as an "enhanced utility" [12].
Table 9.2 [5] shows the e-skills study for information and communications technology (ICT) practitioners conducted by the Danish Technology Institute [5] that describes the
In particular, SaaS 2.0 is focused on providing a more robust infrastructure and application platforms driven by SLAs. Rather than being characterized as a more rapid implementation and deployment environment, SaaS 2.0 will focus on the rapid achievement of business objectives. This is why such evolution does not introduce any new technology: the existing technologies are composed together in order to achieve business goals efficiently. Fundamental to this perspective is the ability to leverage existing solutions and integrate value-added business services. The existing SaaS infrastructures not only allow the development and customization of applications, but they also facilitate the integration of services that are exposed by other parties. SaaS applications are then the result of the interconnection and the synergy of different applications and components that together provide customers with added value. This approach dramatically changes the software ecosystem of the SaaS market, which is no longer monopolized by a few vendors but is now a fully interconnected network of service providers, clustered around some "big hubs" that deliver the application to the customer. In this scenario, each single component integrated into the SaaS application becomes responsible to the user for ensuring the attached SLA and at the same time could be priced differently. Customers can then choose how to specialize their applications by deciding which components and services they want to integrate.
In this section, we study the energy-efficient technologies proposed for the development of green mobile networks. It has been proposed that energy consumption at the base station can be significantly reduced through proper design of base station hardware [13,19,20]. Proper resource allocation techniques, such as efficient use of the RF amplifier or switching off transceiver equipment or base stations, can also yield significant energy savings. In a mobile network, nearly two-thirds of calls and 90% of data services originate indoors, yet research shows that most households and businesses suffer from poor indoor coverage. Good indoor coverage would both gain customer loyalty and generate more revenue for service providers. However, using a macrocell to attain this sort of better indoor coverage comes with certain disadvantages. Hence, indoor solutions such as DASs (distributed antenna systems) and picocells are gaining popularity in areas such as large business centers, offices, and malls. These systems are installed by the operators. They provide satisfactory internal coverage, offload traffic from heavily loaded macrocells, enhance quality of service, and facilitate high-data-rate services. Although these solutions are more cost-efficient than outdoor macrocells, they remain too expensive for providing indoor voice and high-speed data coverage in some scenarios, such as SOHO (small office and home office) and home users. The development of femtocells, however, provides low-cost indoor solutions for such situations. The self-deployment feature of femtocells eliminates the deployment cost incurred with other indoor coverage techniques such as picocells. The use of femtocells can make mobile networks greener than they are today.
One of the critical questions for channel companies to answer is whether cloud makes sense from an ROI perspective and, if so, in what capacity and in which customer scenarios. This basic "economics of the cloud" discussion has been front-and-center in the channel for the better part of the last three to five years. The conversation is complicated, due in large part to the wide variety of cloud business model options and potential revenue structures to explore, as well as differing customer needs. And yet, we are seeing solution providers move more decisively. Nearly 6 in 10 said they proactively pursued multiple segments of the various cloud business models in an attempt to enter the cloud market quickly and comprehensively, with medium and larger firms more likely to have gone this route than the smallest channel players (see Section 3 of this report for a detailed discussion of business models). As a result, a segment of companies has assembled quantifiable tracking metrics on revenue and profit margin, which can serve as a guidepost for channel companies moving more slowly into the cloud.
First of all, cloud computing enables organizations to reduce their hardware costs (Miller, 2009). When using cloud services, organizations no longer need high-powered, highly specified computers to run applications, because the need for processing power and storage space decreases (Miller, 2009). Unlike with traditional software, computers running cloud applications need less memory. They can also have smaller hard disks because no software is installed locally. Thus, organizations can reduce costs by purchasing lower-priced computers. By employing cloud computing, organizations also avoid large investments in IT infrastructure. This especially concerns larger organizations (Miller, 2009). Instead of investing a huge amount of money in a large number of powerful servers, the IT departments of those organizations can use the cloud's computing power to replace or improve their internal computing resources.
The memory, storage devices, and network communications are managed by the operating system of the basic physical cloud units. Open-source software such as Linux can support basic physical unit management and virtualization computing. Grids are inherently distributed over LANs and WANs, whereas clouds are mainly distributed over MANs.
authorization. Currently, the security model for clouds seems to be relatively simpler and less secure than the security model adopted by grids. Cloud infrastructures typically rely on web forms to create and manage account information for end users, and allow users to reset their passwords and receive new passwords via email over unsafe, unencrypted channels. Note that new users can start using clouds relatively easily and almost instantly, with just a credit card and/or email address. In contrast, grids are stricter about security. Security is one of the largest concerns for the adoption of cloud computing. We outline seven risks a cloud user should raise with vendors before committing:
The scheduling scenario proceeds as follows: once a scheduling agent receives a task, it attaches it to one of its service queues (see Fig. 7.5). Tasks are received either by negotiating with other agents or directly from a workflow agent. The negotiation protocol is similar to the one in Fig. 7.6 and uses the DMECT SA's relocation condition (Frincu, 2009a) as described in Section 7.5.2. Each service can execute at most k instances simultaneously, where k equals the number of processors inside the node pair. Once sent to a service, a task cannot be sent back to the agent unless explicitly specified in the scheduling heuristics. Tasks sent to services are scheduled inside the resource by the MinQL SA, which uses a simple load-balancing technique. Scheduling agents periodically query the service for completed tasks. Once one is found, the information inside it is used to return the result to the agent responsible for the workflow instance. This agent passes the information to the engine, which in turn passes the consequent set of tasks to the agent for scheduling. To simulate cloud heterogeneity in terms of capabilities, services offer different functionalities. In our case, services offer access to both CASs and image processing methods. Just as each CAS offers different functions for handling mathematical problems, so does the service exposing it. The same applies to the image processing services, which do not implement all the available methods on every service. An insight into how CASs with different capabilities can be exposed as services is given in (Petcu, Carstea, Macariu, & Frincu, 2008).
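The queueing behaviour described above can be sketched roughly as follows. The class names are ours, and the least-loaded selection rule is only a plausible stand-in for the "simple load balancing technique" of the MinQL SA, not the authors' actual implementation:

```python
# Sketch: a scheduling agent attaches each incoming task to a service
# queue; each service runs at most k tasks concurrently, where k mirrors
# the number of processors in the node pair. Illustrative code only.
from collections import deque

class Service:
    def __init__(self, name, k):
        self.name = name
        self.k = k                # max concurrent instances
        self.queue = deque()      # tasks waiting at this service
        self.running = []         # tasks currently executing

    def load(self):
        return len(self.queue) + len(self.running)

    def dispatch(self):
        # Start waiting tasks while execution capacity remains.
        while self.queue and len(self.running) < self.k:
            self.running.append(self.queue.popleft())

class SchedulingAgent:
    def __init__(self, services):
        self.services = services

    def submit(self, task):
        # Assumed balancing rule: send the task to the least-loaded service.
        target = min(self.services, key=Service.load)
        target.queue.append(task)
        target.dispatch()
        return target.name
```

With two single-processor services (k = 1), a third submitted task cannot start immediately and waits in a queue, which is the situation the agent later resolves by polling the services for completed tasks.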
Desktops are another computing resource that can be virtualized. Desktop virtualization is enabled by several architectures that allow remote desktop use, including the X Window System and Microsoft Remote Desktop Services. The X Window System, also known as X Windows, X, and X11, is an architecture commonly used on Linux, UNIX, and Mac OS X that abstracts graphical devices to allow device independence and remote use of a graphical user interface, including display, keyboard, and mouse. X does not include a window manager; that role is delegated to software such as KDE or Gnome. X is based on an MIT project and is now managed by the X.Org Foundation. It is available as open-source software under the MIT license. X client applications exist on Linux, UNIX, Mac OS X, and Windows. The X server is a native part of most Linux and UNIX systems and Mac OS X, and can be added to Windows with the Cygwin platform. The X system was designed to separate server and client using the X protocol and lends itself well to cloud computing. X Windows is complex and can involve some troubleshooting, but because it supports many varied usage scenarios, it has enjoyed a long life since it was first developed in 1984.
Expedia, Travelocity, Orbitz, Kayak, CheapoAir, Bestfares, MakeMyTrip.com, Cleartrip, and Carlson Wagonlit Travel are a few of the many travel and hospitality portals that offer elegant services via an interactive portal with a rich web experience for all travel and hospitality needs, breaking physical boundaries. We no longer need to visit any physical offices to accomplish these jobs; we can do so with smart devices from anywhere at any time. MakeMyTrip.com is an example of a travel and hospitality company that achieved rapid growth over a decade by crossing geographical boundaries and leveraging these major technologies. The company was founded in the year 2000 with the aim of empowering Indian travelers with instant booking and comprehensive travel and hospitality packages in one web environment. It aimed to offer a range of best-value products and services based on leading technologies for interactive customer engagement, supported by round-the-clock support staff. With greater customer engagement based on cloud deployment, the company expanded its reach to global customers, breaking all geographical boundaries, and today it is extremely successful among Asian diasporas globally, including those in the United States, Australia, Europe, the Middle East, and Africa.
Software Development Leverages
The past decades have witnessed the success of centralized computing infrastructures in many application domains. Then the emergence of the Internet brought numerous users of remote applications based on the technologies of distributed computing. Research in distributed computing gave birth to grid computing. Though grid is based on distributed computing, its conceptual basis is somewhat different. Grid computing enabled researchers to perform computationally intensive tasks using the limited infrastructure available to them, supplemented by high processing power provided by third parties; it was one of the first attempts to provide computing resources to users on a payment basis. This technology indeed became popular and is still in use today. An associated problem with grid technology was that it could be used only by a certain group of people; it was not open to the public. Cloud computing, in simple terms, is a further extension and variation of grid computing in which a market-oriented aspect is added. Though there are several other important technical differences, this is one of the major differences between grid and cloud. Thus came cloud computing, which is now used as a public utility and is accessible to almost every person through the Internet. Apart from this, several other properties make the cloud popular and unique. In the cloud, resources are metered, and a user pays according to usage. The cloud can also support continuously varying user demand without affecting performance, and it is always available for use without restrictions. Users can access the cloud from any device, thus reaching a wider range of people.
Cloud computing is a concept that delivers technology to users through Internet servers, primarily for processing and data storage. Cloud computing allows vendors to deliver services over the Internet without any traditional media; this method is called Software as a Service, or SaaS. Cloud computing helps users communicate with more than one server at the same time and exchange information among the servers. By improving resource utilization, cloud computing can increase profitability.