The past decades have witnessed the success of centralized computing infrastructures in many application domains. The emergence of the Internet then brought numerous users to remote applications built on distributed-computing technologies, and research in distributed computing in turn gave birth to grid computing. Though grid is based on distributed computing, its conceptual basis is somewhat different: grid computing enabled researchers to run computationally intensive tasks on the limited infrastructure available to them, supplemented by high processing power that could be provided by a third party. Grid computing was thus one of the first attempts to provide computing resources to users on a payment basis; the technology became popular and is still in use today. An associated problem with grid technology was that it could be used only by a certain group of people and was not open to the public. Cloud computing, in simple terms, is a further extension and variation of grid computing, to which a market-oriented aspect is added. Though there are several other important technical differences, this is one of the major distinctions between grid and cloud. Thus came cloud computing, which is now used as a public utility and is accessible to almost every person through the Internet. Apart from this, several other properties make the cloud popular and unique. In the cloud, resources are metered, and a user pays according to usage. The cloud can also support continuously varying user demand without affecting performance, and it is always available for use without restrictions. Users can access the cloud from any device, so it reaches a wider range of people.
In simple language, mobile commerce is the mobile version of e-commerce. Each and every utility of e-commerce is possible through mobile devices using computation and storage in the cloud. According to Wu and Wang, mobile commerce is “the delivery of electronic commerce capabilities directly into the consumer’s hand, anywhere, via wireless technology.” There are plenty of examples of mobile commerce, such as mobile transactions and payments, mobile messaging and ticketing, mobile advertising and shopping, and so on. Wu and Wang further report that 29% of mobile users had purchased through their mobiles, that mobile purchases accounted for 40% of Walmart products sold in 2013, and that $67.1 billion of purchases would be made from mobile devices in the United States and Europe in 2015. These statistics demonstrate the massive growth of m-commerce. In m-commerce, the user’s privacy and data integrity are vital issues. Hackers constantly try to obtain sensitive information such as credit card details, bank account details, and so on. To protect users from these threats, a public key infrastructure (PKI) can be used. In PKI, encryption-based access control and over-encryption are used to protect the privacy of a user’s access to the outsourced data. To enhance customer satisfaction, customer intimacy, and cost competitiveness in a secure environment, an MCC-based 4PL-AVE trading platform is proposed in Dinh et al.
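The layered idea behind over-encryption can be sketched with a toy XOR stream cipher: the data owner encrypts first, the storage provider adds a second layer, and a reader needs both keys to recover the plaintext. This is only an illustration of the two-layer principle, not the actual scheme from the cited work; the keys and the record below are invented:

```python
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key || counter.
    out = b""
    for i in count():
        if len(out) >= length:
            break
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
    return out[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR stream cipher: applying it twice with the same key decrypts.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# Over-encryption: the owner encrypts first (base layer), then the
# provider adds a second layer; recovery requires BOTH keys.
owner_key, provider_key = b"owner-secret", b"provider-secret"
record = b"credit card ending 4242"
stored = xor_cipher(xor_cipher(record, owner_key), provider_key)

# Authorized access: strip the provider layer, then the owner layer.
recovered = xor_cipher(xor_cipher(stored, provider_key), owner_key)
assert recovered == record
# Holding only one of the two keys yields ciphertext, not the record.
assert xor_cipher(stored, provider_key) != record
```

Revoking a user's access then only requires changing the outer (provider) layer, without the owner re-encrypting the base data.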
Above all, this book emphasizes problem solving through cloud computing. At times you might face a simple problem and need to know only a simple trick. Other times you might be on the wrong track and need some background information to get oriented. Still other times, you might face a bigger problem and need direction and a plan. You will find all of these in this book. We provide a short description of the overall structure of a cloud here, to give the reader an intuitive feel for what a cloud is. Most readers will have some experience with virtualization. Using virtualization tools, you can create a virtual machine with the operating system installation software, make your own customizations to the virtual machine, use it to do some work, save a snapshot to a CD, and then shut down the virtual machine. An Infrastructure as a Service (IaaS) cloud takes this to another level and offers additional convenience and capability.
A common option for reducing the operating costs of only sporadically used IT infrastructure, such as in the case of the “warm standby”, is cloud computing. As defined by NIST, cloud computing provides the user with simple, direct access to a pool of configurable, elastic computing resources (e.g., networks, servers, storage, applications, and other services) with a pay-per-use pricing model. More specifically, this means that resources can be quickly (de-)provisioned by the user with minimal provider interaction and are billed on the basis of actual consumption. This pricing model makes cloud computing a well-suited platform for hosting a replication site that offers high availability at a reasonable price. Such a warm standby system, with infrastructure resources (virtual machines, images, etc.) located and updated in the cloud, is herein referred to as a “Cloud-Standby-System”. The relevance and potential of this cloud-based option for hosting replication systems becomes even more obvious in light of the current market situation: only fifty percent of small and medium enterprises currently practice BCM with regard to their IT services, while downtime costs amount to $12,500–23,000 per day for them.
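The economics of a Cloud-Standby-System can be sketched as a simple break-even comparison between a dedicated standby site (paid around the clock) and cloud resources billed pay-per-use. All figures below are illustrative assumptions, not quotes from any provider:

```python
# Break-even sketch: dedicated warm standby vs. a Cloud-Standby-System
# billed only while replica VMs actually run. All prices are invented
# for illustration.

HOURS_PER_MONTH = 730

def dedicated_cost(hourly_rate: float) -> float:
    # A dedicated standby site runs (and is paid for) around the clock.
    return hourly_rate * HOURS_PER_MONTH

def cloud_standby_cost(storage_fee: float, hourly_rate: float,
                       active_hours: float) -> float:
    # In the cloud, only image storage is paid continuously; compute is
    # billed pay-per-use during outages and periodic update runs.
    return storage_fee + hourly_rate * active_hours

dedicated = dedicated_cost(hourly_rate=0.50)           # 365.0 $/month
cloud = cloud_standby_cost(storage_fee=20.0,
                           hourly_rate=0.50,
                           active_hours=8)             # 24.0 $/month
assert cloud < dedicated
```

Under these assumptions, the cloud standby stays cheaper as long as replica VMs run only a few hours per month, which is exactly the usage profile of a warm standby.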
The Heartbeat Service periodically collects dynamic performance information about the node and publishes this information to the membership service in the Aneka Cloud. These data are collected by the index node of the Cloud, which makes them available to services such as reservations and scheduling in order to optimize the use of a heterogeneous infrastructure. As already discussed, basic information about memory, disk space, CPU, and operating system is collected. Moreover, additional data are pulled into the “alive” message, such as information about the software installed in the system and any other useful information. More precisely, the infrastructure has been designed to carry any type of data that can be expressed by means of text-valued properties. As previously noted, the information published by the Heartbeat Service is mostly concerned with the properties of the node. A specific component, called the Node Resolver, is in charge of collecting these data and making them available to the Heartbeat Service. Aneka provides different implementations of this component in order to cover a wide variety of hosting environments. A variety of operating systems are supported with different implementations of the PAL, and different node resolvers allow Aneka to capture other types of data that do not strictly depend on the hosting operating system. For example, the retrieval of the public IP of the node differs between physical machines and virtual instances hosted in the infrastructure of an IaaS provider such as EC2 or GoGrid. In virtual deployments, a different node resolver is used so that all other components of the system can work transparently.
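The pattern described here, one resolver interface with environment-specific implementations feeding text-valued properties into the heartbeat message, can be sketched as follows. The class names, property keys, and IP values are illustrative; Aneka's actual API differs:

```python
# Sketch of the Node Resolver pattern: one interface, environment-
# specific implementations, all reporting text-valued key/value pairs.
from abc import ABC, abstractmethod

class NodeResolver(ABC):
    @abstractmethod
    def resolve(self) -> dict[str, str]:
        """Return node properties as text-valued key/value pairs."""

class PhysicalNodeResolver(NodeResolver):
    def resolve(self) -> dict[str, str]:
        # On a physical machine the public IP can be read locally.
        return {"public-ip": "203.0.113.10", "environment": "physical"}

class VirtualNodeResolver(NodeResolver):
    def resolve(self) -> dict[str, str]:
        # On an IaaS instance the public IP must be obtained from the
        # provider's metadata service (stubbed out here).
        return {"public-ip": "198.51.100.7", "environment": "ec2"}

def heartbeat_payload(resolver: NodeResolver) -> dict[str, str]:
    # The heartbeat pulls resolver data into its "alive" message, so
    # the rest of the system never sees which environment it runs in.
    payload = {"status": "alive"}
    payload.update(resolver.resolve())
    return payload

assert heartbeat_payload(VirtualNodeResolver())["environment"] == "ec2"
```

Swapping the resolver is the only change needed between deployments, which is what lets the other components work transparently.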
Commonly, agility, delivery speed, and cost savings entice companies to public clouds. Public cloud, for example, can free a company from having to invest in consolidating, expanding, or building a new data center when it outgrows a current facility, Kavis says. IT really doesn’t “want to go back to the well and ask management for another several million dollars,” thus it dives into the public cloud, he says. Stadtmueller says the public cloud is the least expensive way to access compute and storage capacity. Plus, it’s budget-friendly because up-front infrastructure capital investments aren’t required. Businesses can instead align expenses with their revenue and grow capacity as needed. This is one reason why numerous startups choose all-public-cloud approaches.
In this paper we have presented some issues regarding task scheduling when services from various providers are offered. Problems such as estimating runtimes and transfer costs; service discovery and selection; trust and negotiation between providers for accessing their services; and making the independent resource scheduler cooperate with the meta-scheduler have been discussed. As described, most of the existing scheduling platforms are grid oriented, and cloud schedulers are only beginning to emerge. As a consequence, a MAS approach to the cloud scheduling problem has been introduced. MAS have been chosen since they provide greater flexibility and are distributed by nature. They also represent a good choice for scheduling scenarios where negotiation between vendors is required. Negotiation is particularly important when dealing with workflows, where tasks need to be orchestrated together and executed under strict deadlines in order to minimize user costs. This is because vendors have different access and scheduling policies, and therefore selecting the best service for executing a task with a provided input becomes more than just a simple reallocation problem. The prototype system uses a single type of agent that combines multiple functionalities. The resulting meta-scheduler maintains the autonomy of each VO inside the cloud.
Bob Evans, senior vice president at Oracle, further wrote some revealing facts in his blog post at Forbes, clearing away all doubts people may have had in their minds. You may like to consider some interesting facts in this regard. Almost eight years ago, when cloud terms were not yet established, Oracle started developing a new generation of application suites (called Fusion Applications) designed for all modes of cloud deployment. Oracle Database 12c, which was released just recently, supports the cloud deployment framework of major data centers today and is the outcome of development efforts of the past few years. Oracle’s software as a service (SaaS) revenue has already exceeded the $1 billion mark, and it is the only company today to offer all levels of cloud services, such as SaaS, platform as a service (PaaS), and infrastructure as a service (IaaS). Oracle has helped over 10,000 customers to reap the benefits of the cloud infrastructure and now supports over 25 million users globally. One may argue that this could not have been possible if Larry Ellison hadn’t appreciated cloud computing. Sure, we may understand the dilemma he must have faced as an innovator when these emerging technologies were creating disruption in the business (www.forbes.com/sites/oracle/2013/01/18/oracle-cloud-10000-customers-and-25-million-users/).
Apart from the vendor-specific migration methodologies and guidelines, there are also proposals independent of a specific cloud provider. Reddy and Kumar proposed a methodology for data migration that consists of the following phases: design, extraction, cleansing, import, and verification. Moreover, they categorized data migration into storage migration, database migration, application migration, business process migration, and digital data retention (Reddy and Kumar, 2011). In our proposal, we focus on storage and database migration as we address the database layer. Morris specifies four golden rules of data migration, concluding that the IT staff often does not know about the semantics of the data to be migrated, which causes a lot of overhead effort (Morris, 2012). With our proposal of a step-by-step methodology, we provide detailed guidance and recommendations on both data migration and the required application refactoring to minimize this overhead. Tran et al. adapted the function point method to estimate the costs of cloud migration projects and classified the applications potentially migrated to the cloud (Tran et al., 2011). As our assumption is that the decision to migrate to the cloud has already been taken, we do not consider aspects such as costs. We abstract from the classification of applications to define the cloud data migration scenarios and reuse distinctions, such as complete or partial migration, to refine a chosen migration scenario.
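The five phases Reddy and Kumar name can be pictured as a pipeline of functions. The record format, the cleansing rule (drop rows missing an email), and the in-memory "target store" below are invented purely to make the flow concrete:

```python
# Minimal sketch of the five data-migration phases as a pipeline.

def design(source_rows):
    # Design: decide the target schema (here: name + email columns).
    return {"schema": ["name", "email"], "rows": source_rows}

def extraction(plan):
    # Extraction: pull rows out of the source store as records.
    return [dict(zip(plan["schema"], row)) for row in plan["rows"]]

def cleansing(records):
    # Cleansing: drop records that fail validation (missing email).
    return [r for r in records if r["email"]]

def import_(records, target):
    # Import: load cleansed records into the target database.
    target.extend(records)
    return target

def verification(source_rows, target):
    # Verification: every valid source row must have arrived.
    return len(target) == sum(1 for _, email in source_rows if email)

rows = [("Ada", "ada@example.org"), ("Bob", "")]
target_db: list = []
plan = design(rows)
import_(cleansing(extraction(plan)), target_db)
assert verification(rows, target_db)
```

In a real storage or database migration, each function would wrap provider tooling, but the phase boundaries stay the same.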
Case in point? Six in 10 channel firms say that cloud has generally strengthened their customer relationships, with just 15% claiming it has weakened them and roughly a quarter saying that their client bonds have remained the same. This is encouraging news given that many in the channel have publicly feared that cloud would drive a wedge between them and their customers. There has been rampant apprehension about such ill effects as a resurgence in vendor direct sales and end-user customers choosing a self-service model for their IT solutions, i.e., procuring SaaS applications over the Internet. And while both of these trends are happening to a certain extent, CompTIA data suggest not at such dire expense to most of the channel, especially those that have reached a high level of cloud maturity today and intend to remain committed. That said, not all channel firms that adopt cloud will engender more goodwill with customers; some may simply have a customer set that is not cloud-friendly, others may not gain sufficient expertise to provide value, and so on.
Several different surveys on cloud computing in the logistics sector have been conducted in the past few months and published as studies. One of them was an online survey conducted by the software provider INFORM GmbH, which showed that 68.3% of the surveyed companies are ready right now to use cloud computing for logistics tasks, yet only 12.7% have actually done so. The reasons for this are a lack of familiarity with the topic (29.5%) and the security concerns mentioned by almost half of the surveyed companies. The possibility of having to rely on an external service provider was a barrier to using cloud technology for 13% of the surveyed companies. The lack of industry-specific solutions was an obstacle for another 5%. There seems to be a wide range of reasons. Flexible access (38%), reduction in operating costs (25%), faster implementation times for business processes (18%), platform independence (12%), and access to IT resources that would not be possible without cloud computing (7%) were identified as the benefits. According to the respondents, cloud computing solutions can be used for the communication between vendors and customers, controlling suppliers, and managing supply chain events.
At present there are few published materials on vCloud Director outside of official VMware documentation, but the virtualization community has a long tradition of dedicated and passionate bloggers, speakers, and contributors producing timely content in easily digestible chunks. Writing a book on a new product like vCloud Director has been something of a moving target. Seeking to capitalize on the emerging cloud computing market, VMware has maintained an aggressive release cadence for the vCloud Director product, which is now in its second major release in three years, and we encourage the reader to use this book in conjunction with these online materials to dive deep where required. Although the core concepts and architecture will remain broadly consistent across future releases, these online resources will prove invaluable in keeping abreast of new functionality, issues, and features. This book points you to the best of them, but the best way to stay informed of breaking news in the virtualization world is to follow the VMware Planet v12n RSS feed (www.vmware.com/vmtn/planet/v12n/). For those of you familiar with social media tools like Twitter, the virtualization community is also active there on a daily basis.
In collaborative design, we apply cloud computing to manufacturing collaborative design and propose the concept of product collaborative cloud design. We study the theory of product collaborative design based on cloud computing, along with the key enabling technologies: cloud computing in general, the semantic web, intelligent matching-selection algorithms, and STEP and XML technology.
“Clouds are about ecosystems, about large collections of interacting services including partners and third parties, about inter-cloud communication and sharing of information through such semantic frameworks as social graphs.”

Transformation vs utility

This, he adds, is clearly business transformational, whereas “computing services that are delivered as a utility from a remote data centre” are not. The pioneers in VANS/EDI methods – which are now migrating into modern cloud systems in offerings from software firm SAP and its partners, for example – were able to set up basic trading data exchange networks, but the cloud transformation now is integrating, in real time, the procurement, catalogue, invoicing and other systems across possibly overlapping and much wider business communities.
Starting in 1958 the agency, then known as ARPA, was responsible for carrying out research and development on projects at the cutting edge of science and technology. While these typically dealt with national security–related matters, the agency never felt bound by military projects alone. One outcome of this view was significant work on general information technology and computer systems, starting with pioneering research on what was called time-sharing. The first computers worked on a one user–one system principle, but because individuals use computers intermittently, this wasted resources. Research on batch processing helped to make computers more efficient because it permitted jobs to queue up over time and thereby shrunk nonusage time. Time-sharing expanded this by enabling multiple users to work on the same system at the same time. DARPA kick-started time-sharing with a grant to fund an MIT-based project that, under the leadership of J. C. R. Licklider, brought together people from Bell Labs, General Electric, and MIT (Waldrop 2002). With time-sharing was born the principle of one system serving multiple users, one of the foundations of cloud computing. The thirty or so companies that sold access to time-sharing computers, including such big names as IBM and General Electric, thrived in the 1960s and 1970s. The primary operating system for time-sharing was Multics (for Multiplexed Information and Computing Service), which was designed to operate as a computer utility modeled after telephone and electrical utilities. Specifically, hardware and software were organized in modules so that the system could grow by adding more of each required resource, such as core memory and disk storage. This model for what we now call scalability would return in a far more sophisticated form with the birth of the cloud computing concept in the 1990s, and then with the arrival of cloud systems in the next decade.
One of the key similarities, albeit at a more primitive level, between time-sharing systems and cloud computing is that they both offer complete operating environments to users. Time-sharing systems typically included several programming-language processors, software packages, bulk printing, and storage for files on- and offline. Users typically rented terminals and paid fees for connect time, for CPU (central processing unit) time, and for disk storage. The growth of the microprocessor and then the personal computer led to the end of time-sharing as a profitable business because these devices increasingly substituted, far more conveniently, for the work performed by companies that sold access to mainframe computers.
Cloud computing evolved out of grid computing, which is a collection of distributed computers intended to provide computing power and storage on demand [1]. Grid computing, combined with virtualisation techniques, helps to achieve dynamically scalable computing power, storage, platforms and services. In such an environment, a distributed operating system that presents a single system image over the resources that exist and are available is most desirable [2]. In other words, one can say that cloud computing is a specialised distributed computing paradigm. Cloud differs in its on-demand abilities, such as scalable computing power – up or down – service levels, and dynamic configuration of services (via approaches like virtualisation). It offers resources and services in an abstract fashion that are charged like any other utility, thus bringing a utility business model to computing. Though virtualisation is not mandatory for cloud, its features such as partitioning, isolation and encapsulation [3], and its benefits such as reduced cost, relatively easy administration, manageability and faster development [4], have made it an essential technique for resource sharing. Virtualisation helps to abstract underlying raw resources like computation, storage and network as one, or to encapsulate multiple application environments on a single set or multiple sets of raw resources. With resources being both physical and virtual, distributed computing calls for dynamic load balancing of resources for better utilisation and optimisation [5]. Like any other traditional computing environment, a virtualised environment must be secure and backed up for it to be a cost-saving technique [3]. Cloud computing is a transformation of computing by way of service orientation, distributed manageability and economies of scale from virtualisation [3].
In 1997, Professor Ramnath Chellappa of Emory University, then a faculty member at the University of Southern California, defined cloud computing for the first time as an important new “computing paradigm where the boundaries of computing will be determined by economic rationale rather than technical limits alone.” Even though the international IT literature and media have since come forward with a large number of definitions, models and architectures for cloud computing, autonomic and utility computing were the foundations of what the community commonly referred to as “cloud computing”. In the early 2000s, companies started rapidly adopting this concept upon realizing that cloud computing could benefit both the providers and the consumers of services. Businesses started delivering computing functionality via the Internet: enterprise-level applications, web-based retail services, document-sharing capabilities and fully hosted IT platforms, to mention only a few cloud computing use cases of the 2000s. The later widespread adoption of virtualization and of service-oriented architecture (SOA) promulgated cloud computing as a fundamental and increasingly important part of any delivery and mission-critical strategy, enabling existing and new products and services to be offered and consumed more efficiently, conveniently and securely. Not surprisingly, cloud computing became one of the hottest trends in the IT armory, with a unique and complementary set of properties such as elasticity, resiliency, rapid provisioning, and multi-tenancy.
Nowadays, multimedia has become essential in every domain because of its quality. On the other hand, due to the difficulty of handling petabytes of such multimedia data in terms of computation, sharing, communication, and storage, there is a rising demand for an infrastructure that provides on-demand access to a distributed pool of configurable computing resources (for instance, servers, networks, applications, storage, and services). Cloud computing is the latest revolution in the IT industry and is fundamentally tied to economics. The increasing amount of data sharing has created various load-balancing demands, which in turn drive the demand for cloud computing. However, security problems during the sharing of data can introduce faults. This paper therefore presents techniques for enhancing the security of multimedia files (audio, text, image, and video) in cloud computing using the RSA and MD5 encryption and hashing algorithms.
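The RSA-plus-MD5 combination can be illustrated with deliberately tiny numbers: RSA encrypts the media bytes for confidentiality, while an MD5 digest verifies integrity after decryption. Every value below is a textbook-scale toy; real deployments need large keys, padding, and a stronger hash than MD5:

```python
# Toy RSA + MD5 sketch: RSA (tiny primes) for confidentiality, MD5 for
# an integrity fingerprint. Demonstration only -- not secure as written.
import hashlib

# Tiny RSA key: n = p*q, public exponent e, private d with e*d = 1 mod phi(n).
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)  # modular inverse (Python 3.8+)

def rsa_encrypt(data: bytes) -> list[int]:
    # Encrypt byte-by-byte: c = m^e mod n (each byte value < n).
    return [pow(b, e, n) for b in data]

def rsa_decrypt(blocks: list[int]) -> bytes:
    # Decrypt: m = c^d mod n.
    return bytes(pow(c, d, n) for c in blocks)

media = b"frame-0001 of video.mp4"
digest = hashlib.md5(media).hexdigest()   # integrity fingerprint
cipher = rsa_encrypt(media)

plain = rsa_decrypt(cipher)
assert plain == media
assert hashlib.md5(plain).hexdigest() == digest  # integrity verified
```

In a cloud setting the digest travels with (or ahead of) the ciphertext, so the recipient can detect tampering after decryption.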
as its stack, it might not have been able to achieve the scalability that it achieved on AWS. This is by no means a knock on Google or a declaration that AWS is any better than Google. Simply put, for scaling requirements like Instagram’s, an IaaS provider is a better choice than a PaaS. PaaS providers enforce thresholds within the layers of their architecture to ensure that no single customer consumes so many resources that it impacts the overall platform, resulting in performance degradation for other customers. With IaaS, there are fewer limitations, and much higher levels of scalability can be achieved with the proper architecture. We will revisit this use case in Chapter 5. Architects must not let loyalty to their favorite vendor get in the way of making the best possible business decision. A hammer may be the favorite tool of a home builder, but when he needs to turn screws he should use a screwdriver. Recommendation: Understand the differences between the three cloud service models: SaaS, PaaS, and IaaS. Know which business cases are best suited for each service model. Don’t choose cloud vendors based solely on the software stack that the developers use or on the vendor that the company has been buying hardware from for years.
Abstract The surging demand for inexpensive and scalable IT infrastructures has led to the widespread adoption of Cloud computing architectures. These architectures have reached their momentum due to their inherent capacity to simplify the building and maintenance of IT infrastructure, by making related costs easily accountable and paid on a pay-per-use basis. Cloud providers strive to host as many service providers as possible to increase their revenue and, toward that goal, exploit virtualization techniques to enable the provisioning of multiple virtual machines (VMs), possibly belonging to different service providers, on the same host. At the same time, virtualization technologies enable runtime VM migration, which is very useful for dynamically managing Cloud resources. Leveraging these features, data center management infrastructures can allocate running VMs on as few hosts as possible, so as to reduce total power consumption by switching off servers that are not required. This chapter presents and discusses management infrastructures for power-efficient Cloud architectures. Power efficiency relates to the amount of power required to run a particular workload on the Cloud and pushes toward greedy consolidation of VMs. However, because Cloud providers offer Service-Level Agreements (SLAs) that need to be enforced to prevent unacceptable runtime performance, the design and implementation of a management infrastructure for power-efficient Cloud architectures are extremely complex tasks and have to deal with heterogeneous aspects, e.g., SLA representation and enforcement, runtime reconfigurations, and workload prediction. This chapter aims at presenting the current state of the art of power-efficient management infrastructures for the Cloud, by carefully considering the main realization issues, design guidelines, and design choices.
In addition, after an in-depth presentation of related works in this area, it presents some novel experimental results to better stress the complexities introduced by power-efficient management infrastructures for the Cloud.
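The greedy VM consolidation this abstract describes is essentially a bin-packing problem. A minimal first-fit-decreasing sketch, with invented loads expressed as percentages of host capacity and deliberately ignoring the SLA constraints the chapter discusses, looks like:

```python
# First-fit-decreasing bin packing of VM loads (% of host capacity)
# onto hosts, so that unused hosts can be switched off. Loads and
# capacity are illustrative only; real consolidators must also
# respect SLAs, which this toy ignores.

def consolidate(vm_loads: list[int], host_capacity: int) -> list[list[int]]:
    """Place each VM (largest first) on the first host with room,
    opening a new host only when none fits."""
    hosts: list[list[int]] = []
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)
                break
        else:
            hosts.append([load])
    return hosts

# Eight VMs that would naively keep eight hosts powered fit on three.
placement = consolidate([50, 20, 40, 70, 10, 30, 60, 20], host_capacity=100)
assert len(placement) == 3
```

First-fit decreasing is a classic heuristic for this problem; the power-versus-SLA tension arises precisely because packing this tightly leaves no headroom for load spikes.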