Is the cloud model reliable for banks? Most cloud computing infrastructures consist of reliable, time-tested services built on servers with different levels of virtualization technology, delivered through large data centers under service-level agreements (SLAs). Such agreements typically require that services be available to customers 99.99% of the time or better. Providers have developed business offerings to help customers meet their quality-of-service requirements and usually include such availability guarantees in the SLAs they offer. From the users' viewpoint, which in this case is that of the banks, the cloud appears as a single access point for all of their computing requirements. These cloud-based services are accessible from any geographical location as long as there is an Internet connection. Open standards and open source software are important factors in the growth of cloud computing.
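The 99.99% availability figure above translates into a concrete downtime budget. A quick back-of-the-envelope calculation (a generic sketch, not tied to any particular provider's SLA wording) shows what "four nines" actually permits per year:

```python
# Convert an SLA availability target into the maximum downtime it permits.
# The 99.99% ("four nines") figure comes from the text; the function itself
# is a generic illustration, not any specific provider's SLA formula.

def max_downtime_minutes(availability_pct: float, period_hours: float = 365 * 24) -> float:
    """Maximum downtime (in minutes) allowed per period at a given availability."""
    return period_hours * 60 * (1 - availability_pct / 100)

print(round(max_downtime_minutes(99.99), 1))   # ~52.6 minutes per year
print(round(max_downtime_minutes(99.999), 1))  # ~5.3 minutes per year
```

This is why the step from 99.99% to "even better than this" is expensive: each additional nine cuts the allowed annual downtime by a factor of ten.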
Both cloud computing and SOA share some core principles. First, both rely on the concept of a service to achieve their objectives. A service is a functionality or feature offered by one entity and used by another; for example, a service could retrieve the details of a user's online bank account. SOA and cloud computing both use service delegation, in that the required task is delegated either to a service provider (in the case of cloud computing) or to other application or business components in the enterprise (in the case of SOA). Service delegation lets people use services without being concerned about implementation and maintenance details. Services can be shared by multiple applications and users, thereby achieving optimized resource utilization. Second, both cloud computing and SOA promote loose coupling among components or services, which minimizes the dependencies among different parts of the system. This reduces the impact that any single change to one part of the system has on the performance of the overall system. Loose coupling keeps the implemented services separate from, and unaware of, the underlying technology, topology, life cycle, and organization. The various formats and protocols used in distributed computing, such as XML, WSDL, the Interface Description Language (IDL), and the Common Data Representation (CDR), help encapsulate the technological differences and heterogeneity among the components combined into a business solution. Services should be location and technology independent in cloud computing, and SOA can be used to achieve this transparency in the cloud domain.
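The shared principle — a client bound to a service contract rather than to an implementation — can be sketched in a few lines. All names below (`AccountService`, `LocalAccountService`) are illustrative, not taken from the text:

```python
# A minimal sketch of the loose coupling described above: the client depends
# only on a service contract, not on any concrete implementation, so the
# backing component can be local (SOA-style) or remote (cloud-style).
from abc import ABC, abstractmethod

class AccountService(ABC):
    """Service contract: what is offered, not how it is implemented."""
    @abstractmethod
    def get_balance(self, account_id: str) -> float: ...

class LocalAccountService(AccountService):
    """An in-process implementation (delegation to an enterprise component)."""
    def __init__(self) -> None:
        self._accounts = {"alice": 120.0}
    def get_balance(self, account_id: str) -> float:
        return self._accounts[account_id]

def show_balance(service: AccountService, account_id: str) -> str:
    # Loose coupling: this client works with any AccountService, and is
    # unaware of the underlying technology, topology, or life cycle.
    return f"{account_id}: {service.get_balance(account_id):.2f}"

print(show_balance(LocalAccountService(), "alice"))  # alice: 120.00
```

Swapping in a cloud-backed implementation of `AccountService` would leave `show_balance` untouched, which is exactly the dependency-minimizing property the paragraph describes.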
John McCarthy was a visionary in computer science; in the early 1960s he formulated the idea that computation may be organized as a public utility, like water and electricity. In 1992 Gordon Bell was invited to deliver an address at a conference on parallel computation with the provocative title Massively parallel computers: why not parallel computers for the masses?; he argued that one-of-a-kind systems are not only expensive to build, but the cost of rewriting applications for them is prohibitive. Google Inc. was founded by Page and Brin, two graduate students in computer science at Stanford University; in 1998 the company was incorporated in California after receiving a contribution of $100,000 from the co-founder and chief hardware designer of Sun Microsystems, Andy Bechtolsheim. Amazon EC2 was initially released as a limited public beta cloud computing service on August 25, 2006; the system was developed by a team from Cape Town, South Africa. In October 2008 Microsoft announced the Windows Azure platform, which became commercially available in June 2010. iCloud, a cloud storage and cloud computing service from Apple Inc., stores content such as music, photos, calendars, and documents and allows users to access it from Apple devices; the system was announced on June 6, 2011. In 2012 the Oracle Cloud was announced (see www.oracle.com/us/corporate/features/oracle-cloud/index.html).
It is foreseen that cloud computing could become a disruptive technology for mobile multimedia applications and services. In order to meet multimedia's QoS requirements for multimedia services delivered over the Internet and mobile wireless networks, Zhu et al. proposed a multimedia cloud computing framework that leverages cloud computing to provide multimedia applications and services over the Internet. The principal conceptual architecture is shown in Figure 1.5. Zhu et al. addressed multimedia cloud computing from the multimedia-aware cloud (media cloud) and cloud-aware multimedia (cloud media) perspectives. The media cloud (Figure 1.5a) focuses on how a cloud can perform distributed multimedia processing and storage and QoS provisioning for multimedia services. In a media cloud, storage, CPU, and GPU resources are placed at the edge (i.e., in an MEC) to provide distributed parallel processing and QoS adaptation for various types of devices. The MEC stores, processes, and transmits media data at the edge, thus achieving a shorter delay. In this way, the media cloud, composed of MECs, can be managed in a centralized or peer-to-peer (P2P) manner. The cloud media perspective (Figure 1.5b) focuses on how multimedia services and applications, such as storage and sharing, authoring and mashup, adaptation and delivery, and rendering and retrieval, can optimally utilize cloud computing resources to achieve better quality of experience (QoE). As depicted in Figure 1.5b, the media cloud provides raw resources, such as hard disk, CPU, and GPU, which are rented by media service providers (MSPs) to serve users. MSPs use media cloud resources to develop their multimedia applications and services, for example storage, editing, streaming, and delivery.
The first step is the development phase. An App Provider implements a service following the guidelines described in the chapter "Empirical Qualitative Analysis of the Current Cloud Computing Market for Logistics". The hard requirements are that RESTful interfaces and service calls must be implemented. Additionally, the BO stack of the Logistics Mall environment, including BODs and Mini-BODs, must be used for communication, and the BO Instance Repository must be used to store processed information and data shared by different apps of a process. Furthermore, any App that serves end users has to contain the workbasket mechanism. Other points are merely suggestions to the provider, such as the use of the Java enterprise stack. Developers are free to choose their own programming language, but must make sure that their apps are executable within the cloud environment; this is ensured and verified during the next phase of the Logistics Mall App Life-Cycle. The development phase finishes with submitting the created App and integrating it into the Logistics Mall Marketplace (MMP). For the integration, the app's description, its price model, and its date of availability are registered in the MMP. A Business App only becomes available on the specified date. Above all, the App is not visible or purchasable by any customer as long as the Logistics Mall Verification has not been successfully completed.
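The hard requirement of a RESTful interface can be illustrated with a minimal sketch. The `/orders` resource, its JSON payload, and the `lookup` helper are all invented for illustration; the real Logistics Mall BOD/Mini-BOD formats and endpoints differ:

```python
# A toy RESTful read endpoint using only the Python standard library.
# Everything here (resource names, payloads) is hypothetical, showing only
# the REST style the text requires: URLs name resources, HTTP verbs name actions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ORDERS = {"1": {"id": "1", "status": "open"}}

def lookup(path: str):
    """Resolve a GET path such as /orders/1 to an (HTTP status, JSON body) pair."""
    order = ORDERS.get(path.rstrip("/").split("/")[-1])
    if order is None:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(order)

class OrderHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = lookup(self.path)
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

# To actually serve requests:
# HTTPServer(("localhost", 8080), OrderHandler).serve_forever()
```

Because the contract is plain HTTP plus JSON, the app stays executable within the cloud environment regardless of the programming language the provider chooses, which is the point of the requirement.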
As an emerging state-of-the-art technology, cloud computing has been applied to an extensive range of real-life situations, and health care is one such important application field. After a comprehensive evaluation of the requirements of health care applications, we developed a ubiquitous health care system named HCloud. It is provided on a cloud computing platform characterized by loosely coupled algorithm modules and powerful parallel computing capabilities, which compute health indicators for preventive health care services. First, raw physiological signals are collected from body sensors over wired or wireless connections and transmitted through a gateway to the cloud platform, where the health status is stored and analyzed using data-mining technologies. Finally, results and suggestions can be fed back to users instantly, implementing personalized services delivered via a heterogeneous network. The proposed system can support huge volumes of physiological data storage; process heterogeneous data for various health care applications, such as automated electrocardiogram (ECG) analysis; and provide an early-warning mechanism for chronic diseases. The architecture of the HCloud platform for physiological data storage, computing, data mining, and feature selection is described. Also, an online analysis scheme combined with a MapReduce parallel framework is designed to improve the platform's capabilities. Performance evaluation based on testing and experiments under various conditions has demonstrated the effectiveness and usability of this system.
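The MapReduce-style analysis step can be sketched in miniature. The heart-rate records and the alert threshold below are made up, and a toy `map`/`reduce` in plain Python stands in for the Hadoop-style framework the text mentions:

```python
# A toy map-reduce pass over physiological records, illustrating (not
# reproducing) the kind of parallelizable analysis HCloud performs.
from functools import reduce

# Each record: (patient_id, heart_rate_sample); values are invented.
records = [("p1", 72), ("p1", 180), ("p2", 65), ("p2", 70), ("p1", 75)]

def map_phase(record):
    pid, hr = record
    # Emit (key, value) pairs flagging abnormally high samples (> 160 bpm,
    # an illustrative threshold, not a clinical one).
    return (pid, 1 if hr > 160 else 0)

def reduce_phase(acc, pair):
    pid, flag = pair
    acc[pid] = acc.get(pid, 0) + flag
    return acc

alerts = reduce(reduce_phase, map(map_phase, records), {})
print(alerts)  # {'p1': 1, 'p2': 0}
```

In the real platform the map work is distributed across cluster nodes, which is what gives the early-warning pipeline its throughput; the key/value contract shown here is what makes that distribution possible.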
Abstract The surging demand for inexpensive and scalable IT infrastructures has led to the widespread adoption of Cloud computing architectures. These architectures have gained momentum thanks to their inherent capacity to simplify IT infrastructure building and maintenance, making related costs easily accountable and payable on a pay-per-use basis. Cloud providers strive to host as many service providers as possible to increase their economic income and, toward that goal, exploit virtualization techniques to enable the provisioning of multiple virtual machines (VMs), possibly belonging to different service providers, on the same host. At the same time, virtualization technologies enable runtime VM migration, which is very useful for dynamically managing Cloud resources. Leveraging these features, data center management infrastructures can allocate running VMs on as few hosts as possible, so as to reduce total power consumption by switching off the servers that are not required. This chapter presents and discusses management infrastructures for power-efficient Cloud architectures. Power efficiency relates to the amount of power required to run a particular workload on the Cloud and pushes toward greedy consolidation of VMs. However, because Cloud providers offer Service-Level Agreements (SLAs) that need to be enforced to prevent unacceptable runtime performance, the design and implementation of a management infrastructure for power-efficient Cloud architectures are extremely complex tasks that have to deal with heterogeneous aspects, e.g., SLA representation and enforcement, runtime reconfigurations, and workload prediction. This chapter aims at presenting the current state of the art of power-efficient management infrastructures for the Cloud, by carefully considering the main realization issues, design guidelines, and design choices.
In addition, after an in-depth presentation of related works in this area, it presents some novel experimental results to better stress the complexities introduced by power-efficient management infrastructures for the Cloud.
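The greedy VM consolidation the abstract describes is essentially a bin-packing problem. A minimal sketch, using the common first-fit-decreasing heuristic with invented capacities and loads (a real manager must also respect the SLA constraints discussed above):

```python
# Greedy VM consolidation sketch: pack VM loads onto as few hosts as
# possible so idle hosts can be switched off. First-fit decreasing is one
# common heuristic; loads are expressed as fractions of host capacity.

def consolidate(vm_loads, host_capacity=1.0):
    """Assign VM loads to hosts via first-fit decreasing; return (placement, host count)."""
    hosts = []       # remaining capacity of each powered-on host
    placement = {}   # vm name -> host index
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(hosts):
            if load <= free:
                hosts[i] -= load
                placement[vm] = i
                break
        else:  # no existing host has room: power on a new one
            hosts.append(host_capacity - load)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)

placement, n_hosts = consolidate({"vm1": 0.6, "vm2": 0.3, "vm3": 0.4, "vm4": 0.5})
print(n_hosts)  # 2 hosts instead of 4 can run this workload
```

The tension the chapter analyzes is visible even here: packing tighter saves power, but leaves less headroom for load spikes, which is where SLA enforcement and workload prediction enter the picture.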
Relational databases are great for online transaction processing (OLTP) activities because they guarantee that transactions are processed successfully before the data is stored in the database. In addition, relational databases have superior security features and a powerful querying engine. Over the last several years, NoSQL databases have soared in popularity, mainly for two reasons: the increasing amount of data being stored and access to elastic cloud computing resources. Disk storage has become much cheaper and faster, which has led to companies storing more data than ever before; it is not uncommon for a company to have petabytes of data in this day and age. Normally, large amounts of data like this are used for analytics, data mining, pattern recognition, machine learning, and other tasks. Companies can leverage the cloud to provision many servers, distribute workloads across many nodes to speed up the analysis, and then deprovision all of the servers when the analysis is finished.
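The provision → distribute → deprovision pattern in the last sentence can be shown in miniature, with a local worker pool standing in for cloud servers and a trivial chunk-summing task standing in for real analytics (all of it illustrative):

```python
# Miniature of the elastic-compute pattern: acquire workers, fan the
# workload out across them, then release the workers when done.
from concurrent.futures import ThreadPoolExecutor

def analyze(chunk):
    # Placeholder for an expensive per-chunk analysis step.
    return sum(chunk)

data = list(range(1000))
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

# "Provision" workers, distribute the chunks, "deprovision" on exit.
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(analyze, chunks))

print(total)  # 499500
```

In the cloud the workers are whole servers billed by the hour rather than threads, but the economics follow the same shape: pay only while the pool exists.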
Hadoop MapReduce and the LexisNexis HPCC platform are both scalable architectures directed toward data-intensive computing solutions. Each of these platforms has strengths and weaknesses, and their overall effectiveness for any application problem or domain is subjective in nature; it can only be determined through careful evaluation of application requirements against the capabilities of the solution. Hadoop is an open source platform, which increases its flexibility and adaptability to many problem domains, since new capabilities can readily be added by users adopting the technology. However, as with other open source platforms, reliability and support can become issues when many different users are contributing new code and changes to the system. Hadoop has found favor with many large Web-oriented companies, including Yahoo!, Facebook, and others, where data-intensive computing capabilities are critical to the success of their business. Amazon has implemented new cloud computing services using Hadoop as part of its EC2 offering, called Amazon Elastic MapReduce. A company called Cloudera was recently formed to provide training, support, and consulting services to the Hadoop user community and to provide packaged and tested releases that can be used in the Amazon environment. Although many different application tools have been built on top of the Hadoop platform, such as Pig, HBase, and Hive, these tools tend not to be well integrated, offering different command shells, languages, and operating characteristics that make it more difficult to combine capabilities in an effective manner.
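The programming model both platforms target can be reduced to its essence in plain Python: map emits (key, value) pairs, a shuffle groups them by key, and reduce aggregates each group. The word-count task and documents below are the standard illustrative example, not code for either platform:

```python
# The MapReduce contract in miniature: map -> shuffle -> reduce.
from collections import defaultdict

docs = ["cloud data cloud", "data intensive computing"]

def map_fn(doc):
    # Map: emit a (word, 1) pair for every word occurrence.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group all values by key (the framework's job in Hadoop/HPCC).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_fn(key, values):
    # Reduce: aggregate each key's values.
    return key, sum(values)

mapped = [pair for doc in docs for pair in map_fn(doc)]
counts = dict(reduce_fn(k, v) for k, v in shuffle(mapped).items())
print(counts["cloud"], counts["data"])  # 2 2
```

What distinguishes the real platforms is everything around this contract — distributed storage, fault tolerance, scheduling, and the higher-level languages (Pig, Hive, ECL) layered on top — which is precisely where the integration differences discussed above arise.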
This book comprehensively discusses the emergence of mobile cloud computing from cloud computing models. Various technological and architectural advancements in mobile and cloud computing are reported. The book meticulously explores the design and architecture of computational offloading solutions in the cloud and mobile cloud computing domains to enrich the mobile user experience. Furthermore, to optimize mobile power consumption, existing solutions and policies toward green mobile computing, green cloud computing, green mobile networking, and green mobile cloud computing are briefly discussed. The book also presents numerous cloud and mobile resource allocation and management schemes to efficiently manage existing hardware and software resources. Recently, integrated networks (e.g., WSN, VANET, MANET) have significantly helped mobile users to enjoy a suite of services; the book discusses the existing architecture, opportunities, and challenges of integrating mobile cloud computing with such network technologies as sensor and vehicular networks. It also briefly expounds on various security and privacy concerns, such as application security, authentication security, data security, and intrusion detection, in the mobile cloud computing domain. The business aspects of mobile cloud computing models, in terms of resource pricing models, cooperation models, and revenue sharing among cloud providers, are also presented. To highlight the standing of mobile cloud computing, various well-known, real-world applications supported by mobile cloud computing models are discussed; for example, the demands and issues in deploying resource-intensive applications, including face recognition, route tracking, traffic management, and mobile learning. The book concludes with various future research directions in the mobile cloud computing domain to improve the strength of mobile cloud computing and to enrich the mobile user experience.
Despite the tremendous business and technical advantages, we should always keep in mind that cloud computing will not be our wonderland until users' outsourced sensitive data can be hidden from prying eyes. Privacy concern is one of the primary hurdles preventing the widespread adoption of the cloud by potential users, especially when private data that used to reside in local storage is outsourced to, and computed in, the cloud. Imagine that CSPs host services looking into your personal emails, financial and medical records, and social network profiles. Although these sensitive data could be protected by deploying intrusion detection systems and firewalls, or by segmenting data in a virtualized environment, the CSP retains full control of the cloud infrastructure, including the system hardware and the lower levels of the software stack. A privacy breach is still likely to occur owing to disgruntled, profit-seeking, or merely curious employees of the CSP [25, 37]. Encrypting-then-outsourcing [28, 48] provides a strong guarantee that no one can mine any useful information from the ciphertext of users' data, and many argue that sensitive data has to be encrypted before outsourcing in order to protect user data privacy against cloud service providers. However, encryption makes data utilization a very challenging task. One example is keyword search over documents stored in the cloud. Without such usable data services, the cloud becomes merely a remote storage facility that provides limited value to all parties.
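One family of solutions to the keyword-search problem gives the cloud deterministic search tokens derived from a secret key, so it can match tokens without learning the keywords. The toy scheme below illustrates only the core idea; real searchable-encryption schemes (as in the cited literature) add protections this sketch lacks, such as hiding search and access patterns:

```python
# Toy keyword-token search over "encrypted" documents. The index maps
# document ids to HMAC tokens; the cloud sees only opaque hex strings.
# SECRET_KEY, the index contents, and document names are all invented.
import hmac, hashlib

SECRET_KEY = b"owner-only-key"  # held by the data owner, never by the cloud

def token(keyword: str) -> str:
    """Derive a deterministic, key-dependent search token for a keyword."""
    return hmac.new(SECRET_KEY, keyword.lower().encode(), hashlib.sha256).hexdigest()

# Owner uploads encrypted documents plus a token index (doc id -> tokens).
index = {"doc1": {token("invoice"), token("bank")}, "doc2": {token("photo")}}

def search(trapdoor: str):
    # Run by the cloud: match opaque tokens without learning the keyword.
    return [doc_id for doc_id, tokens in index.items() if trapdoor in tokens]

print(search(token("bank")))  # ['doc1']
```

The owner issues `token("bank")` as a trapdoor; without `SECRET_KEY`, the cloud cannot generate trapdoors of its own, which is what separates this from searching plaintext.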
At present there are few published materials on vCloud Director outside of official VMware documentation, but the virtualization community has a long tradition of dedicated and passionate bloggers, speakers, and contributors producing timely content in easily digestible chunks. Writing a book on a new product like vCloud Director has been something of a moving target: seeking to capitalize on the emerging cloud computing market, VMware has maintained an aggressive release cadence for the vCloud Director product, which is now in its second major release in three years. We encourage the reader to use this book in conjunction with these online materials to dive deep where required. Although the core concepts and architecture will remain broadly consistent across future releases, these online resources will prove invaluable in keeping abreast of new functionality, issues, and features. This book points you to the best of them, but the best way to stay informed of breaking news in the virtualization world is to follow the VMware Planet v12n RSS feed (www.vmware.com/vmtn/planet/v12n/). For those of you familiar with social media tools like Twitter, the virtualization community is also active there on a daily basis.
just an initial step. Some of the pioneers in this area have already explored a new TOC model. On the one hand, telecom operators can use the powerful storage and computing capabilities offered by the cloud for network management tasks such as billing; in this case, telecom operators are cloud users. On the other hand, telecom operators can also be cloud providers; for example, they can leverage their network assets to aggregate and resell the services of third-party clouds. Similar to the cloud computing model, which achieves low-cost media services through the "pay-per-use" approach, mobile cloud computing can also adopt the utility billing model to acquire resources and provide Mobile Network as a Service (MNaaS). As shown in Fig. 1, the TOC is in a unique position as a cloud "broker" between the wireless networks and the third-party SPs, and can manage connectivity and offer flexibility in acquiring network resources on demand and in real time. There are three major roles, namely, cloud connectivity, delivery of cloud-based capabilities, and leveraging network assets to enhance cloud offerings. This TOC model can align itself in the cloud value chain. Furthermore, MNaaS can use network virtualization techniques to make the connectivity much easier, since it allows the
Abstract: Using Internet technology, the cloud provides virtualized IT resources as a service. Cloud computing is a combination of grid computing and cluster computing: over the Internet, a computing grid is created whose purpose is to utilize shared resources, such as computer software and hardware, on a pay-per-use model. The main idea of cloud computing is that you can access your data from any corner of the world using the Internet. Cloud computing is a general term for delivering hosted services over the Internet. It provides virtualized computing power and storage delivered via platform-agnostic infrastructures of abstracted hardware and software, accessed over the Internet. Cloud computing systems usually follow deployment models such as the public, private, hybrid, and community models.
Cloud computing as a concept is the result of the natural evolution of our everyday approach to using technology delivered via the Internet. Cloud computing came to the forefront as a result of advances in virtualization (e.g., VMware), distributed computing with server clusters (e.g., Google), and the increased availability of broadband Internet access. Business leaders describe cloud computing simply as the delivery of applications or IT services offered by an intermediary over the Internet (Microsoft, IBM). The recent global economic recession served as a booster for interest in cloud computing technologies as organizations sought ways to reduce their IT spending while remaining in harmony with performance and profits. The cloud computing buzz began in 2006 with the launch of Amazon EC2, gaining footing in 2007.
CORBA is a specification introduced by the Object Management Group (OMG) for providing cross-platform and cross-language interoperability among distributed components. The specification was originally designed to provide an interoperation standard that could be effectively used at the industrial level. The current release of the CORBA specification is version 3.0, and today the technology is not very popular, mostly because the development phase is a considerably complex task and the interoperability among components developed in different languages has never reached the proposed level of transparency. A fundamental component in the CORBA architecture is the Object Request Broker (ORB), which acts as a central object bus. A CORBA object registers the interface it exposes with the ORB, and clients can obtain a reference to that interface and invoke methods on it. The ORB is responsible for returning the reference to the client and managing all the low-level operations required to perform the remote method invocation. To simplify cross-platform interoperability, interfaces are defined in the Interface Definition Language (IDL), which provides a platform-independent specification of a component. An IDL specification is then translated into a stub-skeleton pair by specific CORBA compilers, which generate the required client (stub) and server (skeleton) components in a given programming language. These templates are completed with an appropriate implementation in the selected programming language. This allows CORBA components to be used across different runtime environments by simply using the stub and the skeleton that match the development language used. As a specification meant to be used at the industry level, CORBA provides interoperability among different implementations of its runtime.
In particular, at the lowest level, ORB implementations communicate with each other using the Internet Inter-ORB Protocol (IIOP), which standardizes the interactions of different ORB implementations. Moreover, CORBA provides an additional level of abstraction by separating the ORB, which mostly deals with the networking among nodes, from the Portable Object Adapter (POA), which is the runtime environment in which the skeletons are hosted and managed. Again, the interface between these two layers is clearly defined, giving more freedom and allowing different implementations to work together seamlessly.
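The stub/skeleton division of labor can be sketched with a plain Python proxy in place of real IDL-generated code. Everything here is illustrative: a real ORB also handles the networking (IIOP), marshaling, and object references that this sketch elides:

```python
# Hedged sketch of the CORBA stub/skeleton idea: the client calls an
# ordinary-looking method on a stub, which forwards the request to a
# skeleton that dispatches it to the actual implementation.

class BankSkeleton:
    """Server side: dispatches incoming requests to the implementation."""
    def __init__(self, impl):
        self._impl = impl
    def dispatch(self, method, *args):
        return getattr(self._impl, method)(*args)

class BankStub:
    """Client side: makes remote calls look like local method calls."""
    def __init__(self, skeleton):
        self._skeleton = skeleton  # stands in for the ORB + IIOP transport
    def get_balance(self, account_id):
        return self._skeleton.dispatch("get_balance", account_id)

class BankImpl:
    """The implementation that completes the generated skeleton template."""
    def get_balance(self, account_id):
        return {"alice": 42.0}[account_id]

stub = BankStub(BankSkeleton(BankImpl()))
print(stub.get_balance("alice"))  # 42.0
```

In CORBA proper, `BankStub` and `BankSkeleton` would be generated from one IDL definition by compilers for each language, which is what lets a client in one language call an implementation in another.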
There’s growing sentiment among many cloud experts that ultimately hybrid adoption will be most advantageous for many organizations. Warrilow says “for some time Gartner has advised that hybrid is the most likely scenario for most organizations.” Staten agrees with the notion for two reasons. First, “some applications and data sets simply aren’t a good fit with the cloud,” he says. This might be due to application architecture, degree of business risk (real or perceived), and cost, he says. Second, rather than making a cloud-or-no-cloud decision, “it’s more practical and effective to leverage the cloud for what makes the most sense and other deployment options where they make the most sense,” he says. In terms of strategy, Staten recommends regularly analyzing deployment decisions. “As cloud services mature, their applicability increases,” he says.
Two of the main customer objections actually pose potential opportunities for channel firms if they are handled well. Integration concerns about tying the cloud into existing infrastructure, and worries about data portability, need not be deal breakers; instead they give the solution provider a chance to demonstrate its value, knowledge, and skill set. Being able to explain to a potential customer in detail which party ultimately “owns” data placed in the cloud, particularly in a situation where a cloud provider might go out of business or the customer falls behind on payments, demonstrates the channel firm’s knowledge of cloud-based models. Data portability moves from a sales obstacle to overcome to a value-added service to sell. Likewise with integration: for channel firms selling cloud today, the greatest source of revenue after the sale lies in integration work, cloud to on-premise and cloud to cloud. A proven track record here with existing customers can serve as a blueprint or proof point to persuade more reluctant customer prospects, much as case studies are used.
components. Network isolation in the cloud can be achieved using techniques such as VLAN, VXLAN, VCDNI, and STT. Applications are deployed in a multi-tenant environment and consist of components that must be kept private, such as a database server that should be accessible only from selected web servers, with traffic from any other source denied. This is enabled using network isolation, port filtering, and security groups. These services help segment and protect the various layers of an application's deployment architecture and also isolate tenants from each other. The provider can use security domains and layer-3 isolation techniques to group virtual machines. Access to these domains can be controlled through the provider's port filtering capabilities or with more stateful packet filtering implemented by context switches or firewall appliances. Network isolation techniques such as VLAN tagging and security groups allow such configurations. Various levels of virtual switches can be configured in the cloud to isolate the different networks in the cloud environment.
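The security-group idea — only traffic matching an explicit allow rule reaches a protected component — can be sketched as a tiny rule evaluator. The group names and the web-tier-to-database rule below are invented for illustration:

```python
# Toy security-group evaluation: default-deny, with explicit allow rules.
# Rule and group names are hypothetical, not any provider's syntax.

RULES = [
    # (source_group, dest_group, port)
    ("web", "db", 3306),   # only web servers may reach the database
    ("any", "web", 443),   # anyone may reach the web tier over HTTPS
]

def allowed(source_group: str, dest_group: str, port: int) -> bool:
    """Default-deny: traffic passes only if some rule explicitly allows it."""
    return any(
        src in ("any", source_group) and dst == dest_group and p == port
        for src, dst, p in RULES
    )

print(allowed("web", "db", 3306))       # True: permitted web -> db path
print(allowed("internet", "db", 3306))  # False: the database stays isolated
```

Real cloud security groups evaluate essentially this kind of match at the virtual switch or hypervisor level, which is what lets the same physical network safely carry multiple tenants' traffic.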