In this study, we observe that cloud computing has been widely recognized as a rapidly growing computing infrastructure. Cloud computing offers many advantages by allowing users to consume infrastructure such as servers, networks, and data storage without burdening the owner's organization. In this paper we introduce Database-as-a-Service (DBaaS), which promises to move much of the operational burden of provisioning, configuration, scaling, performance tuning, backup, and privacy of the user's data to the service provider. DBaaS architectures offer organizations new and unique ways to offer, use, and manage database services. The fundamental differences arising from service orientation and discrete consumer-provider roles challenge conventional models, yet offer the potential for significant cost savings, improved service levels, and greater leverage of information across the business. As discussed in this paper, there is a variety of issues and considerations that must be understood to use DBaaS effectively in an organization. We introduce Relational Cloud, a scalable relational database-as-a-service for cloud computing environments. Database systems deployed on a cloud computing infrastructure face many new challenges, such as large-scale operation, elasticity, autonomic control to minimize operating cost, continuous availability, and dataset privacy. These challenges are in addition to making the systems fault-tolerant and highly available. Relational Cloud addresses three significant challenges: efficient multi-tenancy, elastic scalability, and database privacy.
Finally, cloud data lock-in can also be considered a security issue. The issue emerges the instant a client wishes to change cloud providers, or when a cloud provider decides to cease operation. What happens to the client's data, how it is transported to the new provider, and how sure the client can be that the initial provider erases all data rather than using it adversely against them are some of the questions that arise when dealing with data security. Such matters obviously do not apply in traditional data management, where organizations own, control, and manage their data on premises. However, that model imposes an additional burden on the conventional organization, which must invest more in database servers, data storage machines, and security controls to keep its data safe. On the other hand, when data is handled in the cloud, less data is stored on the client's premises and thus there is less exposure to on-premises data loss.
Operating a web site that requires database access, supports considerable traffic, and possibly connects to enterprise systems requires complete control of one or more servers to guarantee responsiveness to user requests. Servers supporting the web site must be hosted in a data center with access from the public Internet. Traditionally, this has been achieved by renting space for physical servers in a hosting center operated by a network provider, far from the enterprise's internal systems. With cloud computing, this can now be done by renting a virtual machine in a cloud hosting center. The web site can make use of open source software such as Apache HTTP Server, MySQL, and PHP (the so-called LAMP stack) or a Java™ stack, all of which is readily available. Alternatively, enterprises might prefer to use commercially supported software, such as WebSphere® Application Server and DB2®, on either Linux® or Windows operating systems. All
Bob Evans, senior vice president at Oracle, wrote some revealing facts in his blog post at Forbes, clearing away doubts people may have had. Consider some interesting facts in this regard. Almost eight years ago, before cloud terminology was even established, Oracle started developing a new generation of application suites (called Fusion Applications) designed for all modes of cloud deployment. Oracle Database 12c, released just recently, supports the cloud deployment framework of today's major data centers and is the outcome of those years of development effort. Oracle's software-as-a-service (SaaS) revenue has already exceeded the $1 billion mark, and it is the only company today to offer all levels of cloud services: SaaS, platform as a service (PaaS), and infrastructure as a service (IaaS). Oracle has helped over 10,000 customers reap the benefits of cloud infrastructure and now supports over 25 million users globally. One may argue that this could not have been possible if Larry Ellison hadn't appreciated cloud computing. We can understand the dilemma he must have faced as an innovator when these emerging technologies were creating disruption in the business (www.forbes.com/sites/oracle/2013/01/18/oracle-cloud-10000-customers-and-25-million-users/).
These storage servers provide volume storage to the running virtual machines. A volume can be the root device or an additional data volume attached to the machine. Extra data volumes can be attached to a guest dynamically; only under the Oracle VM hypervisor must the machine be in the stopped state before a data volume can be attached. After a VM instance is destroyed, it is disabled from the start action, but it remains in the database and its volumes are not yet deleted; a destroyed VM can still be recovered. After the destroyed state comes the expunged state, which signifies permanent deletion of the volume. The expunge delay is 86,400 seconds (24 hours) by default, but this can be changed. The administrator sets up and configures iSCSI on the host not only the first time but also during recovery from a host failure: whenever a host fails, the administrator has to set up and configure the iSCSI LUNs on that host again. XenServer uses the clustered logical volume manager to store VM images on iSCSI and Fibre Channel volumes; in this case CloudStack can support over-provisioning only if the storage server itself allows it, otherwise over-provisioning is not supported. With KVM, shared mount point storage is supported, but the mount path must be the same across all hosts in the cluster. The administrator must ensure that the storage is available; otherwise CloudStack will not attempt to mount or unmount the storage.
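The destroy/recover/expunge lifecycle described above can be sketched as a small state machine. This is an illustrative model, not the CloudStack API: the class and method names are ours, and only the default expunge delay of 86,400 seconds comes from the text.

```python
# Minimal model of the VM lifecycle: a destroyed VM is disabled from the
# start action but remains recoverable until the expunge delay elapses,
# after which deletion of the VM and its volume becomes permanent.

EXPUNGE_DELAY_SECONDS = 86_400  # CloudStack default; configurable

class VirtualMachine:
    def __init__(self, vm_id):
        self.vm_id = vm_id
        self.state = "Running"
        self.destroyed_at = None  # timestamp of the destroy action

    def destroy(self, now):
        # The VM stays in the database; its volumes are not yet deleted.
        self.state = "Destroyed"
        self.destroyed_at = now

    def recover(self):
        # Recovery is only possible before the VM has been expunged.
        if self.state != "Destroyed":
            raise ValueError("only a destroyed VM can be recovered")
        self.state = "Stopped"
        self.destroyed_at = None

    def tick(self, now):
        # Once the expunge delay has passed, deletion becomes permanent.
        if (self.state == "Destroyed"
                and now - self.destroyed_at >= EXPUNGE_DELAY_SECONDS):
            self.state = "Expunged"

vm = VirtualMachine("vm-1")
vm.destroy(now=0)
vm.tick(now=3_600)        # one hour later: still recoverable
print(vm.state)           # Destroyed
vm.tick(now=90_000)       # past the 24-hour delay
print(vm.state)           # Expunged
```

A real implementation would drive `tick` from a background scheduler; here time is passed in explicitly to keep the sketch deterministic.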
Apart from the vendor-specific migration methodologies and guidelines, there are also proposals independent from a specific cloud provider. Reddy and Kumar proposed a methodology for data migration that consists of the following phases: design, extraction, cleansing, import, and verification. Moreover, they categorized data migration into storage migration, database migration, application migration, business process migration, and digital data retention (Reddy and Kumar, 2011). In our proposal, we focus on the storage and database migration as we address the database layer. Morris specifies four golden rules of data migration with the conclusion that the IT staff does not often know about the semantics of the data to be migrated, which causes a lot of overhead effort (Morris, 2012). With our proposal of a step-by-step methodology, we provide detailed guidance and recommendations on both data migration and required application refactoring to minimize this overhead. Tran et al. adapted the function point method to estimate the costs of cloud migration projects and classified the applications potentially migrated to the cloud (Tran et al., 2011). As our assumption is that the decision to migrate to the cloud has already been taken, we do not consider aspects such as costs. We abstract from the classification of applications to define the cloud data migration scenarios and reuse distinctions, such as complete or partial migration, to refine a chosen migration scenario.
Relational databases are great for online transaction processing (OLTP) activities because they guarantee that transactions are processed successfully in order for the data to get stored in the database. In addition, relational databases have superior security features and a powerful querying engine. Over the last several years, NoSQL databases have soared in popularity, mainly for two reasons: the increasing amount of data being stored and access to elastic cloud computing resources. Disk solutions have become much cheaper and faster, which has led to companies storing more data than ever before. It is not uncommon for a company to have petabytes of data in this day and age. Normally, large amounts of data like this are used to perform analytics, data mining, pattern recognition, machine learning, and other tasks. Companies can leverage the cloud to provision many servers to distribute workloads across many nodes to speed up the analysis and then deprovision all of the servers when the analysis is finished.
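The scatter/gather pattern behind "distribute workloads across many nodes, then combine" can be sketched in a few lines. This is an illustration under stated assumptions: the worker "nodes" are simulated by a thread pool, and the partitioning scheme and node count are ours, not tied to any cloud provider's API.

```python
# Partition a dataset across several (simulated) worker nodes, analyze
# each partition independently, then combine the partial results.

from concurrent.futures import ThreadPoolExecutor

def analyze_partition(partition):
    # Stand-in for a real analytics task (mining, ML, pattern search, ...):
    # here we simply count the even values in the partition.
    return sum(1 for x in partition if x % 2 == 0)

def distributed_count_even(data, n_nodes=4):
    # Scatter: one chunk of the data per "node".
    chunks = [data[i::n_nodes] for i in range(n_nodes)]
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        partials = list(pool.map(analyze_partition, chunks))
    # Gather: combine the per-node partial results.
    return sum(partials)

print(distributed_count_even(list(range(1_000_000))))  # 500000
```

In a real deployment each chunk would go to a provisioned server rather than a local thread, and the servers would be deprovisioned once the gather step completes.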
Several new AWS services were introduced in 2012; some of them are in a beta stage at the time of this writing. Among the new services we note: Route 53, a low-latency DNS service used to manage users' public DNS records; Elastic MapReduce (EMR), a service supporting processing of large amounts of data using a hosted Hadoop running on EC2 and based on the MapReduce paradigm discussed in Section 4.6; Simple Workflow Service (SWF), which supports workflow management (see Section 4.4) and allows scheduling, management of dependencies, and coordination of multiple EC2 instances; ElastiCache, a service enabling Web applications to retrieve data from a managed in-memory caching system rather than a much slower disk-based database; DynamoDB, a scalable and low-latency fully managed NoSQL database service; CloudFront, a Web service for content delivery; and Elastic Load Balancer, a cloud service to automatically distribute incoming requests across multiple instances of the application. Two new services, Elastic Beanstalk and CloudFormation, are discussed next.
of meta-data associates the data items with tenants via tags, and the meta-data are used to optimize searches by channeling processing resources during a query to only those pieces of data bearing the relevant unique tag. In certain aspects, each tenant's virtual schema includes a variety of customizable fields, some or all of which may be designated as indexable. One goal of a traditional query optimizer is to minimize the amount of data that must be read from disk by choosing the selective tables or columns that will yield the fewest rows during processing. If the optimizer knows that a certain column has a very high cardinality, it will choose to use an index on that column instead of a similar index on a lower-cardinality column. However, consider a multitenant system in which a physical column has a large number of distinct values for most tenants but a small number of distinct values for one specific tenant. The overall high-cardinality strategy will then not yield better performance, because the optimizer is unaware that for this specific tenant the column is not selective. Furthermore, by using system-wide aggregate statistics, the optimizer might choose a query plan that is incorrect or inefficient for a single tenant that does not conform to the "normal" average of the entire database as determined from the gathered statistics. Therefore, the first phase typically includes generating tenant-level and user-level statistics to find the suitable tables or columns for the common subexpressions. The statistics gathered include information on the entity rows being tracked for each tenant, used to make decisions about query access paths, and a list of users who have access to privileged data. The second phase constructs an optimal plan based on the query graph. The difference is that some edges are labeled as directed, and a single node can consist of multiple relations, taking the private security model into account to keep data or applications separate.
The common subexpressions of the first phase are stored by building a many-to-many (MTM) physical table, which can also specify whether a user has access to a particular entity row. When handling multiple queries for entity rows that the current user can see, the optimizer must choose between accessing the MTM table from the user side or from the entity side of the relationship.
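The statistics problem described above can be made concrete with a toy computation. This is not any real optimizer's code; the tenant IDs and status codes are made up to show how a column that is high-cardinality in system-wide statistics can be low-cardinality for one tenant.

```python
# System-wide cardinality versus per-tenant cardinality: a plan chosen
# from global statistics can be wrong for an atypical tenant.

from collections import defaultdict

rows = (
    # (tenant_id, status) -- most tenants use many distinct status codes,
    # but tenant "t9" only ever uses one.
    [("t%d" % i, "code_%d" % j) for i in range(9) for j in range(100)]
    + [("t9", "ACTIVE")] * 100
)

def cardinality(values):
    return len(set(values))

global_card = cardinality(status for _, status in rows)

per_tenant = defaultdict(list)
for tenant, status in rows:
    per_tenant[tenant].append(status)

print(global_card)                    # 101 distinct values system-wide
print(cardinality(per_tenant["t9"]))  # 1 distinct value for tenant t9
```

A global optimizer would see 101 distinct values and consider an index on this column selective, yet for tenant "t9" every row matches, which is exactly the mismatch that tenant-level statistics are meant to catch.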
The next layer within ITaaS is Platform as a Service, or PaaS. At the PaaS level, service providers offer packaged IT capability, or logical resources such as databases, file systems, and an application operating environment. Current cases in the industry include IBM's Rational Developer Cloud, Microsoft's Azure, and Google's AppEngine. At this level, two core technologies are involved. The first is cloud-based software development, testing, and running. PaaS is a developer-oriented service. It used to be very difficult for developers to write programs over the network in a distributed computing environment; now, thanks to improved network bandwidth, two technologies address this problem. The first is online development tools: developers can complete remote development and deployment directly through browser and remote-console technologies (the development tools run in the console) without installing tools locally. The other is the integration of local development tools with cloud computing, which means deploying the developed application directly into the cloud computing environment through local development tools. The second core technology is the large-scale distributed application operating environment: scalable application middleware, databases, and file systems built with a large number of servers. This operating environment enables an application to make full use of the abundant computing and storage resources in the cloud computing center, scaling beyond the resource limits of a single physical machine and meeting the access requirements of millions of Internet users.
The Logistics Process Designer (LPD) is a GWT application using HTML 5 elements, running on an Apache Tomcat Web server. HTML 5 is still in development, but it is the future standard for modern Web pages, featuring many new functions for browsers without the need for plug-ins. Its architecture is shown in Fig. 4. The LPD Frontend is the client side of the LPD, which displays the user interface. The Canvas of the LPD is based on the yFiles for HTML framework to render graphs and on the GWT-DND library for drag-and-drop features. The frontend communicates with the Persistence and Process Modeling Taxonomy (PMT) backend services. The LPD Persistence uses JPA to access a PostgreSQL database. The Persistence API provides the functionality to save and load process models as well as to version them. Among other data, the Process Modeling Taxonomy stores meta information about the tenant-specific process models and administers the IDs under which the models are stored in the Persistence module. The PMT is stored in an OpenLDAP server and is accessed through JNDI. The PMT includes all available Logistics Mall applications and services, combining them with the information about the applications and services of a tenant. Spring is used for configuration and integration of the components Frontend, Persistence, and PMT. The Common component contains the data transfer objects for the communication of the core components.
B. Platform as a Service (PaaS) - In an e-commerce website, the shopping cart, checkout, and payment mechanisms running on the merchant's servers are examples of PaaS. This is a cloud-based environment used to develop, test, run, and manage applications. The service includes web servers, development tools, an execution runtime, and an online database. Platform-as-a-Service (PaaS) refers to cloud computing services that supply an on-demand environment for development. Its approach is to provide development environments as needed, without the complexity of purchasing, creating, or managing basic
efficiency for data centers and large-scale multimedia services. The paper also highlights important challenges in designing and maintaining green data centers and identifies some of the opportunities in offering green streaming services in cloud computing frameworks. Zhang Mian presented a study that describes cloud computing-based multimedia databases in contrast with the traditional and object-oriented database models, discusses two approaches to cloud-based object-oriented multimedia databases, and summarizes the characteristics, advantages, and development of such multimedia database models. Chun-Ting Huang conducted an in-depth survey of recent multimedia storage security research in association with cloud computing. Neha Jain presented a data security system for cloud computing using the DES algorithm. This cipher block chaining system is intended to be secure for clients and the server. The security architecture of the system is designed using DES cipher block chaining, which eliminates the fraud that occurs today with stolen data. To secure the system, the communication between modules is encrypted using a symmetric key.
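The cipher-block-chaining (CBC) structure mentioned above can be sketched as follows. DES itself is not in the Python standard library (and is obsolete), so a toy XOR "cipher" stands in for the block cipher here; the sketch shows only the chaining mechanism and is NOT secure encryption.

```python
# CBC mode: each plaintext block is XORed with the previous ciphertext
# block (or the IV) before enciphering, so identical plaintext blocks
# produce different ciphertext blocks.

def xor_block(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encrypt_block(block, key):
    return xor_block(block, key)   # stand-in for DES encryption

def toy_decrypt_block(block, key):
    return xor_block(block, key)   # XOR is its own inverse

BLOCK = 8  # DES uses 64-bit (8-byte) blocks

def cbc_encrypt(plaintext, key, iv):
    assert len(plaintext) % BLOCK == 0 and len(key) == BLOCK == len(iv)
    prev, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        block = toy_encrypt_block(xor_block(plaintext[i:i+BLOCK], prev), key)
        out += block
        prev = block               # the "chaining" step
    return out

def cbc_decrypt(ciphertext, key, iv):
    prev, out = iv, b""
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i+BLOCK]
        out += xor_block(toy_decrypt_block(block, key), prev)
        prev = block
    return out

msg = b"8 bytes!" * 2              # two identical plaintext blocks
key, iv = b"K" * 8, b"I" * 8
ct = cbc_encrypt(msg, key, iv)
assert cbc_decrypt(ct, key, iv) == msg
```

With a real DES (or, today, AES) implementation, only `toy_encrypt_block`/`toy_decrypt_block` would change; the chaining logic is the same.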
Mobility is one of the main issues of MCC, as mobile devices are inherently involved. One particular position may be suitable for a device, but services should not be interrupted when its location changes. Mobility is one of the causes of disconnection. In mobility management, localization is very important and can be achieved using two kinds of techniques: infrastructure-based and peer-based. Infrastructure-based techniques use GSM, Wi-Fi, ultrasound RF, GPS, and RFID, which are less suitable for the needs of mobile cloud devices. On the other hand, peer-based techniques are better suited to managing mobility, given that relative location is adequate and can be implemented with short-range protocols such as Bluetooth. Escort represents a peer-based technique for localizing without using GPS or Wi-Fi, which are power-consuming. Here, social encounters between users are monitored by audio signaling, and the walking traits of individuals by phone compasses and accelerometers. Various routes are created by various encounters. For example, if X wants to locate Y, and X had met Z recently and Z had met Y, the route is first calculated to the point where X met Z, and then to the place where Z met Y. There may be many possible paths, but the optimal one is chosen. Thus a mobile device can be localized when it is in a mobile cloud.
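The X-via-Z-to-Y example above amounts to a shortest-path search over an encounter graph. The following is a small sketch of that idea; the user names and graph layout are made up for illustration and do not come from the Escort system itself.

```python
# Encounters between users form a graph; locating a user means finding a
# path through recent encounters, preferring the one with fewest hops.

from collections import deque

# encounters["X"] lists the users X has recently met.
encounters = {
    "X": ["Z", "A"],
    "Z": ["X", "Y"],
    "A": ["X", "B"],
    "B": ["A", "Y"],
    "Y": ["Z", "B"],
}

def encounter_route(start, target):
    """Breadth-first search returns one optimal (fewest-hops) path."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for neighbour in encounters.get(path[-1], []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # target was never encountered

print(encounter_route("X", "Y"))  # ['X', 'Z', 'Y']
```

The real system would additionally weight edges by how recent and how reliable each encounter is, rather than treating all hops equally.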
Abstract The surging demand for inexpensive and scalable IT infrastructures has led to the widespread adoption of Cloud computing architectures. These architectures have gained momentum due to their inherent capacity to simplify IT infrastructure building and maintenance, by making the related costs easily accountable and paid on a pay-per-use basis. Cloud providers strive to host as many service providers as possible to increase their income and, toward that goal, exploit virtualization techniques to enable the provisioning of multiple virtual machines (VMs), possibly belonging to different service providers, on the same host. At the same time, virtualization technologies enable runtime VM migration, which is very useful for dynamically managing Cloud resources. Leveraging these features, data center management infrastructures can allocate running VMs on as few hosts as possible, so as to reduce total power consumption by switching off servers that are not required. This chapter presents and discusses management infrastructures for power-efficient Cloud architectures. Power efficiency relates to the amount of power required to run a particular workload on the Cloud and pushes toward greedy consolidation of VMs. However, because Cloud providers offer Service-Level Agreements (SLAs) that need to be enforced to prevent unacceptable runtime performance, the design and implementation of a management infrastructure for power-efficient Cloud architectures are extremely complex tasks that have to deal with heterogeneous aspects, e.g., SLA representation and enforcement, runtime reconfigurations, and workload prediction. This chapter aims at presenting the current state of the art of power-efficient management infrastructures for the Cloud, carefully considering the main realization issues, design guidelines, and design choices.
In addition, after an in-depth presentation of related works in this area, it presents some novel experimental results to better stress the complexities introduced by power-efficient management infrastructures for the Cloud.
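The greedy VM consolidation the abstract refers to can be sketched as first-fit-decreasing bin packing. The host capacity and VM demands below are illustrative numbers, and this sketch deliberately ignores the SLA constraints that, as the abstract stresses, make real consolidation much harder.

```python
# Place VMs (by CPU demand) on as few hosts as possible, so that the
# unused hosts can be switched off to save power.

HOST_CAPACITY = 100  # CPU units per host (assumed)

def consolidate(vm_demands, capacity=HOST_CAPACITY):
    """First-fit decreasing: returns a list of hosts, each a list of VMs."""
    hosts = []  # each entry: [remaining_capacity, [vm demands...]]
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if host[0] >= demand:      # first host with enough room wins
                host[0] -= demand
                host[1].append(demand)
                break
        else:                          # no host fits: power on a new one
            hosts.append([capacity - demand, [demand]])
    return [vms for _, vms in hosts]

placement = consolidate([50, 30, 70, 20, 10, 90])
print(len(placement))  # 3 hosts suffice for 270 CPU units of demand
```

A production consolidator would also check each candidate placement against SLA and performance-interference models before migrating, and would use live VM migration to realize the new placement at runtime.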
There’s growing sentiment among many cloud experts that ultimately hybrid adoption will be most advantageous for many organizations. Warrilow says “for some time Gartner has advised that hybrid is the most likely scenario for most organizations.” Staten agrees with the notion for two reasons. First, “some applications and data sets simply aren’t a good fit with the cloud,” he says. This might be due to application architecture, degree of business risk (real or perceived), and cost, he says. Second, rather than making a cloud-or-no-cloud decision, “it’s more practical and effective to leverage the cloud for what makes the most sense and other deployment options where they make the most sense,” he says. In terms of strategy, Staten recommends regularly analyzing deployment decisions. “As cloud services mature, their applicability increases,” he says.
Case in point? Six in 10 channel firms say that cloud has generally strengthened their customer relationships, with just 15% claiming it has weakened them and roughly a quarter saying that their client bonds have remained the same. This is encouraging news given that many in the channel have publicly feared that cloud would drive a wedge between them and their customers. There has been rampant apprehension about such ill effects as a resurgence in vendor direct sales and end-user customers choosing a self-service model for their IT solutions, i.e., procuring SaaS applications over the Internet. And while both of these trends are happening to a certain extent, CompTIA data suggest not at such dire expense to most of the channel, especially those that have reached a high level of cloud maturity today and intend to remain committed. That said, not all channel firms that adopt cloud will engender more goodwill with customers; some may simply have a customer set that is not cloud-friendly, others may not gain sufficient expertise to provide value, and so on.
A common option for reducing the operating costs of only sporadically used IT infrastructure, such as in the case of the “warm standby”, is Cloud Computing. As defined by NIST, Cloud Computing provides the user with simple, direct access to a pool of configurable, elastic computing resources (e.g., networks, servers, storage, applications, and other services) with a pay-per-use pricing model. More specifically, this means that resources can be quickly (de-)provisioned by the user with minimal provider interaction and are billed on the basis of actual consumption. This pricing model makes Cloud Computing a well-suited platform for hosting a replication site that offers high availability at a reasonable price. Such a warm standby system, with infrastructure resources (virtual machines, images, etc.) located and updated in the Cloud, is herein referred to as a “Cloud-Standby-System”. The relevance and potential of this cloud-based option for hosting replication systems becomes even more obvious in light of the current market situation: only fifty percent of small and medium enterprises currently practice BCM with regard to their IT services, while downtime costs sum up to $12,500–23,000 per day for them.
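A back-of-the-envelope calculation illustrates why a pay-per-use standby site can pay off. Only the downtime cost range ($12,500-23,000 per day) comes from the text; the standby price and outage frequency below are assumptions made purely for illustration.

```python
# Compare a year of (assumed) cloud standby costs against the expected
# loss from unprotected downtime, using the figures discussed above.

DOWNTIME_COST_PER_DAY = (12_500, 23_000)   # range cited in the text
STANDBY_COST_PER_DAY = 40                  # assumed pay-per-use cloud bill
EXPECTED_OUTAGE_DAYS_PER_YEAR = 2          # assumed outage frequency

yearly_standby_cost = STANDBY_COST_PER_DAY * 365
expected_loss_low = DOWNTIME_COST_PER_DAY[0] * EXPECTED_OUTAGE_DAYS_PER_YEAR
expected_loss_high = DOWNTIME_COST_PER_DAY[1] * EXPECTED_OUTAGE_DAYS_PER_YEAR

print(yearly_standby_cost)                    # 14600
print(expected_loss_low, expected_loss_high)  # 25000 46000
```

Under these illustrative numbers, two days of avoided downtime per year already outweigh a full year of standby costs; the actual trade-off depends on each enterprise's outage rates and cloud pricing.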
SOA is widely considered to be an enabling technology for cloud computing. Cloud computing requires a high degree of encapsulation: there should not be any hard dependencies on resource location if true virtualization and elasticity are to be achieved in the cloud. Also, the threads of execution of different users should be properly isolated, as any vulnerability could result in the information or data of one user leaking to another consumer. The web services standards (WS-*) used in SOA are also used in the cloud computing domain to solve various issues, such as asynchronous messaging, metadata exchange, and event handling. SOA is an architectural style that is agnostic of the technology standards adopted in the assembly of composite applications. The service orientation provided by SOA helps in designing software from separate pieces, each providing distinct application functionality as a service to other applications. This feature is independent of any platform, vendor, or technology. Services can be combined by other software applications to provide the complete functionality of a large software application. SOA makes the cooperation of computers connected over a network easy: an arbitrary number of services can run on a computer, and each service can communicate with any other service in the network without human interaction and without modifications to the underlying program itself. Within an SOA, services use defined protocols for transferring and interpreting messages. WSDL is used to describe the services, and the SOAP protocol is used for communication.
The Heartbeat Service periodically collects dynamic performance information about the node and publishes this information to the membership service in the Aneka Cloud. These data are collected by the index node of the Cloud, which makes them available to services such as reservations and scheduling in order to optimize the use of a heterogeneous infrastructure. As already discussed, basic information about memory, disk space, CPU, and operating system is collected. Moreover, additional data are pulled into the “alive” message, such as information about the software installed on the system and any other useful information. More precisely, the infrastructure has been designed to carry any type of data that can be expressed by means of text-valued properties. As previously noted, the information published by the Heartbeat Service is mostly concerned with the properties of the node. A specific component, called the Node Resolver, is in charge of collecting these data and making them available to the Heartbeat Service. Aneka provides different implementations of this component to cover a wide variety of hosting environments. A variety of operating systems are supported through different implementations of the PAL, and different node resolvers allow Aneka to capture other types of data that do not strictly depend on the hosting operating system. For example, retrieving the public IP of the node differs between physical machines and virtual instances hosted in the infrastructure of an IaaS provider such as EC2 or GoGrid. In a virtual deployment, a different node resolver is used so that all other components of the system can work transparently.
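The resolver/heartbeat split described above can be sketched as follows. The class and property names here are ours, not Aneka's actual API; the sketch only illustrates the pattern of a pluggable resolver feeding text-valued properties into a periodically published "alive" message.

```python
# A node resolver gathers text-valued properties about the node; the
# heartbeat service pulls them into each message it publishes to a
# stand-in for the Cloud's index node.

import platform

class NodeResolver:
    """Collects node properties; different resolvers could be swapped in
    for physical hosts versus virtual instances on an IaaS provider."""
    def resolve(self):
        # Everything is carried as text-valued properties, mirroring the
        # design decision described in the text.
        return {
            "os": platform.system(),
            "machine": platform.machine(),
            "python": platform.python_version(),
        }

class HeartbeatService:
    def __init__(self, resolver, index):
        self.resolver = resolver
        self.index = index          # stand-in for the membership index

    def beat(self, node_id):
        # Pull the current properties into the "alive" message.
        message = {"node": node_id, "properties": self.resolver.resolve()}
        self.index.append(message)
        return message

index = []
hb = HeartbeatService(NodeResolver(), index)
msg = hb.beat("worker-01")
assert index[0]["node"] == "worker-01"
assert all(isinstance(v, str) for v in msg["properties"].values())
```

Swapping in a different resolver (for example, one that queries an IaaS metadata endpoint for the public IP) changes what is published without touching the heartbeat logic, which is the transparency property the text describes.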