Cloud computing has been widely recognized as a rapidly growing computing infrastructure. It offers many advantages by allowing users to consume infrastructure such as servers, networks, and data storage without burdening the owner's organization. In this paper we introduce Database-as-a-Service (DBaaS), which promises to move much of the operational burden of provisioning, configuration, scaling, performance tuning, backup, and privacy of the user's data to the service provider. DBaaS architectures offer organizations new and unique ways to offer, use, and manage database services. The fundamental shifts toward service orientation and discrete consumer-provider roles challenge conventional models, yet offer the potential for significant cost savings, improved service levels, and greater leverage of information across the business. As discussed in this paper, a variety of issues and considerations must be understood to use DBaaS effectively in any organization. We introduce Relational Cloud, a scalable relational database-as-a-service for cloud computing environments. Database systems deployed on a cloud computing infrastructure face many new challenges, such as large-scale operation, elasticity, autonomic control to minimize operating cost, continuous availability, and dataset privacy. These challenges are in addition to making the systems fault-tolerant and highly available. Relational Cloud addresses three significant challenges: efficient multi-tenancy, elastic scalability, and database privacy.
Finally, cloud data lock-in can also be considered a security issue. The issue emerges the instant a client wishes to change cloud providers, or when a cloud provider decides to cease operation. What happens to the data, how it is transported to the other provider, and how certain the client can be that the initial provider erases all data rather than using it adversely are some of the questions that arise when dealing with data security. Such matters obviously do not apply in traditional data management, where organizations own, control, and manage their data on premises. However, on-premises management places an additional burden on the conventional organization, which must invest more in database servers, data storage machines, and security controls to keep data safe. On the other hand, when data is handled in the cloud, less data is stored on the client's premises and there is correspondingly less exposure to on-site data loss.
Operating a web site that requires database access, supports considerable traffic, and possibly connects to enterprise systems requires complete control of one or more servers to guarantee responsiveness to user requests. Servers supporting the web site must be hosted in a data center with access from the public Internet. Traditionally, this has been achieved by renting space for physical servers in a hosting center operated by a network provider far from the enterprise's internal systems. With cloud computing, this can now be done by renting a virtual machine in a cloud hosting center. The web site can make use of open source software such as Apache HTTP Server, MySQL, and PHP (the so-called LAMP stack) or a Java™ stack, all of which are readily available. Alternatively, enterprises might prefer commercially supported software, such as WebSphere® Application Server and DB2®, on either Linux® or Windows operating systems.
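The database-backed page described above can be sketched in a few lines. This is a minimal illustration only: it uses Python's built-in sqlite3 as a stand-in for MySQL, and the table and function names are hypothetical, not part of any stack mentioned in the text.

```python
import sqlite3

def render_page(db_path=":memory:"):
    """Toy dynamic page: query a database and emit an HTML fragment,
    as a PHP page on a LAMP stack would (sqlite3 stands in for MySQL)."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, title TEXT)")
    conn.execute("INSERT INTO posts (title) VALUES (?)", ("Hello from the cloud",))
    conn.commit()
    rows = conn.execute("SELECT title FROM posts").fetchall()
    conn.close()
    # Render a minimal HTML list from the query result
    return "<ul>" + "".join(f"<li>{t}</li>" for (t,) in rows) + "</ul>"

print(render_page())
```

On a rented virtual machine, the same three-part split (web server, runtime, database) applies regardless of whether the stack is open source or commercially supported.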
Several new AWS services were introduced in 2012; some of them are in a beta stage at the time of this writing. Among the new services we note: Route 53, a low-latency DNS service used to manage users' public DNS records; Elastic MapReduce (EMR), a service supporting processing of large amounts of data using hosted Hadoop running on EC2 and based on the MapReduce paradigm discussed in Section 4.6; Simple Workflow Service (SWF), which supports workflow management (see Section 4.4) and allows scheduling, management of dependencies, and coordination of multiple EC2 instances; ElastiCache, a service enabling Web applications to retrieve data from a managed in-memory caching system rather than a much slower disk-based database; DynamoDB, a scalable, low-latency, fully managed NoSQL database service; CloudFront, a Web service for content delivery; and Elastic Load Balancer, a cloud service that automatically distributes incoming requests across multiple instances of the application. Two new services, Elastic Beanstalk and CloudFormation, are discussed next.
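The caching idea behind a service like ElastiCache can be shown with a toy cache-aside pattern: check an in-memory cache before falling back to the (simulated) slow disk-based database. This is a generic sketch, not the ElastiCache API; all names here are illustrative.

```python
class CacheAside:
    """Toy cache-aside pattern: serve repeated reads from memory
    instead of the slower backing database."""
    def __init__(self, backing_store):
        self.cache = {}
        self.store = backing_store
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1          # fast path: in-memory cache
            return self.cache[key]
        self.misses += 1
        value = self.store[key]     # slow path: database lookup
        self.cache[key] = value     # populate cache for next time
        return value

db = {"user:1": "alice"}
c = CacheAside(db)
c.get("user:1")   # miss -> reads the database
c.get("user:1")   # hit  -> served from memory
print(c.hits, c.misses)  # 1 1
```

The payoff is exactly the one the text describes: the second read never touches the disk-based store.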
The next layer within ITaaS is Platform as a Service, or PaaS. At the PaaS level, service providers offer packaged IT capability, or logical resources such as databases, file systems, and application operating environments. Current industry examples include IBM's Rational Developer Cloud, Microsoft's Azure, and Google's App Engine. Two core technologies are involved at this level. The first is cloud-based software development, testing, and running. PaaS is oriented toward software developers. It used to be very difficult for developers to write programs over the network in a distributed computing environment; now, thanks to improved network bandwidth, two technologies address this problem. The first is online development tools: developers can complete remote development and deployment directly through browser and remote-console technologies (the development tools run in the console), without installing development tools locally. The second is integration of local development tools with cloud computing, meaning the developed application is deployed directly into the cloud computing environment from local tools. The second core technology is the large-scale distributed application operating environment: scalable application middleware, databases, and file systems built from a large number of servers. This operating environment enables applications to make full use of the abundant computing and storage resources in the cloud computing center, to scale beyond the resource limits of a single physical machine, and to meet the access requirements of millions of Internet users.
The Logistics Process Designer (LPD) is a GWT application using HTML5 elements, running on an Apache Tomcat web server. HTML5 is still in development, but it is the future standard for modern web pages, featuring many new functions for browsers without the need for plug-ins. Its architecture is shown in Fig. 4. The LPD Frontend is the client side of the LPD, which displays the user interface. The canvas of the LPD is based on the yFiles for HTML framework to render graphs and on the GWT-DND library for drag-and-drop features. The frontend communicates with the Persistence and Process Modeling Taxonomy (PMT) backend services. The LPD Persistence uses JPA to access a PostgreSQL database. The Persistence API provides the functionality to save, load, and version process models. Among other data, the Process Modeling Taxonomy stores meta-information about the tenant-specific process models and administers the IDs under which the models are stored in the Persistence module. The PMT is stored in an OpenLDAP server and is accessed through JNDI. The PMT includes all available Logistics Mall applications and services, combined with the information about the applications and services of a tenant. Spring is used for configuration and integration of the Frontend, Persistence, and PMT components. The Common component contains the data transfer objects for the communication of the core components.
Network isolation in the cloud can be achieved using various techniques such as VLAN, VXLAN, VCDNI, STT, and others. Applications are deployed in a multi-tenant environment and consist of components that must be kept private, such as a database server that should be accessed only from selected web servers, with traffic from any other source denied. This is enabled using network isolation, port filtering, and security groups. These services help segment and protect the various layers of an application deployment architecture and also isolate tenants from each other. The provider can use security domains and layer-3 isolation techniques to group virtual machines. Access to these domains can be controlled using the provider's port-filtering capabilities, or with more stateful packet filtering implemented by context switches or firewall appliances. Network isolation techniques such as VLAN tagging and security groups allow such configurations. Various levels of virtual switches can be configured in the cloud to provide isolation between the different networks in the cloud environment.
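The "database reachable only from selected web servers" rule above can be modeled as a tiny security-group check. This is a toy model under assumed semantics, not any provider's API; the rule format (CIDR, port) is hypothetical.

```python
import ipaddress

def allowed(rules, src_ip, dst_port):
    """Admit a packet only if some rule matches both its source network
    and its destination port (toy security-group semantics)."""
    ip = ipaddress.ip_address(src_ip)
    return any(ip in ipaddress.ip_network(cidr) and dst_port == port
               for cidr, port in rules)

# Database server reachable on 3306 only from the web-server subnet
db_rules = [("10.0.1.0/24", 3306)]

print(allowed(db_rules, "10.0.1.15", 3306))    # True  (selected web server)
print(allowed(db_rules, "203.0.113.9", 3306))  # False (any other source)
```

Real security groups add directions, protocols, and statefulness, but the default-deny, match-to-admit logic is the same.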
Meta-data associates the data items with tenants via tags, and the meta-data is used to optimize searches by channeling processing resources during a query to only those pieces of data bearing the relevant unique tag. In certain aspects, each tenant's virtual schema includes a variety of customizable fields, some or all of which may be designated as indexable. One goal of a traditional query optimizer is to minimize the amount of data that must be read from disk by choosing the selective tables or columns that will yield the fewest rows during processing. If the optimizer knows that a certain column has very high cardinality, it will choose an index on that column over a similar index on a lower-cardinality column. However, consider a multitenant system in which a physical column has a large number of distinct values for most tenants but a small number of distinct values for a specific tenant. The overall high-cardinality strategy then performs poorly, because the optimizer is unaware that for this specific tenant the column is not selective. Furthermore, by using system-wide aggregate statistics, the optimizer might choose a query plan that is incorrect or inefficient for a single tenant that does not conform to the "normal" average of the entire database as determined from the gathered statistics. Therefore, the first phase typically includes generating tenant-level and user-level statistics to find the suitable tables or columns for the common subexpressions. The statistics gathered include information on the entity rows tracked per tenant, used to make decisions about query access paths, and a list of users with access to privileged data. The second phase constructs an optimal plan based on the query graph. The difference is that some edges are labeled as directed and a single node may consist of multiple relations, taking the private security model into account to keep data or applications separate.
The common subexpressions of the first phase are stored in a many-to-many (MTM) physical table, which can also specify whether a user has access to a particular entity row. When handling multiple queries for entity rows that the current user can see, the optimizer must choose between accessing the MTM table from the user side or from the entity side of the relationship.
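The cardinality problem described above can be made concrete with a small sketch: the same physical column can be high-cardinality for one tenant and nearly constant for another, which flips the index choice. The data and function names are illustrative assumptions, not the system's actual statistics API.

```python
def column_cardinality(rows, tenant, column):
    """Number of distinct values of `column` among one tenant's rows."""
    return len({r[column] for r in rows if r["tenant"] == tenant})

def pick_index(rows, tenant, candidates):
    """Tenant-aware heuristic: index the most selective column
    *for this tenant*, not for the table as a whole."""
    return max(candidates, key=lambda c: column_cardinality(rows, tenant, c))

# Tenant A: 100 distinct regions, one status value.
# Tenant B: one region, 100 distinct status values.
rows = (
    [{"tenant": "A", "region": f"r{i}", "status": "open"} for i in range(100)]
    + [{"tenant": "B", "region": "r0", "status": f"s{i}"} for i in range(100)]
)

print(pick_index(rows, "A", ["region", "status"]))  # region
print(pick_index(rows, "B", ["region", "status"]))  # status
```

A system-wide optimizer sees both columns as high-cardinality and cannot make this distinction; that is precisely why the first phase gathers tenant-level statistics.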
A health care system is a smart information system that can provide people with basic health monitoring and physiological index analysis services. Without Internet-based technologies it is hard to share data among isolated professional medical services such as PACS (picture archiving and communication systems), EHRs (electronic health records), and HISs (hospital information systems). Not long ago, such systems were usually implemented in a traditional MIS (management information system) mode, which cannot deliver sufficient health care services on a uniform platform even when it exploits several isolated Internet technologies. Currently, cloud computing, as an emerging state-of-the-art information technology (IT) platform, can provide economical and on-demand services for customers. It offers high performance and transparency to end users, fulfilling the flexibility and scalability requirements of service-oriented systems. Such a platform can meet the infrastructure demands of a health care system. With the rapid progress of cloud capacity, more and more applications and services are provided in an anything-as-a-service (XaaS) mode (e.g., security as a service, testing as a service, database as a service, and even everything as a service). Google Docs and Amazon S3
B. Platform as a Service (PaaS) - In an e-commerce website, the shopping cart, checkout, and payment mechanisms running on the merchant's servers are examples of PaaS. This is a cloud-based environment used to develop, test, run, and manage applications. The service includes web servers, development tools, an execution runtime, and an online database. Platform as a Service (PaaS) refers to cloud computing services that supply an on-demand environment for development. Its approach is to provide development environments according to need, without the complexity of purchasing, creating, or managing basic
efficiency for data centers and large-scale multimedia services. The paper also highlights important challenges in designing and maintaining green data centers and identifies opportunities in offering green streaming services in cloud computing frameworks. Zhang Mian presented a study describing the cloud computing-based multimedia database in relation to the traditional database and the object-oriented database model, discussed two approaches to cloud-based object-oriented multimedia databases, and summarized the characteristics, advantages, and development of such multimedia database models. Chun-Ting Huang conducted an in-depth survey of recent multimedia storage security research in association with cloud computing. Neha Jain presented a data security system for cloud computing using the DES algorithm; this cipher-block-chaining system is intended to be secure for clients and server. The security architecture of the system is designed using DES in cipher-block-chaining mode, which mitigates the fraud that occurs today with stolen data. To secure the system, the communication between modules is encrypted using a symmetric key.
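The cipher-block-chaining (CBC) step mentioned above can be sketched briefly. DES itself is not in Python's standard library, so the sketch below substitutes a toy XOR "block cipher" purely to show the chaining: each plaintext block is XORed with the previous ciphertext block before encryption, so identical plaintext blocks produce different ciphertext. This illustrates CBC only, not the cited system's actual cipher.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_cipher(block, key):
    # Stand-in for the DES block cipher (XOR is its own inverse)
    return xor_bytes(block, key)

def cbc_encrypt(blocks, key, iv):
    out, prev = [], iv
    for block in blocks:
        ct = toy_cipher(xor_bytes(block, prev), key)  # chain previous ciphertext
        out.append(ct)
        prev = ct
    return out

def cbc_decrypt(blocks, key, iv):
    out, prev = [], iv
    for ct in blocks:
        out.append(xor_bytes(toy_cipher(ct, key), prev))
        prev = ct
    return out

key, iv = b"\x5a" * 8, b"\x00" * 8
msg = [b"CLIENTAA", b"CLIENTAA"]        # two identical 8-byte blocks
ct = cbc_encrypt(msg, key, iv)
assert ct[0] != ct[1]                   # chaining hides the repetition
assert cbc_decrypt(ct, key, iv) == msg  # round-trips correctly
```

With ECB (no chaining), the two identical blocks would encrypt identically, leaking structure; that is the property CBC is chosen to avoid.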
Bob Evans, senior vice president at Oracle, further wrote some revealing facts in his blog post at Forbes, clearing away any doubts people may have had in their minds. You may like to consider some interesting facts in this regard. Almost eight years ago, before cloud terminology was established, Oracle started developing a new generation of application suites (called Fusion Applications) designed for all modes of cloud deployment. Oracle Database 12c, released just recently, supports the cloud deployment framework of major data centers today and is the outcome of the development efforts of the past few years. Oracle's software as a service (SaaS) revenue has already exceeded the $1 billion mark, and it is the only company today to offer all levels of cloud services: SaaS, platform as a service (PaaS), and infrastructure as a service (IaaS). Oracle has helped over 10,000 customers reap the benefits of the cloud infrastructure and now supports over 25 million users globally. One may argue that this could not have been possible if Larry Ellison hadn't appreciated cloud computing. Certainly, we can understand the dilemma he must have faced as an innovator when these emerging technologies were creating disruption in the business (www.forbes.com/sites/oracle/2013/01/18/oracle-cloud-10000-customers-and-25-million-users/).
techniques that support not only data privacy, but also the privacy of the accesses that users make on such data. This problem has been traditionally addressed by Private Information Retrieval (PIR) proposals (e.g., ), which provide protocols for querying a database that prevent the external server from inferring which data are being accessed. PIR solutions, however, have high computational complexity, and alternative approaches have been proposed. These novel approaches rely on the Oblivious RAM structure (e.g., [33,47,48]) or on the definition of specific tree-based data structures combined with a dynamic allocation of the data (e.g., [29,30]). The goal is to support access to a collection of encrypted data while preserving access and pattern confidentiality, meaning that an observer can infer neither which data are accessed nor whether two accesses target the same data. Besides protecting access and pattern confidentiality, it is also necessary to design mechanisms for protecting the integrity and authenticity of the computations, that is, to guarantee the correctness, completeness, and freshness of query results. Most of the techniques that can be adopted for verifying the integrity of query results operate on a single relation and are based on the idea of complementing the data with additional data structures (e.g., Merkle trees) or of introducing into the data collection fake tuples that can be efficiently checked to detect incorrect or incomplete results (e.g., [41,46,50–52]). Interesting aspects that need further analysis relate to the design of efficient techniques able to verify the completeness and correctness of the results of complex queries (e.g., join operations among multiple relations, possibly stored and managed by different cloud servers with different levels of trust).
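The Merkle-tree idea mentioned above can be sketched in a few lines: the client keeps only the root hash computed at upload time and recomputes it over any query result, detecting both tampered and dropped tuples. This is a minimal generic sketch, not any cited scheme; the leaf encoding is an assumption.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash each tuple, then pair-wise hash up to a single root.
    The last node is duplicated on levels of odd length."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

tuples = [b"row1", b"row2", b"row3", b"row4"]
root = merkle_root(tuples)  # kept by the client at upload time

assert merkle_root(tuples) == root                                # intact result
assert merkle_root([b"row1", b"rowX", b"row3", b"row4"]) != root  # tampered tuple
assert merkle_root(tuples[:3]) != root                            # dropped tuple
```

In practice the server returns a logarithmic-size verification path rather than all tuples, but the correctness and completeness checks reduce to the same root comparison.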
Relational databases are well suited to online transaction processing (OLTP) activities because they guarantee that a transaction completes successfully before its data is stored in the database. In addition, relational databases have strong security features and a powerful querying engine. Over the last several years, NoSQL databases have soared in popularity, mainly for two reasons: the increasing amount of data being stored and access to elastic cloud computing resources. Disk solutions have become much cheaper and faster, which has led to companies storing more data than ever before; it is not uncommon today for a company to have petabytes of data. Typically, data at this scale is used for analytics, data mining, pattern recognition, machine learning, and similar tasks. Companies can leverage the cloud to provision many servers, distribute workloads across many nodes to speed up the analysis, and then deprovision all of the servers when the analysis is finished.
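The OLTP guarantee above (a transaction either commits entirely or not at all) can be demonstrated with Python's built-in sqlite3. The account schema and the `transfer` helper are hypothetical examples, but the rollback behavior is the standard transactional semantics.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    """All-or-nothing money transfer: the `with conn:` block commits on
    success and rolls back automatically on an exception."""
    try:
        with conn:
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
            row = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                               (src,)).fetchone()
            if row[0] < 0:
                raise ValueError("insufficient funds")
    except ValueError:
        pass  # transaction was rolled back; balances are untouched

transfer(conn, "alice", "bob", 500)  # fails and rolls back
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 50}
```

Neither half of the failed transfer is visible afterward; this is the property that makes relational stores the default for OLTP workloads.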
Two of the main customer objections actually pose potential opportunities for channel firms if they are handled well. Integration concerns about tying cloud into existing infrastructure and worries about data portability need not be deal breakers; instead, they provide a chance for the solution provider to flex their value, knowledge, and skill set. Being able to explain to a potential customer in detail which party ultimately "owns" data placed in the cloud, particularly in a situation where a cloud provider might go out of business or the customer falls behind on payments, demonstrates the channel firm's knowledge of cloud-based models. Data portability moves from a sales obstacle to overcome to a value-added service to sell. Likewise with integration. For channel firms selling cloud today, the greatest source of revenue after the sale lies in integration work: cloud to on-premise and cloud to cloud. A proven track record here with existing customers can serve as a blueprint or proof point to persuade more reluctant customer prospects, much as case studies are used.
Commonly, agility, delivery speed, and cost savings entice companies to public clouds. Public cloud, for example, can free a company from having to invest in consolidating, expanding, or building a new data center when it outgrows a current facility, Kavis says. IT really doesn't "want to go back to the well and ask management for another several million dollars," thus it dives into the public cloud, he says. Stadtmueller says the public cloud is the least expensive way to access compute and storage capacity. Plus, it's budget-friendly because up-front infrastructure capital investments aren't required. Businesses can instead align expenses with their revenue and grow capacity as needed. This is one reason why numerous startups choose all-public-cloud approaches.
Lean agile development methodologies and the cloud model complement each other very well. Cloud services take pride in meeting user requirements rapidly, delivering applications whenever and to whatever extent they are needed. Agile methods give high credence to user collaboration in requirements discovery. The lean agile system of software development aims to break down project requirements into small and achievable segments. This approach guarantees user feedback on every task of the project. Segments can be planned, developed, and tested individually to maintain high-quality standards without any major bottlenecks. The development stage of every component thus becomes a single "iteration" process. Moreover, lean agile software methods place huge emphasis on developing a collaborative relationship between application developers and end users. The entire development process is transparent to the end user, feedback is sought at all stages of development, and the necessary changes are made accordingly then and there. Using lean agile development in conjunction with the cloud paradigm provides a highly interactive and collaborative environment. The moment developers finalize a feature, they can push it as a cloud service; users can review it instantly and provide valuable feedback. Thus, a lengthy feedback cycle can be eliminated, reducing the probability of misstated or misunderstood requirements. This considerably curtails the time and effort for the software development organization while increasing end-user satisfaction. Following the lean agile approach of demand-driven production, end users' needs are integrated in a more cohesive and efficient manner with software delivery as cloud services. This approach stimulates and sustains a good amount of innovation, requirement discovery, and validation in cloud computing.
The main aim of this work is to present the differences between grid computing and cloud computing. Cloud computing has many advantages over grid computing, but clouds will not replace grids, just as grids have not replaced capability HPC over the last 10 years, as some predicted. All three technologies have their place. What we will see over the next couple of years is that these different computing models will increasingly grow together with the WWW and the Internet, until all these resources become one global infrastructure for information, knowledge, computation, and communication. We think it is more likely that grids will be re-branded or merge into cloud computing; grid computing helped create the technological reality that made clouds possible. And when it comes to IaaS (infrastructure as a service), we think that in five years something like 80 to 90 percent of computation could be cloud-based. In a word, the concept of cloud computing is becoming more and more popular. Cloud computing is now in its beginning stage, and all kinds of companies are providing all kinds of cloud computing services, from software applications to network storage and mail filtering. We believe cloud computing will become a main technology in our information life; the cloud has met all the conditions for this. The dream of grid computing will be realized by cloud computing. It will be a great event in IT history.
A common option for reducing the operating costs of only sporadically used IT infrastructure, such as in the case of the "warm standby", is cloud computing. As defined by NIST, cloud computing provides the user with simple, direct access to a pool of configurable, elastic computing resources (e.g., networks, servers, storage, applications, and other services) with a pay-per-use pricing model. More specifically, this means that resources can be quickly (de-)provisioned by the user with minimal provider interaction and are billed on the basis of actual consumption. This pricing model makes cloud computing a well-suited platform for hosting a replication site, offering high availability at a reasonable price. Such a warm standby system, with infrastructure resources (virtual machines, images, etc.) located and updated in the cloud, is herein referred to as a "Cloud-Standby-System". The relevance and potential of this cloud-based option for hosting replication systems becomes even more obvious in light of the current market situation: only fifty percent of small and medium enterprises currently practice BCM with regard to their IT services, while downtime costs for them amount to $12,500-23,000 per day.
In other cases, the loss of control over where your virtual IT infrastructure resides can open the way to other problematic situations. More precisely, the geographical location of a datacenter generally determines the regulations that apply to the management of digital information. As a result, depending on the specific location of data, some sensitive information can be made accessible to government agencies or even be considered outside the law if processed with specific cryptographic techniques. For example, the USA PATRIOT Act provides the U.S. government and other agencies with virtually limitless powers to access information, including information belonging to any company that stores it in U.S. territory. Finally, existing enterprises that have large computing infrastructures or large installed bases of software do not simply want to switch to public clouds; they want to use their existing IT resources and optimize their revenue. All these aspects make the use of a public computing infrastructure not always possible. Yet the general idea supported by the cloud computing vision can still be attractive. More specifically, having an infrastructure able to deliver IT services on demand can still be a winning solution, even when implemented within the private premises of an institution. This idea led to the diffusion of private clouds, which are similar to public clouds, but whose resource-provisioning model is limited to the boundaries of an organization.