Security groups provide IP address filtering. Each group contains rules that define the characteristics of the incoming and outgoing traffic to be allowed or denied, such as the protocol, the port range, and the source of the communication. CloudStack provides a default security group with predefined rules: deny all incoming traffic and allow all outgoing traffic. Users can modify this default group or define new ones as their requirements dictate. An instance is associated with a security group (the default one or any user-created group) when it is created, and this association cannot be changed afterwards. The associated security group defines the traffic that will be allowed or denied for that instance. Its rules, however, can be modified at any point in time, and the new configuration governs the traffic from that point on, irrespective of the VM's state (stopped or started).
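The rule-matching behavior described above can be sketched in a few lines; this is a minimal, hypothetical Python model (the class and method names are illustrative, not the CloudStack API), assuming the default deny-all-inbound / allow-all-outbound semantics:

```python
import ipaddress
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Rule:
    protocol: str    # e.g. "tcp", "udp", "icmp"
    start_port: int
    end_port: int
    cidr: str        # source CIDR allowed to connect

@dataclass
class SecurityGroup:
    name: str
    ingress: list = field(default_factory=list)  # empty list = deny all inbound

    def allow(self, protocol, start_port, end_port, cidr):
        # Rules can be added or removed at any time; the new set takes
        # effect immediately for every instance in the group.
        self.ingress.append(Rule(protocol, start_port, end_port, cidr))

    def permits(self, protocol, port, source_ip):
        # Inbound traffic is allowed only if some ingress rule matches;
        # outbound traffic is always allowed in this model.
        addr = ipaddress.ip_address(source_ip)
        return any(
            r.protocol == protocol
            and r.start_port <= port <= r.end_port
            and addr in ipaddress.ip_network(r.cidr)
            for r in self.ingress
        )

default_sg = SecurityGroup("default")           # denies all inbound initially
default_sg.allow("tcp", 22, 22, "10.0.0.0/8")   # later opened for SSH
```

With this sketch, `default_sg.permits("tcp", 22, "10.1.2.3")` is true, while port 80 or a source outside `10.0.0.0/8` is denied.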
A health care system is a smart information system that can provide people with basic health monitoring and physiological index analysis services. Without Internet-based technologies, it is hard to share data with isolated professional medical services such as PACSs (picture archiving and communication systems), EHRs (electronic health records), and HISs (hospital information systems). Not long ago, this kind of system was usually implemented in a traditional MIS (management information system) mode, which is not capable of delivering sufficient health care services on a uniform platform, even though it may exploit several isolated Internet technologies. Currently, cloud computing, as an emerging state-of-the-art information technology (IT) platform, can provide economical and on-demand services for customers. It offers high performance and transparency to end users, which can fulfill the flexibility and scalability demands of service-oriented systems. Such a platform can meet the infrastructure demand of the health care system. With the rapid progress of cloud capacity, an increasing number of applications and services are provided in an anything-as-a-service (XaaS) mode (e.g., security as a service, testing as a service, database as a service, and even everything as a service). Google Docs and Amazon S3 are familiar examples.
Hadoop MapReduce and the LexisNexis HPCC platform are both scalable architectures directed towards data-intensive computing solutions. Each of these system platforms has strengths and weaknesses, and their overall effectiveness for any application problem or domain is subjective in nature and can only be determined through careful evaluation of application requirements versus the capabilities of the solution. Hadoop is an open source platform, which increases its flexibility and adaptability to many problem domains, since new capabilities can be readily added by users adopting this technology. However, as with other open source platforms, reliability and support can become issues when many different users are contributing new code and changes to the system. Hadoop has found favor with many large Web-oriented companies, including Yahoo!, Facebook, and others, where data-intensive computing capabilities are critical to the success of their business. Amazon has implemented new cloud computing services using Hadoop as part of its EC2 offering, called Amazon Elastic MapReduce. A company called Cloudera was recently formed to provide training, support, and consulting services to the Hadoop user community and to provide packaged and tested releases that can be used in the Amazon environment. Although many different application tools have been built on top of the Hadoop platform, such as Pig, HBase, and Hive, these tools tend not to be well integrated, offering different command shells, languages, and operating characteristics that make it more difficult to combine capabilities in an effective manner.
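The MapReduce programming model underlying both platforms can be illustrated with the classic word-count example; this is a single-process Python simulation of the map, shuffle, and reduce phases, not Hadoop code:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Mapper: emit a (word, 1) pair for every word in the input split.
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle/sort: group all intermediate values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts collected for each word.
    return {word: sum(counts) for word, counts in groups.items()}

splits = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(s) for s in splits)))
```

In a real Hadoop job the mappers and reducers run in parallel across the cluster, with the framework handling the shuffle between them; the dataflow, however, is exactly this.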
As a result of the characteristics of homomorphic encryption, carrying out operations directly on ciphertexts in the cloud, and recovering the results afterwards, guarantees the security of cloud computing data and avoids the efficiency problems of the traditional approach, in which data must be decrypted before processing. Therefore, a practical fully homomorphic encryption scheme derived from the Gentry cryptosystem, using merely basic modular arithmetic, is proposed to ensure privacy preservation in cloud storage; it realizes the need for ciphertext retrieval and other processing on untrusted servers. Homomorphic encryption systems have evolved significantly in the last couple of years since the design of the first fully homomorphic encryption system. However, all those systems are still too impractical for real-world applications. This makes them an interesting problem not only from an educational viewpoint but also from an industrial one, and it will lead to the design of more efficient homomorphic encryption systems in the following years. Naturally, since there are applications for which somewhat homomorphic encryption systems would be powerful enough, we may be closer to that goal than it seems.
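The "basic modular arithmetic" flavor of such schemes can be illustrated with a toy symmetric variant in the style of the DGHV integers scheme: a bit is hidden in the parity of the ciphertext modulo a secret odd key, and both addition (XOR) and multiplication (AND) of plaintext bits survive the corresponding ciphertext operations while the noise stays small. This is an illustration only, with deliberately insecure toy parameters, not the scheme proposed in the text:

```python
import random

p = 10007  # secret odd key (toy size; real schemes use far larger parameters)

def encrypt(bit):
    # c = bit + 2r + p*q: the bit sits in the parity of (c mod p),
    # the 2r term is noise, and p*q masks the key.
    q = random.randint(1, 50)
    r = random.randint(0, 10)
    return bit + 2 * r + p * q

def decrypt(c):
    # Strip the mask (mod p), then read the parity of the remaining noise term.
    return (c % p) % 2

a, b = encrypt(1), encrypt(0)
assert decrypt(a + b) == (1 ^ 0)   # ciphertext addition = XOR of the bits
assert decrypt(a * b) == (1 & 0)   # ciphertext multiplication = AND of the bits
```

The noise doubles with addition and roughly squares with multiplication, which is exactly why such schemes are only "somewhat" homomorphic until bootstrapping-style techniques refresh the ciphertexts.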
Case in point? Six in 10 channel firms say that cloud has generally strengthened their customer relationships, with just 15% claiming it has weakened them and roughly a quarter saying that their client bonds have remained the same. This is encouraging news given that many in the channel have publicly feared that cloud would drive a wedge between them and their customers. There has been rampant apprehension about such ill effects as a resurgence in vendor direct sales and end-user customers choosing a self-service model for their IT solutions, i.e., procuring SaaS applications over the Internet. And while both of these trends are happening to a certain extent, CompTIA data suggest not at such dire expense to most of the channel, especially those that have reached a high level of cloud maturity today and intend to remain committed. That said, not all channel firms that adopt cloud will engender more goodwill with customers; some may simply have a customer set that is not cloud-friendly, others may not gain sufficient expertise to provide value, etc.
Abstract: Nowadays, information systems play such an effective role in organizations that the organizations cannot be imagined without these systems. Cloud computing is the practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer. The "cloud" in cloud computing originated from the habit of drawing the Internet as a fluffy cloud in network diagrams. No wonder the most popular meaning of cloud computing refers to running workloads remotely over the Internet in a commercial provider's data center, the so-called "public cloud" model. Cloud computing, often referred to as simply "the cloud," is the delivery of on-demand computing resources, everything from applications to data centers, over the Internet on a pay-for-use basis; workloads are moved to the cloud, run in the cloud, and stored in the cloud. Cloud computing initiatives could affect enterprises within two to three years, as the technology has the potential to significantly change IT. In short, cloud computing can be identified as a technology that uses the Internet to deliver its services.
Commonly, agility, delivery speed, and cost savings entice companies to public clouds. The public cloud, for example, can free a company from having to invest in consolidating, expanding, or building a new data center when it outgrows a current facility, Kavis says. IT really doesn't "want to go back to the well and ask management for another several million dollars," so it dives into the public cloud, he says. Stadtmueller says the public cloud is the least expensive way to access compute and storage capacity. Plus, it's budget-friendly because up-front infrastructure capital investments aren't required. Businesses can instead align expenses with their revenue and grow capacity as needed. This is one reason why numerous startups choose all-public-cloud approaches.
techniques that support not only data privacy, but also the privacy of the accesses that users make on such data. This problem has been traditionally addressed by Private Information Retrieval (PIR) proposals, which provide protocols for querying a database that prevent the external server from inferring which data are being accessed. PIR solutions, however, have high computational complexity, and alternative approaches have been proposed. These novel approaches rely on the Oblivious RAM structure (e.g., [33,47,48]) or on the definition of specific tree-based data structures combined with a dynamic allocation of the data (e.g., [29,30]). The goal is to support access to a collection of encrypted data while preserving access and pattern confidentiality, meaning that an observer can infer neither what data are accessed nor whether two accesses aim at the same data. Besides protecting access and pattern confidentiality, it is also necessary to design mechanisms for protecting the integrity and authenticity of the computations, that is, to guarantee the correctness, completeness, and freshness of query results. Most of the techniques that can be adopted for verifying the integrity of query results operate on a single relation and are based on the idea of complementing the data with additional data structures (e.g., Merkle trees) or of introducing into the data collection fake tuples that can be efficiently checked to detect incorrect or incomplete results (e.g., [41,46,50–52]). Interesting aspects that need further analysis are related to the design of efficient techniques able to verify the completeness and correctness of the results of complex queries (e.g., join operations among multiple relations, possibly stored and managed by different cloud servers with different levels of trust).
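The Merkle-tree idea mentioned above can be made concrete in a few lines: the client keeps only the root hash, and any tampering with the outsourced tuples changes the root and is therefore detected. A minimal sketch (illustrative, not any specific paper's construction):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Build the tree bottom-up; the root authenticates the whole collection.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

tuples = [b"row1", b"row2", b"row3", b"row4"]
root = merkle_root(tuples)                  # retained by the client/verifier

# Integrity check: any server-side modification changes the root.
assert merkle_root([b"row1", b"row2", b"row3", b"row4"]) == root
assert merkle_root([b"row1", b"tampered", b"row3", b"row4"]) != root
```

In practice the server also returns a logarithmic-size authentication path per result tuple, so the client can verify individual query answers without re-hashing the whole relation.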
A common option for reducing the operating costs of only sporadically used IT infrastructure, such as in the case of the "warm standby", is cloud computing. As defined by NIST, cloud computing provides the user with simple, direct access to a pool of configurable, elastic computing resources (e.g., networks, servers, storage, applications, and other services) with a pay-per-use pricing model. More specifically, this means that resources can be quickly (de-)provisioned by the user with minimal provider interaction and are billed on the basis of actual consumption. This pricing model makes cloud computing a well-suited platform for hosting a replication site offering high availability at a reasonable price. Such a warm standby system, with infrastructure resources (virtual machines, images, etc.) located and updated in the cloud, is herein referred to as a "Cloud-Standby-System". The relevance and potential of this cloud-based option for hosting replication systems becomes even more obvious in light of the current market situation: only fifty percent of small and medium enterprises currently practice BCM with regard to their IT services, while downtime costs amount to $12,500-23,000 per day for them.
The data usage charges in conventional models are fairly straightforward and are based on bandwidth and online space consumption. In the cloud, the same does not hold, as the resources used differ at different points in time due to the scalable nature of the application. Hence, given the pool of resources available, the cost analysis is a lot more complicated. The cost estimate is now in terms of the number of instantiated virtual machines rather than physical servers; that is, the instantiated VM has become the unit of cost. This resource pool and its usage vary from service model to service model. For SaaS cloud providers, the cost of developing scalability or multi-tenancy within their offering can be very substantial. This includes the alteration or redesign and development of software that was initially built for a conventional model, performance and security enhancement for concurrent user access (similar to synchronisation and read-write problems), and dealing with the complexities induced by these changes. On the other hand, SaaS providers need to weigh the cost of providing multi-tenancy against the cost savings it yields, such as reduced overhead through amortisation and a reduced number of on-site software licences. Therefore, the charging model must be tailored strategically in order to increase the profitability and sustainability of SaaS cloud providers [7].
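The "VM as the unit of cost" idea can be sketched as a toy billing function; the rates, instance types, and volume discount below are invented for illustration (the discount standing in, loosely, for multi-tenancy savings passed on to tenants), not any provider's actual pricing:

```python
def monthly_charge(vm_hours_by_type, rates, discount_threshold=1000, discount=0.10):
    """Toy pay-per-use bill: VM-hours are the unit of cost, with a volume
    discount applied once total consumption crosses a threshold."""
    total = sum(vm_hours_by_type[t] * rates[t] for t in vm_hours_by_type)
    hours = sum(vm_hours_by_type.values())
    if hours > discount_threshold:
        total *= (1 - discount)
    return round(total, 2)

# 720 h of a "small" VM at $0.05/h plus 500 h of a "large" VM at $0.20/h:
bill = monthly_charge({"small": 720, "large": 500}, {"small": 0.05, "large": 0.20})
```

Contrast this with the conventional model, where the bill would be a flat function of bandwidth and storage regardless of how many instances were running at any moment.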
Internet services are the most popular applications, with huge numbers of users. Websites such as Facebook, Yahoo, and Google are accessed by millions every day, as a result of which a huge volume of valuable data (in terabytes) is generated, which can be used to improve online advertising strategies and user satisfaction. Storage, real-time capture, and analysis of that data are general needs of all such applications. To address these problems, some cloud computing strategies have recently been implemented. Cloud computing is a style of computing where virtualized resources are provided to customers as a dynamically scalable service over the Internet. The cloud refers to the data center hardware and software that a client requests from remotely hosted applications, often in the form of data stores. Companies are using these infrastructures to cut costs by eliminating the need for physical hardware, which allows them to outsource data and on-demand computations. The operation of large-scale computer data centers is the main focus of cloud computing. These data centers benefit from economies of scale, allowing for decreases in the cost of bandwidth, operations, electricity, and hardware.
The Heartbeat Service periodically collects dynamic performance information about the node and publishes this information to the membership service in the Aneka Cloud. These data are collected by the index node of the Cloud, which makes them available to services such as reservations and scheduling in order to optimize the use of a heterogeneous infrastructure. As already discussed, basic information about memory, disk space, CPU, and operating system is collected. Moreover, additional data are pulled into the "alive" message, such as information about the software installed in the system and any other useful information. More precisely, the infrastructure has been designed to carry any type of data that can be expressed by means of text-valued properties. As previously noted, the information published by the Heartbeat Service is mostly concerned with the properties of the node. A specific component, called the Node Resolver, is in charge of collecting these data and making them available to the Heartbeat Service. Aneka provides different implementations of this component in order to cover a wide variety of hosting environments. A variety of operating systems are supported with different implementations of the PAL, and different node resolvers allow Aneka to capture other types of data that do not strictly depend on the hosting operating system. For example, the retrieval of the public IP of the node differs between physical machines and virtual instances hosted in the infrastructure of an IaaS provider such as EC2 or GoGrid. In virtual deployments, a different node resolver is used so that all other components of the system can work transparently.
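The Node Resolver pattern described above can be sketched as an abstract interface with per-environment implementations; the class and property names here are illustrative, not the actual Aneka API:

```python
from abc import ABC, abstractmethod
import platform
import socket
import urllib.request

class NodeResolver(ABC):
    """Collects node properties for the Heartbeat Service."""
    @abstractmethod
    def resolve(self) -> dict: ...

class PhysicalNodeResolver(NodeResolver):
    # On a physical machine the properties come from the local system.
    def resolve(self):
        return {"os": platform.system(), "hostname": socket.gethostname()}

class Ec2NodeResolver(NodeResolver):
    # On an IaaS instance the public IP typically comes from the provider's
    # metadata endpoint rather than from the local network stack.
    METADATA_URL = "http://169.254.169.254/latest/meta-data/public-ipv4"

    def resolve(self):
        ip = urllib.request.urlopen(self.METADATA_URL, timeout=2).read().decode()
        return {"os": platform.system(), "public_ip": ip}

def heartbeat_payload(resolver: NodeResolver) -> dict:
    # Everything is carried as text-valued properties, as described above.
    return {key: str(value) for key, value in resolver.resolve().items()}

payload = heartbeat_payload(PhysicalNodeResolver())
```

Because the Heartbeat Service depends only on the abstract interface, swapping the resolver lets the rest of the system work transparently across physical and virtual deployments, which is the design point the text makes.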
“Clouds are about ecosystems, about large collections of interacting services including partners and third parties, about inter-cloud communication and sharing of information through such semantic frameworks as social graphs.”

Transformation vs utility

This, he adds, is clearly business transformational, whereas “computing services that are delivered as a utility from a remote data centre” are not. The pioneers in VANS/EDI methods – which are now migrating into modern cloud systems in offerings from software firm SAP and its partners, for example – were able to set up basic trading data exchange networks, but the cloud transformation now is integrating, in real-time, the procurement, catalogue, invoicing and other systems across possibly overlapping and much wider business communities.
In “Cloud Migration: A Case Study of Migrating an Enterprise IT System to IaaS”, Khajeh-Hosseini et al. (2010a) discussed third-party IaaS infrastructure. According to them, if third-party infrastructure is introduced properly, it presents several opportunities for enterprises to improve the state of affairs and the costs for both support staff and customers. It also eases cash-flow management for capital expenditure, since the IaaS pricing model offers simple upfront and monthly billing, and it lessens the volatility of expenditure on electricity. These are benefits compared with an in-house data center, where it would be expensive to acquire components and where cash flow from clients can be slow and difficult. Operational costs also go down, since the enterprise no longer runs its own data center and the third-party IaaS provider is responsible for it. This setup likewise simplifies matters for the finance department of the organization, reducing the administrative burden. Third-party IaaS infrastructure solutions come with flexible pricing models, which help in managing income for customers and for sales and marketing staff (Khajeh-Hosseini et al., 2010).
This technique provides an approach whereby the document itself can preserve its privacy and security even when being exchanged over unsecured networks. Security components such as storage, access, and usage control – which companies would normally deploy an information system to be responsible for – are encapsulated (in the object-oriented sense) within the document itself, yielding an autonomic document architecture for Enterprise Digital Rights Management (E-DRM). This applies not only to files exchanged over uncontrollable networks such as cloud computing systems, but also to files carried on USB flash drives.
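The encapsulation idea can be sketched as a class that carries its own encrypted payload, access list, and usage counter; this is a minimal illustration of the architectural pattern with a deliberately insecure toy cipher, not the actual E-DRM design:

```python
import hashlib
from dataclasses import dataclass, field

def _keystream(key: bytes, n: int) -> bytes:
    # Toy XOR keystream derived from the key (illustration only, not secure).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

@dataclass
class SelfProtectingDocument:
    """Storage, access, and usage controls travel inside the object, so the
    document defends itself even on an untrusted network or a USB drive."""
    _ciphertext: bytes
    _acl: dict = field(default_factory=dict)   # user -> remaining reads

    @classmethod
    def seal(cls, plaintext: bytes, key: bytes, acl: dict):
        ks = _keystream(key, len(plaintext))
        return cls(bytes(a ^ b for a, b in zip(plaintext, ks)), dict(acl))

    def open(self, user: str, key: bytes) -> bytes:
        if self._acl.get(user, 0) <= 0:        # usage control: counted reads
            raise PermissionError(f"{user} may not read this document")
        self._acl[user] -= 1
        ks = _keystream(key, len(self._ciphertext))
        return bytes(a ^ b for a, b in zip(self._ciphertext, ks))

doc = SelfProtectingDocument.seal(b"quarterly report", b"k1", {"alice": 1})
assert doc.open("alice", b"k1") == b"quarterly report"
```

After Alice's single permitted read, any further `open` call, by her or by an unlisted user, raises `PermissionError`: the policy is enforced by the document object itself rather than by a surrounding information system.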
Operating a web site that requires database access, supports considerable traffic, and possibly connects to enterprise systems requires complete control of one or more servers to guarantee responsiveness to user requests. Servers supporting the web site must be hosted in a data center with access from the public Internet. Traditionally, this has been achieved by renting space for physical servers in a hosting center operated by a network provider far from the enterprise's internal systems. With cloud computing, this can now be done by renting a virtual machine in a cloud hosting center. The web site can make use of open source software, such as Apache HTTP Server, MySQL, and PHP (the so-called LAMP stack), or a Java™ stack, all of which is readily available. Alternatively, enterprises might prefer to use commercially supported software, such as WebSphere® Application Server and DB2®, on either Linux® or Windows operating systems. All
Starting in 1958 the agency, then known as ARPA, was responsible for carrying out research and development on projects at the cutting edge of science and technology. While these typically dealt with national security–related matters, the agency never felt bound by military projects alone. One outcome of this view was significant work on general information technology and computer systems, starting with pioneering research on what was called time-sharing. The first computers worked on a one user–one system principle, but because individuals use computers intermittently, this wasted resources. Research on batch processing helped to make computers more efficient because it permitted jobs to queue up over time and thereby shrunk nonusage time. Time-sharing expanded this by enabling multiple users to work on the same system at the same time. DARPA kick-started time-sharing with a grant to fund an MIT-based project that, under the leadership of J. C. R. Licklider, brought together people from Bell Labs, General Electric, and MIT (Waldrop 2002). With time-sharing was born the principle of one system serving multiple users, one of the foundations of cloud computing. The thirty or so companies that sold access to time-sharing computers, including such big names as IBM and General Electric, thrived in the 1960s and 1970s. The primary operating system for time-sharing was Multics (for Multiplexed Information and Computing Service), which was designed to operate as a computer utility modeled after telephone and electrical utilities. Specifically, hardware and software were organized in modules so that the system could grow by adding more of each required resource, such as core memory and disk storage. This model for what we now call scalability would return in a far more sophisticated form with the birth of the cloud computing concept in the 1990s, and then with the arrival of cloud systems in the next decade.
One of the key similarities, albeit at a more primitive level, between time-sharing systems and cloud computing is that they both offer complete operating environments to users. Time-sharing systems typically included several programming-language processors, software packages, bulk printing, and storage for files on- and offline. Users typically rented terminals and paid fees for connect time, for CPU (central processing unit) time, and for disk storage. The growth of the microprocessor and then the personal computer led to the end of time-sharing as a profitable business because these devices increasingly substituted, far more conveniently, for the work performed by companies that sold access to mainframe computers.
When working at scale, as you are likely to do with a private cloud implementation, strongly consider standardization of your server hardware models and purchasing groups of servers together. Not only does this approach guarantee you'll have compatible CPU generations and identical hardware, it makes your deployment process simpler. You can use tools like Autodeploy and host profiles to deploy and redeploy your servers. Likewise, using DHCP rather than static IP addressing schemes for vSphere servers becomes more appealing. vSphere 5.1 with Autodeploy also allows you to deploy stateless vSphere hosts, where each node is booted from the network using a Trivial File Transfer Protocol (TFTP) server. The host downloads the vSphere hypervisor at boot time and runs it in RAM; then it downloads its configuration from the Autodeploy server.
The resource allocation in a cloud environment is an important and challenging research topic. Verma et al. formulate the problem of dynamic placement of applications in virtualized heterogeneous systems as a continuous optimization: the placement of VMs at each time frame is optimized to minimize resource consumption under certain performance requirements. Chaisiri et al. study the trade-off between advance reservation and on-demand resource allocation, and propose a VM placement algorithm based on stochastic integer programming. The proposed algorithm minimizes the total cost of resource provision in infrastructure-as-a-service (IaaS) clouds. Wang et al. present a virtual appliance-based automatic resource provisioning framework for large virtualized data centers. Their framework can dynamically allocate resources to applications by adding or removing VMs on physical servers. Verma et al., Chaisiri et al., and Wang et al. study cloud resource allocation from the VM placement perspective. Bacigalupo et al. quantitatively compare the effectiveness of different techniques for response time prediction. They study different cloud services with different priorities, including urgent cloud services that demand cloud resources at short notice and dynamic enterprise systems that need to adapt to frequent changes in the workload. Based on these cloud services, the layered queuing network and historical performance models are quantitatively compared in terms of prediction accuracy. Song et al. present a resource allocation approach based on application priorities in multiapplication virtualized clusters. This approach requires machine learning to obtain the utility functions for applications and defines the application priorities in advance. Lin and Qi develop a self-organizing model to manage cloud resources in the absence of centralized management control. Nan et al. present optimal cloud resource allocation in a priority service scheme to minimize the resource cost. Appleby et al. present a prototype infrastructure that can dynamically allocate cloud resources for an e-business computing utility. Xu et al. propose a two-level resource management system with local controllers at the VM level and a global controller at the server level. However, they focus only on resource allocation among VMs within a cloud server [19,20].
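The VM placement problem underlying much of this literature is, at its core, a bin-packing problem: assign VMs with given resource demands to as few servers as possible without exceeding capacity. A first-fit-decreasing heuristic, shown below, is a simple stand-in for the optimization formulations surveyed above (the demands and capacity are invented for illustration):

```python
def place_vms(vm_demands, server_capacity):
    """First-fit-decreasing placement: sort VMs by demand (largest first)
    and put each one on the first server with enough remaining capacity,
    provisioning a new server only when none fits."""
    servers = []      # each entry is the remaining capacity of one server
    placement = {}    # vm name -> server index
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(servers):
            if demand <= free:
                servers[i] -= demand
                placement[vm] = i
                break
        else:         # no existing server fits: provision a new one
            servers.append(server_capacity - demand)
            placement[vm] = len(servers) - 1
    return placement, len(servers)

demands = {"vm1": 4, "vm2": 3, "vm3": 2, "vm4": 7}
placement, used = place_vms(demands, server_capacity=8)
```

Real formulations such as the stochastic integer programs cited above add reservation pricing, multiple resource dimensions, and uncertainty in demand, but the combinatorial core is this packing decision.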
Services such as Gmail, Google Drive, Google Calendar, Picasa, and Google Groups are free of charge for individual users and available for a fee to organizations. These services run on a cloud and can be invoked from a broad spectrum of devices, including mobile ones such as iPhones, iPads, and BlackBerrys, as well as laptops and tablets. The data for these services are stored in data centers on the cloud. The Gmail service hosts emails on Google servers and provides a Web interface to access them, as well as tools for migrating from Lotus Notes and Microsoft Exchange. Google Docs is Web-based software for building text documents, spreadsheets, and presentations. It supports features such as tables, bullet points, basic fonts, and text sizes; it allows multiple users to edit and update the same document and to view the history of document changes; and it provides a spell checker. The service allows users to import and export files in several formats, including Microsoft Office, PDF, text, and OpenOffice extensions.