When a customer has been registered successfully and their security restrictions have been specified, the system is ready for use. The customer's requests to allocate resources are sent to the resource allocation manager unit (RAMU). The RAMU then requests the security restrictions from the control database and the current resource allocation map from the resource allocation map database in order to place the customer's resources in a suitable location. Whenever a customer releases a resource, the resource allocation map database is updated immediately. In such a setup, an attacker has no opportunity to exploit the resource allocation mechanism, while the benefits of multi-tenancy are preserved. A further benefit of this design is that the control database can adopt a different resource allocation method without any change to the system model. This gives the cloud provider the opportunity to define security restrictions based on its business strategy, and to implement its own resource allocation methods if needed. Moreover, changing the resource allocation method periodically could be a security best practice, as it makes the allocation scheme harder to predict.
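The flow above can be sketched in code. This is a minimal illustration, not the paper's implementation: the class and database names are hypothetical, and the two "databases" are stood in by plain dictionaries (restrictions map each customer to tenants they must not share a host with; the allocation map records which tenants occupy each host). The pluggable strategy models the text's point that the allocation method can be swapped or rotated without changing the rest of the system.

```python
import random

class ResourceAllocationManager:
    """Sketch of the RAMU described above (all names are illustrative)."""

    def __init__(self, control_db, allocation_map_db, strategy=None):
        self.control_db = control_db              # customer -> tenants to avoid
        self.allocation_map_db = allocation_map_db  # host -> set of resident tenants
        # The allocation method is pluggable, so the provider can define its
        # own method (or rotate methods periodically) without changing the model.
        self.strategy = strategy or self._random_eligible_host

    def _random_eligible_host(self, eligible):
        # Randomized placement makes the allocation harder to predict.
        return random.choice(sorted(eligible))

    def allocate(self, customer):
        restrictions = self.control_db.get(customer, set())
        # Consult the current allocation map: a host is eligible only if it
        # holds no tenant the restrictions forbid co-locating with.
        eligible = {
            host for host, tenants in self.allocation_map_db.items()
            if not (tenants & restrictions)
        }
        if not eligible:
            raise RuntimeError("no host satisfies the security restrictions")
        host = self.strategy(eligible)
        self.allocation_map_db[host].add(customer)  # update the map
        return host

    def release(self, customer, host):
        # The allocation map is updated immediately on release, as required.
        self.allocation_map_db[host].discard(customer)
```

A customer restricted from co-residing with a given tenant will only ever be placed on hosts free of that tenant, which is the anti-co-residency guarantee the text describes.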
Cloud computing has emerged as a way for organizations to reduce costs while obtaining on-demand resources and computation without having to build an IT infrastructure. Services such as Amazon Web Services (AWS) and Microsoft Azure let organizations instantly provision and de-provision virtual machines (VMs) depending on their needs, paying only for what they use. To build the necessary environment, cloud service providers (CSPs) use virtualization technologies to maximize the value of their systems. Servers traditionally had to run alone on physical machines to prevent other services from interfering with them, but the downside was wasted resources. Virtualization enables full use of a physical host's resources by sharing them among the guest operating systems (OSs). Many organizations have already deployed private clouds on their own infrastructure or through third parties. Public clouds, however, offer an additional advantage that makes them extremely attractive: cost savings. To a cloud consumer, resources appear unlimited because all host machines are shared among different organizations, while CSPs can easily maximize the utilization of each physical machine. This computational model is known as multi-tenancy. Multi-tenancy and public clouds have a drawback, however. Host systems are shared among multiple tenants with different owners, and any one of them could be a malicious attacker or even a competitor. Someone trying to compromise an organization's business processes or data no longer needs to break through its traditional lines of defense; the traditional network perimeter no longer exists. An organization's systems now sit shoulder to shoulder with unknown tenants that may have malicious intentions.
The virtualization layer adds a new attack surface, where both the hypervisor and the resident VMs can be targets. The alarms this raises have stopped many organizations on their path to the cloud. This research paper aims to provide an overview of the security issues that this new computational model raises. The problem is approached from the general notion of cloud computing, through multi-tenancy, down to virtualization. The main goal is to explore and analyze the threats that virtualization and multi-tenancy combined bring to the cloud. More specifically, the avenues for compromising a VM or a hypervisor on a physical machine are analyzed, and recommendations are given on how to mitigate the risks.
ABSTRACT: Cloud computing is the most trending computational model in information technology. The environment uses the Internet to provide computing resources comprising software, servers, storage, and applications that can be accessed by any type of client. Cloud computing is the underlying model for services such as Infrastructure as a Service, Platform as a Service, and Software as a Service. Most of these services are offered on a pay-per-use basis, with very low or no startup cost to purchase hardware or software. This provides economic benefits to both users and service providers, since it reduces management costs and thus lowers the subscription price. Many users are, however, reluctant to subscribe to cloud computing services due to security concerns. To enable wider deployment of cloud computing, techniques such as secure multi-tenancy and resource isolation need to be advanced further.
5. Transmission of data: Data is frequently in transit between the consumer and the cloud: it is sent from the client to the cloud, and returned from the cloud to the client in response to queries. Encryption can protect data during transmission, but data is often transferred unencrypted because encrypting and decrypting it for each operation takes considerable time. During transfer, an attacker can eavesdrop on the communication, interrupt the transfer, or misuse the data. Homomorphic encryption allows data to be processed in encrypted form, although transfers can still be interrupted or altered. 6. Data breaches: As mentioned above, the cloud environment is shared by many users and organizations from various parts of the world, and their valuable data is stored in one place. Any breach or failure in the cloud may expose this sensitive data to users from other organizations sharing the same storage. Because of multi-tenancy, customers running different applications on virtual machines may share the same database, and any corruption event affecting it will affect everyone sharing that database. The "2011 Data Breach Investigations Report" found that hacking and malware are the most common causes of data breaches, involved in 50% and 49% of cases respectively. 7. Availability: Round-the-clock availability from anywhere is essential to the success of cloud computing. Most IT solutions must provide their services at all times because those services are critical; any interruption may cause financial loss and loss of consumer confidence. Attacks such as denial of service are typically used to deny the availability of data: if an attacker consumes all available resources, others cannot use them, which leads to denial of service or slow access to those resources.
In addition, customers who use a cloud service and are infected by a botnet could be used to attack the availability of other providers. Two strategies, hardening and redundancy, are mainly used to enhance the availability of the cloud system or the applications hosted on it.
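The idea of processing data in encrypted form, mentioned under item 5, can be illustrated with unpadded ("textbook") RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields an encryption of the product of the plaintexts. This toy sketch uses a deliberately tiny key and is not the scheme any real system should use; it only demonstrates the principle that a cloud can compute on data it cannot read.

```python
# Toy homomorphic processing with textbook RSA. NOT secure: the key is
# tiny and unpadded RSA leaks information; this is purely illustrative.
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (modular inverse)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

c1, c2 = encrypt(7), encrypt(3)
# The cloud multiplies the ciphertexts without ever seeing 7 or 3 ...
c_product = (c1 * c2) % n
# ... yet the data owner decrypts the product of the plaintexts.
assert decrypt(c_product) == 21
```

Fully homomorphic schemes extend this to arbitrary computation, which is what makes "processing data in an encrypted form" feasible in principle, at a substantial performance cost.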
Cloud computing describes a broad movement toward using wide area networks (WANs), such as the Internet, to enable interaction between information technology (IT) service providers of many types and consumers. Service providers are expanding their offerings to include the entire traditional IT stack, ranging from foundational hardware and platforms to application components, software services (SaaS), and whole software applications. Service-Oriented Architecture (SOA) is a business-centric IT architectural style that aims to use various computing services as basic building blocks to rapidly construct low-cost, high-performance applications. It improves the reusability of developed services, which may come from different service providers when a new business process arises, and also serves as a means of business collaboration between organizations. A business process is a collection of interrelated tasks or activities designed to deliver a particular result or complete a business goal. A business process can be broken down into several sub-processes that map to activities of the overall process.
For both the grid computing and cloud computing paradigms, there is a common need to define the methods through which consumers discover, request, and use resources provided by third-party central facilities, and to implement highly parallel and distributed computations that execute on those resources. Grids came into existence in the mid-1990s to address the execution of large-scale computational problems on a network of resource-sharing commodity machines, delivering computational power that at the time was affordable only with expensive supercomputers and large dedicated clusters. A grid typically comprises compute, storage, and network resources from multiple geographically distributed organizations; these resources are normally heterogeneous, with dynamic availability and capacity. The two primary concerns for grids were interoperability and security, since resources come from different administrative domains with varying global and local resource usage policies, as well as different hardware and software configurations and platforms. Most grids employ a batch-scheduled compute model with policies in place to enforce the identification of proper user credentials under which batch jobs will run, for accounting (e.g., the number of processors needed, the duration of the allocation) and security purposes. Condor is a centralized workload management system suited to computation-intensive jobs executed in local, closed grid environments. Its resource management mechanism is similar to that of UNIX (discretionary access control), with some additional modes of access beyond the traditional read and write permissions. Legion uses an object-oriented approach in which all files, services, and devices are treated as objects and are accessed through those objects' functions.
Each object can define its own access control policy, typically using access control lists and authentication mechanisms, in a default MayI function that is invoked before any other function of the object may be called. The Globus Grid Toolkit (GT) proposes mechanisms to translate users' grid identities into local identities (which can in turn be verified by resource providers using appropriate local access control policies) and also allows users' certificates to be delegated across many different sites.
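Legion's MayI pattern can be sketched as follows. This is an illustrative reconstruction, not Legion's actual API: the class names and the dictionary-based ACL are hypothetical, but the structure matches the text, with every object carrying its own access control list and a MayI-style check gating every other operation.

```python
class GuardedObject:
    """Every object carries its own ACL; a MayI check runs before any
    other function of the object is invoked (names are illustrative)."""

    def __init__(self, acl):
        self._acl = acl  # operation name -> set of authorized identities

    def may_i(self, identity, operation):
        # The default MayI function: consult this object's own ACL.
        return identity in self._acl.get(operation, set())

    def invoke(self, identity, operation, *args):
        # All access is funneled through the MayI gate.
        if not self.may_i(identity, operation):
            raise PermissionError(f"{identity} may not {operation}")
        return getattr(self, operation)(*args)


class GridFile(GuardedObject):
    """A file modeled as an object, per the Legion approach."""

    def __init__(self, acl, data=""):
        super().__init__(acl)
        self._data = data

    def read(self):
        return self._data

    def write(self, text):
        self._data = text
```

Because the policy lives inside the object itself, each file, service, or device can enforce a different access control list without any central authority mediating every call.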
Providing Infrastructure as a Service thus essentially means that the cloud provider assembles the building blocks for these services, including the computing, networking, and storage hardware. These resources are exposed to consumers through a request management system, which in turn is integrated with an automated provisioning layer. The cloud system also needs to meter usage and bill the customer under various chargeback models. Virtualization enables the provider to leverage and pool resources in a multi-tenant model. The resource pooling provided by virtualization, combined with modern clustering infrastructure, enables efficient use of IT resources to deliver high availability and scalability, increase agility, optimize utilization, and support a multi-tenancy model.
The idea of multi-tenancy, or many tenants sharing resources, is fundamental to cloud computing. Service providers can build network infrastructures and data architectures that are computationally efficient, highly scalable, and easily extended to serve the many customers that share them. Multi-tenancy spans the layers at which services are provided. In IaaS, tenants share infrastructure resources such as hardware, computation servers, and data storage devices. With SaaS, tenants use the same application (e.g., Salesforce.com), which means that data from multiple tenants is likely stored in the same database and may share the same tables. When it comes to security, the risks of multi-tenancy must be addressed at every layer. The next few sections examine how this can be accomplished for shared hardware and application infrastructure.
The main attributes of cloud computing are as follows. Multi-tenancy (shared resources): cloud computing is based on a business model in which resources are shared (i.e., multiple users use the same resource) at the network, host, and application levels. Massive scalability: cloud computing provides the ability to scale to tens of thousands of systems, as well as to massively scale bandwidth and storage space. Elasticity: users can rapidly increase and decrease their computing resources as needed. Pay as you use: users pay only for the resources they actually use, and only for the time they require them. Self-provisioning of resources: users provision resources themselves, such as additional systems (processing capability, software, storage) and network resources.
Cybercrime's effects are felt throughout the Internet, and cloud computing is an enticing target for many reasons. Providers such as Google, Microsoft, and Amazon have the infrastructure to deflect and survive cyber-attacks, but not every cloud has such capability. If a cyber-criminal can identify the provider whose vulnerabilities are easiest to exploit, that provider becomes a highly visible target. If not all cloud providers supply adequate security measures, those clouds will become high-priority targets for cyber-criminals. By the inherent nature of their architecture, clouds offer the opportunity for simultaneous attacks on numerous websites, and without proper security, hundreds of websites could be compromised through a single malicious activity. Cloud computing security covers a number of issues, including multi-tenancy, data loss and leakage, the easy accessibility of the cloud, identity management, unsafe APIs, service level agreement inconsistencies, patch management, and internal threats. It is not easy to enforce security measures that meet the needs of all users, because different users may have different security demands depending on their purpose in using cloud services.
Cloud computing, to put it simply, means Internet computing. The Internet is commonly visualized as a cloud; hence the term "cloud computing" for computation done through the Internet. Cloud computing can be considered a new computing paradigm that allows users to temporarily utilize computing infrastructure over the network, supplied as a service by cloud providers.
The Internet is the interconnection of thousands of networks. The ARPANET began as a US Government experiment in 1969. ARPA, the Department of Defense (DoD) Advanced Research Projects Agency, initially linked researchers with remote computer centers, allowing them to share hardware and software resources such as disk space, databases, and computers. This network of networks later came to be called the Internet, and the cloud concept grew out of it. There are different types of cloud services: SaaS, PaaS, and IaaS. The underlying concept dates back to 1960, when John McCarthy opined that "computation may someday be organized as a public utility"; indeed it shares characteristics with the service bureaus of the 1960s. The term "cloud" had already come into commercial use in the early 1990s to refer to large ATM networks. By the turn of the 21st century, the term "cloud computing" had started to appear, although most of the focus at that time was on Software as a Service (SaaS). While these Internet-based online services provide huge amounts of storage space and customizable computing resources, this platform shift also removes the responsibility for data maintenance from local machines. As a result, users are at the mercy of their cloud service providers for the availability and integrity of their data. Cloud computing is a computing paradigm in which a large pool of systems is connected in private or public networks to provide dynamically scalable infrastructure for applications, data, and file storage.
To simulate extreme dynamics in the environment, a weather system that presents threats to the agent has been developed. The input of a human controller contributes additional dynamics to the scenario. Agents must achieve their own defined purposes by keeping their subsystems in homeostasis. The multiple, dynamic goals include collision avoidance; exploration and sample-gathering tasks; stored-energy conservation and solar recharging; and internal temperature regulation. The CAEDA API has been developed and controls robotic entities. The simulator has been integrated with a MALDAC hosted on a private computing cloud, with a specialized CAEDA agent acting as the web service agent.
Mr. Nilesh R. Patil and Prof. Rajesh proposed a secure architecture for the cloud that addresses several cloud security issues: user authentication, confidentiality, privacy, access control, and data integrity checking. For user authentication the system uses a One-Time Password (OTP); for data integrity checking it uses a modified SHA-2 hash function, which provides better resistance to preimage and collision attacks; and for encryption and decryption it uses the standard Advanced Encryption Standard (AES) algorithm. The proposed cloud architecture is more efficient because its hashing algorithm mitigates preimage and collision attacks. The future work proposed in the paper is to design a hashing algorithm for media files such as audio, video, and images.
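The integrity-checking part of such an architecture can be sketched with a standard hash. Note the hedges: the paper's modified SHA-2 is not specified in the text, so standard SHA-256 from Python's `hashlib` stands in for it, and the `store`/`verify` helpers are hypothetical names, not the authors' API.

```python
import hashlib

def store(data: bytes):
    """Return (data, digest) as it would be kept in the cloud.
    Standard SHA-256 stands in for the paper's modified SHA-2."""
    return data, hashlib.sha256(data).hexdigest()

def verify(data: bytes, digest: str) -> bool:
    """Recompute the hash on retrieval; any tampering with the
    data changes the digest, so the check fails."""
    return hashlib.sha256(data).hexdigest() == digest
```

In a complete design, the digest would itself be protected (e.g., stored separately or signed), since an attacker who can alter both the data and its digest defeats a bare hash check.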
Resource allocation in the cloud environment is an important and challenging research topic. Verma et al. formulate the problem of dynamically placing applications in virtualized heterogeneous systems as a continuous optimization: the placement of VMs at each time frame is optimized to minimize resource consumption under certain performance requirements. Chaisiri et al. study the trade-off between advance reservation and on-demand resource allocation, and propose a VM placement algorithm based on stochastic integer programming that minimizes the total cost of resource provision in infrastructure-as-a-service (IaaS) clouds. Wang et al. present a virtual-appliance-based automatic resource provisioning framework for large virtualized data centers; their framework can dynamically allocate resources to applications by adding or removing VMs on physical servers. Verma et al., Chaisiri et al., and Wang et al. thus all study cloud resource allocation from the VM placement perspective. Bacigalupo et al. quantitatively compare the effectiveness of different techniques for response time prediction. They study cloud services with different priorities, including urgent cloud services that demand cloud resources at short notice and dynamic enterprise systems that must adapt to frequent changes in workload. Based on these cloud services, the layered queuing network and historical performance models are quantitatively compared in terms of prediction accuracy. Song et al. present a resource allocation approach driven by application priorities in multi-application virtualized clusters; this approach uses machine learning to obtain utility functions for applications and defines application priorities in advance. Lin and Qi develop a self-organizing model to manage cloud resources in the absence of centralized management control. Nan et al.
present optimal cloud resource allocation under a priority service scheme to minimize resource cost. Appleby et al. present a prototype infrastructure that can dynamically allocate cloud resources for an e-business computing utility. Xu et al. propose a two-level resource management system with local controllers at the VM level and a global controller at the server level; however, they focus only on resource allocation among VMs within a single cloud server [19,20].
Pig is a high-level dataflow-oriented language and execution environment originally developed at Yahoo!, ostensibly for the same reasons that Google developed the Sawzall language for its MapReduce implementation: to provide a specific language notation for data analysis applications, improve programmer productivity, and reduce development cycles when using the Hadoop MapReduce environment. Working out how to fit many data analysis and processing applications into the MapReduce paradigm can be a challenge, and often requires multiple MapReduce jobs (White, 2009). Pig programs are automatically translated into sequences of MapReduce programs as needed in the execution environment. In addition, Pig supports a much richer data model, with multi-valued, nested data structures built from tuples, bags, and maps. Pig supports a high level of user customization, including user-defined special-purpose functions, and provides language capabilities for loading, storing, filtering, grouping, de-duplication, ordering, sorting, aggregation, and joining operations on the data (Olston, Reed, Srivastava, Kumar, & Tomkins, 2008a). Pig is an imperative dataflow-oriented language (language statements define a dataflow for processing). An example program is shown in Fig. 5.7. Pig runs as a client-side application that translates Pig programs into MapReduce jobs and then runs them on a Hadoop cluster. Figure 5.8 shows how the program listed in Fig. 5.7 is translated into a sequence of MapReduce jobs. Pig's compilation and execution stages include a parser, a logical optimizer, a MapReduce compiler, a MapReduce optimizer, and the Hadoop Job Manager (Gates et al., 2009).
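The kind of translation Pig performs can be illustrated in miniature. The sketch below hand-compiles a Pig-style dataflow (LOAD, FILTER, GROUP BY, COUNT) into the map, shuffle, and reduce phases of a single MapReduce job; the tiny in-memory dataset and the function names are illustrative, and real Hadoop jobs of course run distributed rather than in one process.

```python
from collections import defaultdict

# Illustrative input: (user, HTTP status) records, as if from LOAD.
records = [("alice", 200), ("bob", 404), ("alice", 200), ("carol", 500)]

def map_phase(record):
    user, status = record
    if status == 200:          # FILTER status == 200
        yield (user, 1)        # GROUP BY key is the user

def shuffle(pairs):
    # The framework's shuffle: gather all values sharing a key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    return key, sum(values)    # COUNT per group

pairs = [kv for rec in records for kv in map_phase(rec)]
result = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
# result == {"alice": 2}
```

A Pig script expressing the same pipeline would be a few declarative lines; the compiler's job is to produce phases like these, chaining multiple jobs when the dataflow needs more than one grouping step.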
technology enables a vendor's cloud software to automatically move data from a piece of hardware that goes bad or is pulled offline to a section of the system or hardware that is functioning, so the client gets seamless access to the data. Separate backup systems, together with cloud disaster recovery strategies, provide another layer of dependability and reliability. Finally, cloud computing also promotes a green alternative to paper-intensive office functions: it needs less computing hardware on premises, and all computing-related tasks take place remotely with minimal hardware requirements, helped by technological innovations such as virtualization and multi-tenancy. Another view of the green aspect is that cloud computing can reduce the environmental impact of building, shipping, housing, and ultimately destroying (or recycling) computer equipment, since no one needs to own many such systems on their premises, and offices can be run with fewer computers that consume comparatively less energy. A consolidated set of points summarizing the benefits of cloud computing is as follows: 1. Achieve economies of scale: we can increase the volume of output or productivity
Virtualisation technology appeared several years ago; it comes in many types, all focusing on control and usage schemes that emphasise efficiency. This efficiency takes the form of a single terminal being able to run multiple machines, or a single task running over multiple computers using idle computing power. Adoption within data centres and by service providers is increasing rapidly and encompasses different proprietary virtualisation technologies. Again, the lack of standardisation poses a barrier to an open-standards cloud that is interoperable with other clouds and in which a broad array of computing and information resources is fundamentally implementable. As the availability of resources requested by users is a crucial parameter of service adequacy, one of the major deployments of the cloud application paradigm is the virtual data centre (VDC), utilised by service providers to enable a virtual infrastructure (Fig. 6.6) distributed across various remotely hosted locations worldwide, providing accessibility and backup services and ensuring reliability in case of a single-site failure. In the case of resource saturation or resource dismissal, where a certain location-based resource cannot be accessed, the VDC claims the resource in order to keep it available to potential requests/users. Additionally, services with globally assigned operations require faster response times, achieved by distributing workload requests across multiple VDCs using scheduling and load-balancing methodologies. Therefore, as an optimal approach to resource availability, a k-rank model can be applied to rank requests and resources and create outsourcing 'connectivity' to potential requests.
Services such as Gmail, Google Drive, Google Calendar, Picasa, and Google Groups are free of charge for individual users and available for a fee to organizations. These services run on a cloud and can be invoked from a broad spectrum of devices, including mobile ones such as iPhones, iPads, and BlackBerrys, as well as laptops and tablets. The data for these services is stored in data centers on the cloud. The Gmail service hosts email on Google servers and provides a Web interface to access it, along with tools for migrating from Lotus Notes and Microsoft Exchange. Google Docs is Web-based software for building text documents, spreadsheets, and presentations. It supports features such as tables, bullet points, basic fonts, and text sizes; it allows multiple users to edit and update the same document and to view the history of document changes; and it provides a spell checker. The service allows users to import and export files in several formats, including Microsoft Office, PDF, text, and OpenOffice formats.
Enterprises often don't have the expertise required to build cloud-based solutions. The average medium-to-large company that has been in business for more than a few years typically has a collection of applications and services spanning multiple eras of application architecture, from mainframe to client-server to commercial off-the-shelf and beyond. The majority of its internal skills are specialized around these architectures. Often the system administrators and security experts have spent a lifetime working on physical hardware or on-premises virtualization. Cloud architectures are loosely coupled and stateless, which is not how most legacy applications have been built over the years. Many cloud initiatives require integrating with multiple cloud-based solutions from other vendors, partners, and customers. The methods used to test and deploy cloud-based solutions may be radically different and more agile than what companies are accustomed to in their legacy environments. Companies moving to the cloud should realize that there is more to it than simply deploying or paying for software from a cloud vendor: there are significant changes from architectural, business process, and people perspectives, and often the skills required to do it right do not exist within the enterprise.