first of its kind to the best of our knowledge. This architecture consists of four major entities, namely the Service developer, host, governor, and requester. Service developer and host are clearly detached from each other, not only to facilitate and encourage Service hosting by unskilled mobile users but also to increase the privacy of Service requesters (consumers). A cloud of mobile devices, including smartphones, tablets, and sundry other mobile devices, is created in which devices with heterogeneous platforms, hardware, and manufacturers can coexist and collaborate. We encourage Service development and hosting by providing monetary incentives for programmers and mobile owners to stimulate mobile Service hosting. Based on the Service governor’s responsibilities, we argue that mobile network operators are likely the best candidates to serve as the Service governor, because they are centralized, well-established, and reputed organizations that have served mobile users for a long time and have established a high degree of trust with them. A successful mobile Service hosting architecture can be utilized in various domains such as supply chain management, in which various organizations (e.g. billing and transport) collaborate to perform a business activity. A Service hosted on a driver’s mobile device can notify the recipient of customer orders and delivery-schedule updates. However, the MOMCC architecture is more suitable for computing-intensive tasks, since different hosts share their computational resources; data-intensive tasks are less readily addressed in this architecture.
In the first chapters of this thesis we discovered that cloud computing is a technique that has already been known for years. Owing to the maturity of concepts such as the Internet, Web 2.0, service-oriented architecture, and virtualization, this technique has become a very useful way to provide IT services and decrease costs. Cloud computing provides customers with an IT solution that makes it possible to pay only for the resources that are used. The key characteristics found in the literature show that it provides on-demand self-service, resource pooling, broad network access, rapid elasticity, and a measured service. As can be read in chapter two, cloud computing can be implemented in several ways. For the scope of this thesis we only highlighted the public cloud as a business solution. The reason for this choice was that Capgemini specifically needed information about security in the public cloud. Capgemini is interested in the public cloud because it provides the user with all the benefits the (cloud computing) business model is able to give. Some of these benefits are decreased capital expenses (e.g. infrastructure, hardware), decreased management costs, elastic capacity, and a shorter time to market. A side effect of all these benefits is that this deployment model confronts the customer with the most complex model in cloud computing, which influences security.
The past decades have witnessed the success of centralized computing infrastructures in many application domains. Then the emergence of the Internet brought numerous users to remote applications based on the technologies of distributed computing. Research in distributed computing gave birth to the development of grid computing. Though grid is based on distributed computing, its conceptual basis is somewhat different. Grid computing enabled researchers to carry out computationally intensive tasks using the limited infrastructure available to them, supplemented by high processing power that could be provided by a third party; it was thus one of the first attempts to provide computing resources to users on a payment basis. This technology indeed became popular and is still in use today. An associated problem with grid technology was that it could only be used by a certain group of people and was not open to the public. Cloud computing, in simple terms, is a further extension and variation of grid computing in which a market-oriented aspect is added. Though there are several other important technical differences, this is one of the major differences between grid and cloud. Thus came cloud computing, which is now used as a public utility and is accessible to almost every person through the Internet. Apart from this, there are several other properties that make the cloud popular and unique. In the cloud, resources are metered, and a user pays according to usage. The cloud can also support continuously varying user demands without affecting performance, and it is always available for use without restrictions. Users can access the cloud from any device, thus reaching a wider range of people.
A common option for reducing the operating costs of only sporadically used IT infrastructure, such as in the case of the “warm standby”, is cloud computing. As defined by NIST, cloud computing provides the user with simple, direct access to a pool of configurable, elastic computing resources (e.g. networks, servers, storage, applications, and other services) with a pay-per-use pricing model. More specifically, this means that resources can be quickly (de-)provisioned by the user with minimal provider interaction and are billed on the basis of actual consumption. This pricing model makes cloud computing a well-suited platform for hosting a replication site offering high availability at a reasonable price. Such a warm standby system, with infrastructure resources (virtual machines, images, etc.) located and updated in the cloud, is herein referred to as a “Cloud-Standby-System”. The relevance and potential of this cloud-based option for hosting replication systems becomes even more obvious in light of the current market situation: only fifty percent of small and medium enterprises currently practice BCM with regard to their IT services, while downtime costs for them sum up to $12,500-23,000 per day.
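The pay-per-use argument can be made concrete with a rough break-even sketch. All hourly rates and sync/outage durations below are illustrative assumptions, not figures from any provider; only the $12,500-23,000/day downtime range comes from the text above.

```python
# Rough break-even sketch: dedicated warm-standby site vs. a pay-per-use
# Cloud-Standby-System. All prices and durations are illustrative assumptions.

HOURS_PER_YEAR = 8760

def dedicated_standby_cost(hourly_rate=2.0):
    """Dedicated replication site: billed around the clock, used or not."""
    return hourly_rate * HOURS_PER_YEAR

def cloud_standby_cost(sync_hours_per_year=365, outage_hours_per_year=24,
                       hourly_rate=2.0):
    """Pay-per-use: pay only while synchronizing images or running after failover."""
    return hourly_rate * (sync_hours_per_year + outage_hours_per_year)

def downtime_exposure(days_down, cost_per_day=12500):
    """Lower bound of the $12,500-23,000/day SME downtime cost cited above."""
    return days_down * cost_per_day

if __name__ == "__main__":
    print(f"dedicated standby:  ${dedicated_standby_cost():,.0f}/yr")
    print(f"cloud standby:      ${cloud_standby_cost():,.0f}/yr")
    print(f"one day of downtime: ${downtime_exposure(1):,.0f}")
```

Under these assumed rates, the cloud standby costs a small fraction of the dedicated site, while a single day of unprotected downtime already exceeds a year of cloud standby fees; the exact crossover of course depends on real provider pricing.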
This book comprehensively debates the emergence of mobile cloud computing from cloud computing models. Various technological and architectural advancements in mobile and cloud computing have been reported. It has meticulously explored the design and architecture of computational offloading solutions in the cloud and mobile cloud computing domains to enrich the mobile user experience. Furthermore, to optimize mobile power consumption, existing solutions and policies toward green mobile computing, green cloud computing, green mobile networking, and green mobile cloud computing are briefly discussed. The book also presents numerous cloud and mobile resource allocation and management schemes to efficiently manage existing resources (hardware and software). Recently, integrated networks (e.g., WSN, VANET, MANET) have significantly helped mobile users to enjoy a suite of services. The book discusses existing architectures, opportunities, and challenges in integrating mobile cloud computing with existing network technologies such as sensor and vehicular networks. It also briefly expounds on various security and privacy concerns, such as application security, authentication security, data security, and intrusion detection, in the mobile cloud computing domain. The business aspects of mobile cloud computing models in terms of resource pricing models, cooperation models, and revenue sharing among cloud providers are also presented in the book. To highlight the standing of mobile cloud computing, various well-known, real-world applications supported by mobile cloud computing models are discussed. For example, the demands and issues in deploying resource-intensive applications, including face recognition, route tracking, traffic management, and mobile learning, are discussed. This book concludes with various future research directions in the mobile cloud computing domain to improve the strength of mobile cloud computing and to enrich the mobile user experience.
Handling data resources in the cloud is difficult due to problems such as low bandwidth, mobility, and the limited resource capacity of mobile devices. One easy way to improve the efficiency of data access is a local storage cache. For example, one proposed scheme addresses three issues: maintaining seamless communication between subscribers and the cloud, handling cache consistency, and supporting data privacy. The scheme has two main functional blocks, namely an RFS client on the mobile device and an RFS server in the cloud. The authors use a RESTful web service for the service provider and HTTP as the communication protocol. The scheme also addresses issues such as wireless connectivity and data privacy.
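The client/server split with cache consistency described above can be sketched with HTTP-style ETag validation, where the client revalidates its copy instead of re-downloading over the slow wireless link. This is a minimal in-memory sketch; the class and method names are illustrative assumptions, not the cited scheme's actual API.

```python
# Sketch of ETag-style cache validation between a mobile-side cache and a
# cloud-side file service. Names are illustrative, not from the cited work.

class RFSServer:
    """Stands in for the cloud-side service; each write gets a fresh ETag."""
    def __init__(self):
        self._files = {}      # path -> (etag, bytes)
        self._version = 0

    def put(self, path, data):
        self._version += 1
        self._files[path] = (f"v{self._version}", data)

    def get(self, path, if_none_match=None):
        etag, data = self._files[path]
        if etag == if_none_match:
            return 304, etag, None       # Not Modified: client copy is fresh
        return 200, etag, data


class RFSClient:
    """Mobile-side cache: revalidates with the server instead of re-fetching."""
    def __init__(self, server):
        self.server = server
        self.cache = {}       # path -> (etag, data)

    def read(self, path):
        cached = self.cache.get(path)
        status, etag, data = self.server.get(
            path, if_none_match=cached[0] if cached else None)
        if status == 304:
            return cached[1]             # serve from the local cache
        self.cache[path] = (etag, data)  # refresh the cached copy
        return data
```

A revalidation (304) exchanges only a short tag rather than the file body, which is exactly the saving that matters on a low-bandwidth mobile connection.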
In other cases, the loss of control over where your virtual IT infrastructure resides could open the way to other problematic situations. More precisely, the geographical location of a datacenter generally determines the regulations that apply to the management of digital information. As a result, depending on the specific location of data, some sensitive information can be made accessible to government agencies or even considered outside the law if processed with specific cryptographic techniques. For example, the USA PATRIOT Act provides the U.S. government and other agencies with virtually limitless powers to access information, including that belonging to any company that stores information in U.S. territory. Finally, existing enterprises that have large computing infrastructures or large installed bases of software do not simply want to switch to public clouds; they want to use their existing IT resources and optimize their revenue. All these aspects make the use of a public computing infrastructure not always possible. Yet the general idea supported by the cloud computing vision can still be attractive. More specifically, having an infrastructure able to deliver IT services on demand can still be a winning solution, even when implemented within the private premises of an institution. This idea led to the diffusion of private clouds, which are similar to public clouds, but whose resource-provisioning model is limited to the boundaries of an organization.
There’s growing sentiment among many cloud experts that ultimately hybrid adoption will be most advantageous for many organizations. Warrilow says, “for some time Gartner has advised that hybrid is the most likely scenario for most organizations.” Staten agrees with the notion for two reasons. First, “some applications and data sets simply aren’t a good fit with the cloud,” he says. This might be due to application architecture, degree of business risk (real or perceived), and cost, he says. Second, rather than making a cloud-or-no-cloud decision, “it’s more practical and effective to leverage the cloud for what makes the most sense and other deployment options where they make the most sense,” he says. In terms of strategy, Staten recommends regularly analyzing deployment decisions. “As cloud services mature, their applicability increases,” he says.
Case in point? Six in ten channel firms say that the cloud has generally strengthened their customer relationships, with just 15% claiming it has weakened them and roughly a quarter saying their client bonds have remained the same. This is encouraging news given that many in the channel have publicly feared that the cloud would drive a wedge between them and their customers. There has been rampant apprehension about such ill effects as a resurgence in vendor direct sales and end-user customers choosing a self-service model for their IT solutions, i.e. procuring SaaS applications over the Internet. And while both of these trends are happening to a certain extent, CompTIA data suggest not at such dire expense to most of the channel, especially those that have reached a high level of cloud maturity today and intend to remain committed. That said, not all channel firms that adopt the cloud will engender more goodwill with customers; some may simply have a customer set that is not cloud-friendly, others may not gain sufficient expertise to provide value, and so on.
Hadoop MapReduce and the LexisNexis HPCC platform are both scalable architectures directed toward data-intensive computing solutions. Each of these system platforms has strengths and weaknesses, and their overall effectiveness for any application problem or domain is subjective in nature and can only be determined through careful evaluation of application requirements against the capabilities of the solution. Hadoop is an open-source platform, which increases its flexibility and adaptability to many problem domains, since new capabilities can be readily added by users adopting this technology. However, as with other open-source platforms, reliability and support can become issues when many different users are contributing new code and changes to the system. Hadoop has found favor with many large Web-oriented companies, including Yahoo!, Facebook, and others, where data-intensive computing capabilities are critical to the success of their business. Amazon has implemented new cloud computing services using Hadoop as part of its EC2 offering, called Amazon Elastic MapReduce. A company called Cloudera was recently formed to provide training, support, and consulting services to the Hadoop user community and to provide packaged and tested releases which can be used in the Amazon environment. Although many different application tools have been built on top of the Hadoop platform, such as Pig, HBase, and Hive, these tools tend not to be well integrated, offering different command shells, languages, and operating characteristics that make it difficult to combine capabilities effectively.
retail for 1 lakh, or about $2,500, in the Indian market. It is a fully functional, four-door vehicle that aims to replace motorbikes. Small Brazilian company Obvio is set to release its first hybrid car, the 828, at a price of $14,000, less than half the cost of a Toyota Prius. By radically redesigning the traditional automotive production and distribution processes, Tata is positioned to grow sales of cars across the developing world and perhaps ultimately in low-end segments of the developed world. This is an example of disruptive innovation: competitors will have to create entirely new processes to produce a similar product, and it will not be easy to make a profit at this price by stripping down a conventional car. Tata Motors is targeting not only scooter and motorcycle buyers who pay around $1,250 today but also customers who need cars for commuting in busy traffic. Tata Motors introduced new automobile design and production processes to create a new market for the Nano. The company is also planning an innovative distribution model of shipping semifinished parts to rural entrepreneurs who can assemble and service these cars and customize them to suit customer needs. This innovative and disruptive model of distribution, with its ability to make to order on demand, would help facilitate greater customization in the auto industry.
SDN has two main advantages over traditional networks with regard to the detection of and response to attacks: (1) the (logically) centralized management model of SDN allows administrators to quickly isolate or block attack traffic patterns without the need to access and reconfigure several pieces of heterogeneous hardware (switches, routers, firewalls, and intrusion detection systems); (2) detection of attacks can be made a task distributed among switches (SDN controllers can define rules on switches to generate events when flows considered malicious are detected), rather than depending on expensive intrusion detection systems. SDN can also be used to control how traffic is directed to network monitoring devices (e.g., intrusion detection systems), as has been proposed in the literature. Quick response is particularly important in highly dynamic cloud environments. Traditional intrusion detection systems (IDS) mainly focus on detecting suspicious activities and are limited to simple actions such as disabling a switch port or notifying (emailing) a system administrator. SDN opens the possibility of taking complex actions, such as changing the path of suspicious traffic in order to isolate it from known trusted communication. Research will focus on how to recast existing IDS mechanisms and algorithms in SDN contexts, and on developing new algorithms that take full advantage of multiple points of action. For example, since each switch can be used to detect and act on attacks, one study has shown the improvement of different traffic anomaly detection algorithms (Threshold Random Walk with Credit-Based rate limiting, Maximum Entropy, network traffic anomaly detection based on packet bytes, and rate limiting) using OpenFlow and NOX by placing detectors closer to the edge of the network (home or small business networks instead of the ISP) while maintaining line-rate performance.
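The two advantages above (centralized response, distributed detection) can be illustrated with a toy simulation: each switch counts packets per flow and raises an event past a threshold, and the controller then programs a drop rule into every switch it manages. This models the control pattern only; it is not the OpenFlow or NOX API, and all class names and the threshold are illustrative.

```python
# Toy simulation of the SDN pattern described above: switches detect a
# threshold-crossing flow and the (logically centralized) controller blocks
# that flow network-wide. Not the OpenFlow/NOX API; names are illustrative.

from collections import Counter

class Switch:
    def __init__(self, controller, threshold=100):
        self.counts = Counter()   # per-flow packet counters (detection at the edge)
        self.drop_rules = set()   # flows this switch has been told to drop
        self.controller = controller
        self.threshold = threshold

    def packet_in(self, flow):
        if flow in self.drop_rules:
            return "dropped"
        self.counts[flow] += 1
        if self.counts[flow] > self.threshold:
            # raise an event to the controller instead of relying on an IDS box
            self.controller.report_anomaly(self, flow)
            return "dropped"
        return "forwarded"

class Controller:
    """One decision point programs rules onto many switches."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def report_anomaly(self, origin, flow):
        # centralized response: block the flow everywhere, not just at origin
        for sw in self.switches:
            sw.drop_rules.add(flow)
```

Note how a flow flagged at one switch is immediately dropped at a second switch that never saw it exceed the threshold; that network-wide reaction without touching each device is the response-time advantage the paragraph describes.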
If you look at the service stack from the top down, you can see some of the value the other layers provide. At the very top are business services such as Dun & Bradstreet, which provides analysis of and insight into companies that you might potentially do business with. Other examples of business services are credit reporting and banking. Providing business services such as these requires data stores for storing data. However, a relational database by itself is not sufficient: the data retrieval and storage methods must be integrated into programs that provide user interfaces people can use. Relational databases also need to be maintained by database administrators who archive and back up data. This is where Platform as a Service comes in. Platform as a Service provides all the services that enable systems to run by themselves, including scaling, failover, performance tuning, and data retrieval. For example, the Salesforce Force.com platform provides a data store where your programs can store and retrieve data without you ever needing to worry about database or system administration tasks. It also provides a web site with graphical tools for defining and customizing data objects. IBM Workload Deployer is another Platform as a Service that runs on an Infrastructure as a Service cloud but is aware of the different software running on individual virtual machines; it can perform functions such as elastic scaling of application server clusters.
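The elastic scaling a platform performs on an application server cluster boils down to a control loop over a utilization metric. The sketch below shows only that decision step; the thresholds, bounds, and function name are illustrative assumptions, not taken from Force.com or IBM Workload Deployer.

```python
# Minimal sketch of the scaling decision a PaaS control loop makes for an
# application server cluster. All thresholds and names are illustrative.

def desired_replicas(current, cpu_utilization,
                     scale_up_at=0.75, scale_down_at=0.25,
                     min_replicas=1, max_replicas=10):
    """Return the new cluster size given average CPU utilization in [0, 1].

    Scale out one server at a time when the cluster runs hot, scale in when
    it idles, and otherwise leave the cluster alone (hysteresis band).
    """
    if cpu_utilization > scale_up_at:
        return min(current + 1, max_replicas)
    if cpu_utilization < scale_down_at:
        return max(current - 1, min_replicas)
    return current
```

The gap between the two thresholds prevents the cluster from oscillating when utilization hovers near a single cutoff; real platforms layer cooldown periods and multi-metric policies on top of the same basic decision.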
One article explains how cloud computing characteristics affect enterprise architecture from four aspects: business, data, application, and technology. The main focus of this article is the limited view of current enterprise architecture, which covers just the enterprise itself; when adopting cloud computing, however, the cloud provider must also be considered. Although an important challenge is studied in this paper, many problems remain uninvestigated. Other authors first study the issue of adapting cloud and enterprise architecture and then discuss the opportunities and challenges of this architecture from the perspectives of technology governance, integration, security, and information technology. In this way we can form a better picture of the current situation of the business and the desired situation, and with an information system architecture in place we can recognize which service-oriented architecture applications can be integrated with the cloud under specific
“Clouds are about ecosystems, about large collections of interacting services including partners and third parties, about inter-cloud communication and sharing of information through such semantic frameworks as social graphs.”

Transformation vs utility

This, he adds, is clearly business transformational, whereas “computing services that are delivered as a utility from a remote data centre” are not. The pioneers in VANS/EDI methods – which are now migrating into modern cloud systems in offerings from software firm SAP and its partners, for example – were able to set up basic trading data exchange networks, but the cloud transformation now is integrating, in real time, the procurement, catalogue, invoicing and other systems across possibly overlapping and much wider business communities.
Abstract. Cloud computing technology has become familiar to most Internet users. Subsequently, there has been increased growth in the use of cloud computing, including Infrastructure as a Service (IaaS). To ensure that IaaS can easily meet growing demand, IaaS providers usually increase the capacity of their facilities vertically, expanding local IaaS amenities by increasing the number of servers, storage, and network bandwidth. However, vertical scalability alone is sometimes not enough, and additional strategies are required to ensure that the large number of IaaS service requests can be met. Horizontal scalability strategies are more complex than vertical scalability strategies because they involve the interaction of more than one facility at different service centers. To reduce the complexity of implementing horizontal scalability for IaaS infrastructures, the use of a service-oriented infrastructure is recommended, so that the interaction between two or more different service centers can be carried out simply and easily even though it is likely to involve a wide range of communication technologies and different cloud computing management systems. This is because the service-oriented infrastructure acts as a middleman that translates and processes the interactions and protocols of different cloud computing infrastructures without complex modifications, ensuring that horizontal scalability can be achieved easily and smoothly. This paper presents the potential of using a service-oriented infrastructure framework to enable transparent horizontal scalability of cloud computing infrastructures by adapting three projects in this research: the SLA@SOI consortium, the Open Cloud Computing Interface (OCCI), and OpenStack.
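The "middleman" role of a service-oriented infrastructure can be sketched as a facade that exposes one neutral provisioning call and translates it into each service center's native API. The provider classes and their method names below are invented for illustration; they stand in for the heterogeneous management interfaces (OCCI, OpenStack, etc.) the abstract mentions, without reproducing any of their real APIs.

```python
# Sketch of a service-oriented "middleman": one generic provision() call is
# translated into each provider's native protocol, letting a request scale
# out horizontally across service centers. All names are illustrative.

class ProviderA:
    """Service center with one native API style."""
    def boot_server(self, image):
        return f"A:{image}"

class ProviderB:
    """Service center with a different native API style."""
    def create_instance(self, template):
        return f"B:{template}"

class SOIFacade:
    """Neutral interface in front of heterogeneous providers; callers never
    see the per-provider protocol differences."""
    def __init__(self, providers):
        self.providers = providers

    def provision(self, image, count):
        instances = []
        for i in range(count):
            # round-robin across service centers for horizontal scale-out
            p = self.providers[i % len(self.providers)]
            if isinstance(p, ProviderA):
                instances.append(p.boot_server(image))
            else:
                instances.append(p.create_instance(image))
        return instances
```

Because all protocol translation lives in the facade, adding a third service center means adding one adapter branch rather than modifying every consumer, which is the complexity reduction the abstract claims for the service-oriented approach.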
At present there are few published materials on vCloud Director outside of official VMware documentation, but the virtualization community has a long tradition of dedicated and passionate bloggers, speakers, and contributors producing timely content in easily digestible chunks. Writing a book on a new product like vCloud Director has been something of a moving target. Seeking to capitalize on the emerging cloud computing market, VMware has maintained an aggressive release cadence for the vCloud Director product, which is now in its second major release in three years, and we encourage the reader to use this book in conjunction with these online materials to dive deep where required. Although the core concepts and architecture will remain broadly consistent across future releases, these online resources will prove invaluable in keeping abreast of new functionality, issues, and features. This book points you to the best of them, but the best way to stay informed of breaking news in the virtualization world is to follow the VMware Planet v12n RSS feed (www.vmware.com/vmtn/planet/v12n/). For those of you familiar with social media tools like Twitter, the virtualization community is also active there on a daily basis.
Lean agile development methodologies and the cloud model complement each other very well. Cloud services take pride in meeting user requirements rapidly, delivering applications whenever and to whatever extent they are needed. Agile methods give high credence to user collaboration in requirements discovery. The lean agile system of software development aims to break down project requirements into small and achievable segments. This approach guarantees user feedback on every task of the project. Segments can be planned, developed, and tested individually to maintain high-quality standards without any major bottlenecks. The development stage of every component thus becomes a single “iteration” process. Moreover, lean agile software methods place huge emphasis on developing a collaborative relationship between application developers and end users. The entire development process is transparent to the end user, feedback is sought at all stages of development, and the needed changes are made accordingly then and there. Using lean agile development in conjunction with the cloud paradigm provides a highly interactive and collaborative environment. The moment developers finalize a feature, they can push it as a cloud service; users can review it instantly and provide valuable feedback. Thus, a lengthy feedback cycle can be eliminated, reducing the probability of misstated or misunderstood requirements. This considerably curtails the time and effort of the software development organization while increasing end-user satisfaction. Following the lean agile approach of demand-driven production, end users’ needs are integrated in a more cohesive and efficient manner with software delivery as cloud services. This approach stimulates and sustains a good amount of innovation, requirement discovery, and validation in cloud computing.
In addition to these concerns, there is the issue of data preservation. Absent some form of regulation or mutual agreement within the IT industry, and specifically among those who are major cloud-services providers, there is no requirement to preserve the photos, email, videos, postings, data, and files that individuals and organizations believe are securely stored in data centers around the world. As a result, much of the digital evidence from the daily lives of individuals and the decisions and activities of organizations will vaporize, irrespective of how many cloud data centers fill the world. As one concerned tech writer argued, “We’re really good at making things faster, smaller, and cheaper. And every step along the way makes for great headlines. But we’re not nearly so good at migrating our digital stuff from one generation of tech to the next. And we’re horrible at coming up with business models that assure its longevity and continuity” (Udell 2012). Another person who has been active in the online world for years, hosting numerous sites and archives, worried, “Not to be dramatic or anything, but no more than forty days after I die, and probably much sooner, all the content I am hosting will disappear” (Winer, quoted in ibid.). To date, the only reason most of this material has been preserved is the heroic efforts of individuals who personally port archives when technology and standards change. Referring to several archives dating from the turn of this century, Udell commented in a Wired column, “If I hadn’t migrated them, they’d already be gone. Not because somebody died, it’s just that businesses turned over or lost interest and the bits fell off the web. Getting published, it turns out, is a lousy way to stay published. With all due respect to wired.com, I’ll be amazed if this column survives to 2022 without my intervention” (ibid.). There are some efforts, primarily by governments, to archive and preserve files.
The most notable of these may be at the U.S. Library of Congress, which, among other things, is archiving the massive database of Twitter postings. These are all important activities, but they are isolated and much more data disappears than is preserved. Of course, one can argue, there is a great deal of digital content that is not worth paying to preserve. Society has survived in the past without carrying forward from generation to generation the entire weight of the historical record. Nevertheless, since most of that record is now digital, is it not worthwhile to develop strategies to preserve at least some of it in a systematic fashion?