We summarize mappings of the Polar Grid project's prototype infrastructure to cloud computing requirements in Table 3. As with FloodGrid, we need persistent, fault-tolerant Web service hosting environments, which should be provided by the cloud infrastructure providers rather than by the Polar Grid developers. We likewise need to make standard SAR data sets available. The particular problem to note here is the size of the data: unlike FloodGrid, it is prohibitive to move the SAR data to some remote cluster for image processing, so we must instead keep the computing power near the data. On the other hand, the SAR files are data-file parallel and so are good candidates for cloud runtime tools. We need to extend the runtime engine (Hadoop et al.) to manage the filters shown in Table 3. These filters are compiled binaries that must be run on a compatible operating system (that is, one with a specific version of the Linux kernel), so virtualization can greatly expand the amount of resources available to us compared to conventional systems. Virtualization is also useful for ensuring that the filter images have exactly the right dependencies (particularly the correct version of the MCR).
Bednarz et al. proposed a Cloud-Based Image Analysis and Processing Toolbox, carried out by CSIRO, that runs on the Australian National eResearch Collaboration Tools and Resources (NeCTAR) cloud infrastructure and is designed to give Australian researchers access to biomedical image processing and analysis services via remotely accessible user interfaces. The toolbox is based on software packages and libraries developed over the last 10-15 years by CSIRO scientists and software engineers: (a) HCA-Vision, developed for automating the process of quantifying cell features in microscopy images; (b) MILXView, a 3D medical imaging analysis and visualization platform increasingly popular with researchers and medical specialists working with MRI, PET, and other types of medical images; and (c) X-TRACT, developed for advanced X-ray image analysis and computed tomography. By providing user-friendly access to cloud computing resources and new workflow-based interfaces, their solution aims to enable researchers to carry out challenging image analysis and reconstruction tasks that are currently impossible or impractical due to the limitations of existing interfaces and local computer hardware. The authors also present several case studies.
Pipe-and-Filter Style. The pipe-and-filter style is a variation of the previous style for expressing the activity of a software system as a sequence of data transformations. Each component of the processing chain is called a filter, and the connection between one filter and the next is represented by a data stream. With respect to the batch sequential style, data is processed incrementally, and each filter processes the data as soon as it is available on the input stream. As soon as one filter produces a consumable amount of data, the next filter can start its processing. Filters are generally stateless, know the identity of neither the previous nor the next filter, and are connected by in-memory data structures such as first-in/first-out (FIFO) buffers. This particular sequencing is called pipelining and introduces concurrency in the execution of the filters. A classic example of this architecture is the microprocessor pipeline, whereby multiple instructions are executed at the same time, each completing a different phase. We can identify the phases of the instructions as the filters, whereas the data streams are represented by the registers that are shared within the processor. Another example is the Unix shell pipe (e.g., cat <filename> | grep <pattern> | wc -l), where the filters are the single shell programs composed together and the connections are their input and output streams chained together. Applications of this architecture can also be found in compiler design (e.g., the lex/yacc model is based on a pipe of the following phases: scanning | parsing | semantic analysis | code generation), image and signal processing, and voice and video streaming.
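The Unix pipe example above can be sketched with Python generators, where each generator plays the role of a filter that lazily consumes its input stream; the function names mirror the shell commands and are purely illustrative:

```python
def cat(lines):
    """Source filter: emit items one at a time (like `cat`)."""
    for line in lines:
        yield line

def grep(pattern, stream):
    """Transform filter: pass through only matching lines (like `grep`)."""
    for line in stream:
        if pattern in line:
            yield line

def wc_l(stream):
    """Sink filter: count the lines it receives (like `wc -l`)."""
    return sum(1 for _ in stream)

# Equivalent of: cat <file> | grep "error" | wc -l
lines = ["ok", "error: disk", "ok", "error: net"]
print(wc_l(grep("error", cat(lines))))  # → 2
```

Because each stage yields items one at a time, downstream filters start working before upstream filters finish, which is exactly the incremental, pipelined behavior described above.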
Above all, this book emphasizes problem solving through cloud computing. At times you might face a simple problem and need to know only a simple trick. Other times you might be on the wrong track and need some background information to get oriented. Still other times, you might face a bigger problem and need direction and a plan. You will find all of these in this book. We provide a short description of the overall structure of a cloud here to give the reader an intuitive feel for what a cloud is. Most readers will have some experience with virtualization. Using virtualization tools, you can create a virtual machine with the operating system install software, make your own customizations to the virtual machine, use it to do some work, save a snapshot to a CD, and then shut down the virtual machine. An Infrastructure as a Service (IaaS) cloud takes this to another level and offers additional convenience and capability.
Sometimes it is even difficult for users to provide runtime estimates. These situations usually occur due to the nature of the service. Considering two examples of services, one which processes satellite images and another which solves symbolic mathematical problems, we can draw the following conclusions. In the first case it is quite easy to determine runtime estimates from historical execution times, as they depend on the image size and on the required operation. The second case is more complicated, as mathematical problems are usually solved by services exposing a Computer Algebra System (CAS). CASs are specific applications which focus on one or more mathematical fields and which offer several methods for solving the same problem. The choice of method depends on internal criteria which are unknown to the user. A simple example is the factorization of large integers (more than 60 digits), an operation with strong implications in the field of cryptography. In this case, factoring n does not depend on similar values such as n-1 or n+1; furthermore, the time to factor n is not linked to the times required to factor n-1 or n+1. It is therefore difficult for users to estimate runtimes in these situations. Refining as much as possible the notion of similarity between two tasks could be an answer to this problem, but in some cases, such as the one previously presented, this could require searching for identical past submissions. Code profiling works well on CPU-intensive tasks but fails to cope with data-intensive applications, where it is hard to predict execution time before all the input data has been received. Statistical estimation of runtimes faces similar problems as code profiling.
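For the satellite-image case, history-based estimation can be sketched in a few lines, assuming runtimes are keyed by (image size, operation); the key, names, and numbers are invented for illustration. The point of the factorization case is precisely that no such key exists:

```python
from collections import defaultdict

# Toy history-based runtime estimator for the satellite-image service:
# runtimes are assumed to depend only on (image size, operation), so the
# estimate is the mean of past runs sharing that key. All values invented.
history = defaultdict(list)

def record(size_mb, operation, seconds):
    history[(size_mb, operation)].append(seconds)

def estimate(size_mb, operation):
    past = history[(size_mb, operation)]
    # None signals "no similar past submission" -- the situation the CAS
    # (factorization) service is stuck in for essentially every request.
    return sum(past) / len(past) if past else None

record(100, "sharpen", 12.0)
record(100, "sharpen", 14.0)
print(estimate(100, "sharpen"))   # → 13.0
print(estimate(500, "sharpen"))   # → None
```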
A cloud OS should provide the APIs that enable data and services interoperability across distributed cloud environments. Mature OSs provide a rich set of services to applications so that each application does not have to reinvent important functions such as VM monitoring, scheduling, security, power management, and memory management. In addition, if APIs are built on open standards, they will help organizations avoid vendor lock-in and thereby create a more flexible environment. For example, linkages will be required to bridge traditional DCs and public or private cloud environments. The flexibility of movement of data or information across these systems requires the OS to provide a secure and consistent foundation to reap the real advantages offered by cloud computing environments. The OS also needs to make sure the right resources are allocated to the requesting applications. This requirement is even more important in hybrid cloud environments. Therefore, any well-designed cloud environment must have well-defined APIs that allow an application or a service to be plugged into the cloud easily. These interfaces need to be based on open standards to protect customers from being locked into one vendor’s cloud environment.
This book comprehensively discusses the emergence of mobile cloud computing from cloud computing models. Various technological and architectural advancements in mobile and cloud computing are reported. It meticulously explores the design and architecture of computational offloading solutions in the cloud and mobile cloud computing domains to enrich the mobile user experience. Furthermore, to optimize mobile power consumption, existing solutions and policies toward green mobile computing, green cloud computing, green mobile networking, and green mobile cloud computing are briefly discussed. The book also presents numerous cloud and mobile resource allocation and management schemes to efficiently manage existing resources (hardware and software). Recently, integrated networks (e.g., WSN, VANET, MANET) have significantly helped mobile users enjoy a suite of services. The book discusses existing architectures, opportunities, and challenges in integrating mobile cloud computing with existing network technologies such as sensor and vehicular networks. It also briefly expounds on various security and privacy concerns, such as application security, authentication security, data security, and intrusion detection, in the mobile cloud computing domain. The business aspects of mobile cloud computing models in terms of resource pricing models, cooperation models, and revenue sharing among cloud providers are also presented. To highlight the standing of mobile cloud computing, various well-known, real-world applications supported by mobile cloud computing models are discussed; for example, the demands and issues in deploying resource-intensive applications, including face recognition, route tracking, traffic management, and mobile learning. The book concludes with various future research directions in the mobile cloud computing domain to improve the strength of mobile cloud computing and to enrich the mobile user experience.
Enterprises that move their IT to the cloud are likely to encounter challenges such as security, interoperability, and limits on their ability to tailor their ERP to their business processes. The cloud can be a revolutionary technology, especially for small start-ups, but the benefits wane for larger enterprises with more complex IT needs [10]. The cloud model can be truly disruptive if it can reduce the IT operational expenses of enterprises. Traditional utility services provide the same resource to all consumers. Perhaps the biggest difference between the cloud computing service and the traditional utility service models lies in the degree to which cloud services are uniquely and dynamically configured for the needs of each application and class of users [12]. Cloud computing services are built from a common set of building blocks, equivalent to electricity providers' turbines, transformers, and distribution cables. Cloud computing does, however, differ from traditional utilities in several critical respects. Cloud providers compete aggressively with differentiated service offerings, service levels, and technologies. Because traditional ERP is installed on your servers and you actually own the software, you can do with it as you please. You may decide to customize it, integrate it with other software, etc. Although any ERP software will allow you to configure and set up the software the way you would like, “Software as a Service” or “SaaS” is generally less flexible than traditional ERP in that you can't completely customize or rewrite the software. Conversely, since SaaS can't be customized, it reduces some of the technical difficulties associated with changing the software. Cloud services can be completely customized to the needs of the largest commercial users. Consequently, we have often referred to cloud computing as an “enhanced utility” [12].
Table 9.2 [5] shows the E-skills study for information and communications technology (ICT) practitioners conducted by the Danish Technology Institute [5], which describes the
Several different surveys on cloud computing in the logistics sector have been conducted in the past few months and published as studies. One of them was an online survey conducted by the software provider INFORM GmbH, which showed that 68.3 % of the surveyed companies are ready right now to use cloud computing for logistics tasks, yet only 12.7 % have actually done it. The reasons for this are a lack of familiarity with the topic (29.5 %) and the security concerns mentioned by almost half of the surveyed companies. The possibility of having to rely on an external service provider was a barrier to using cloud technology for 13 % of the surveyed companies. The lack of industry-specific solutions was an obstacle for another 5 %. There seems to be a wide range of reasons. Flexible access (38 %), reduction in operating costs (25 %), faster implementation times for business processes (18 %), platform independence (12 %), and access to IT resources that would not be possible without cloud computing (7 %) were identified as the benefits. According to the respondents, cloud computing solutions can be used for communication between vendors and customers, controlling suppliers, and managing supply chain events.
As an emerging state-of-the-art technology, cloud computing has been applied to an extensive range of real-life situations, and health care is one such important application field. We developed a ubiquitous health care system, named HCloud, after a comprehensive evaluation of the requirements of health care applications. It is built on a cloud computing platform characterized by loosely coupled algorithm modules and powerful parallel computing capabilities, which compute health indicators for the purpose of preventive health care. First, raw physiological signals are collected from body sensors by wired or wireless connections and then transmitted through a gateway to the cloud platform, where storage and analysis of health status are performed through data-mining technologies. Finally, results and suggestions can be fed back to users instantly, implementing personalized services delivered via a heterogeneous network. The proposed system can support huge volumes of physiological data storage; process heterogeneous data for various health care applications, such as automated electrocardiogram (ECG) analysis; and provide an early-warning mechanism for chronic diseases. The architecture of the HCloud platform for physiological data storage, computing, data mining, and feature selection is described. Also, an online analysis scheme combined with a Map-Reduce parallel framework is designed to improve the platform's capabilities. Performance evaluation based on testing and experiments under various conditions has demonstrated the effectiveness and usability of the system.
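The Map-Reduce analysis stage can be illustrated with a toy, library-free pass over (patient, heart-rate) samples; the patient IDs, the 120 bpm threshold, and the flagging rule are invented assumptions for illustration, not HCloud's actual ECG algorithm:

```python
from itertools import groupby
from operator import itemgetter

# Toy map/shuffle/reduce over ECG samples: flag beats above a threshold
# in the map phase, count flags per patient in the reduce phase.
def mapper(record):
    patient, bpm = record
    return (patient, 1 if bpm > 120 else 0)     # map: flag abnormal beats

def reducer(patient, flags):
    return (patient, sum(flags))                # reduce: abnormal-beat count

def run_job(samples):
    # Sorting by key stands in for the shuffle phase that a real
    # Map-Reduce runtime performs across the cluster.
    mapped = sorted(map(mapper, samples), key=itemgetter(0))
    return [reducer(k, [v for _, v in g])
            for k, g in groupby(mapped, key=itemgetter(0))]

samples = [("p1", 72), ("p1", 180), ("p2", 65), ("p2", 70)]
print(run_job(samples))  # → [('p1', 1), ('p2', 0)]
```

In the real platform, the mapper would run in parallel over partitions of the sensor-data stream, which is what gives the analysis its scalability.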
Some characterize the movement to the cloud as ascending but still a long way off. Vendors and others portray an organization as behind the times if it is not fully embracing the cloud. That said, given that most large companies don't like to change something that works (many around the world, for example, still rely on mainframes), it's difficult to see companies instantly ripping out something that is currently part of their IT structure. This suggests that business and IT will be providing several on-premises and off-premises IT choices, with the off-premises choices incrementally and gradually improving in flexibility, capability, and functionality. But no matter what this pattern of clouds and hybridization represents within IT, organizations right now face the significant task of managing and complying with security requirements. To cope with increasing demand and still protect the company's data, IT must have end-to-end knowledge and administration over clients, plans, servers, and devices. This will allow IT to ensure that the business is properly secured while still being nimble enough to easily provide solutions to changing business conditions. Typically, this involved acquiring and securing on-premises devices, servers, and plans. Now the same type of security capabilities must be applied to IT technologies that are outside the trusted environment and are not directly managed by the IT department.
Closer to our goal, existing work in the area of cloud service optimization has mainly focused on the modeling and assessment of quantitative (precise) characteristics to enable automatic service optimization in the infrastructure layer. For instance, CloudCmp has proposed a measurement methodology for quantifying and comparing the performance of cloud services in the IaaS layer. In that respect, the authors first identified common services offered by different providers that can be subject to comparison, e.g., elastic compute clusters or persistent storage. Then, for each service, they defined a set of low-level performance metrics such as benchmark finishing time, cost, and scaling latency. Similarly, Han et al.  have proposed a service recommender framework using network QoS and Virtual Machine (VM) platform factors to assist users' decisions when it comes to the selection of a cloud provider. In their work, they do not consider user preferences, and they limit their evaluation criteria to IaaS-specific factors. In an effort to provide automatic cloud service adaptation across different cloud platforms, Pawluk et al.  have presented the STRATOS cloud brokerage framework, which addresses the problem of dynamically selecting resources from multiple cloud providers at runtime by calculating the induced costs and lock-in effect using a quantitative model. These approaches focus on service optimization in the IaaS layer and do not address the problem of service evaluation across multiple quantitative and qualitative dimensions that takes uncertainty or vagueness into account.
This makes the balance of emails hitting the inboxes of IT directors all the stranger; a vast number of people, with salespeople in the vanguard, are attaching the word cloud to their sales pitches as if it's their road to personal salvation. In this new world the cloud is the outsourcing industry's wholehearted attack on the massive budgets and imposing empires of corporate IT. That's why less tech-savvy board colleagues love it so much. It isn't easy for an IT director to reject all these overtures. Among the ambitious PowerPoint presentations there'll be a quiet voice from a clever guy who promises to chop your re-investment costs by 75% without moving a corporate dataset outside your building. That last guy is
In addition to these concerns, there is the issue of data preservation. Absent some form of regulation or mutual agreement within the IT industry, and specifically among those who are major cloud-services providers, there is no requirement to preserve the photos, email, videos, postings, data, and files that individuals and organizations believe are securely stored in data centers around the world. As a result, much of the digital evidence from the daily lives of individuals and the decisions and activities of organizations will vaporize, irrespective of how many cloud data centers fill the world. As one concerned tech writer argued, “We’re really good at making things faster, smaller, and cheaper. And every step along the way makes for great headlines. But we’re not nearly so good at migrating our digital stuff from one generation of tech to the next. And we’re horrible at coming up with business models that assure its longevity and continuity” (Udell 2012). Another person who has been active in the online world for years, hosting numerous sites and archives, worried, “Not to be dramatic or anything, but no more than forty days after I die, and probably much sooner, all the content I am hosting will disappear” (Winer, quoted in ibid.). To date, the only reason most of this material has been preserved is due to the heroic efforts of individuals who personally port archives when technology and standards change. Referring to several archives dating from the turn of this century, Udell commented in a Wired column, “If I hadn’t migrated them, they’d already be gone. Not because somebody died, it’s just that businesses turned over or lost interest and the bits fell off the web. Getting published, it turns out, is a lousy way to stay published. With all due respect to wired.com, I’ll be amazed if this column survives to 2022 without my intervention” (ibid.). There are some efforts, primarily by governments, to archive and preserve files.
The most notable of these may be at the U.S. Library of Congress, which, among other things, is archiving the massive database of Twitter postings. These are all important activities, but they are isolated and much more data disappears than is preserved. Of course, one can argue, there is a great deal of digital content that is not worth paying to preserve. Society has survived in the past without carrying forward from generation to generation the entire weight of the historical record. Nevertheless, since most of that record is now digital, is it not worthwhile to develop strategies to preserve at least some of it in a systematic fashion?
When working at scale, as you are likely to do with a private cloud implementation, strongly consider standardizing your server hardware models and purchasing groups of servers together. Not only does this approach guarantee you'll have compatible CPU generations and identical hardware, but it also makes your deployment process simpler. You can use tools like Autodeploy and host profiles to deploy and redeploy your servers. Likewise, using DHCP rather than static IP addressing schemes for vSphere servers becomes more appealing. vSphere 5.1 with Autodeploy also allows you to deploy stateless vSphere hosts, where each node is booted from the network using a Trivial File Transfer Protocol (TFTP) server. The host downloads the vSphere hypervisor at boot time and runs it in RAM; then it downloads its configuration from the Autodeploy server.
The massive amount of digital data on the web has brought a number of challenges, such as prohibitively high costs of storing and delivering the data. Image compression reduces the amount of memory it takes to store an image in order to reduce these costs. Image processing is generally a very compute-intensive task: given the demands of image representation and quality, systems and techniques for image processing need special capabilities to produce reliable results. The rapid growth of digital imaging applications, including desktop publishing, multimedia, teleconferencing, computer graphics and visualization, and high-definition television (HDTV), has increased the need for effective image compression techniques. Many image processing algorithms require dozens of floating-point computations per pixel, which can result in slow runtimes even on the fastest CPUs. Because of the need for high-performance computing in today's computing era, this paper presents a study of the Cordic based Loeffler DCT for image compression using CUDA. Image processing takes advantage of CUDA because of the parallelism that pixels exhibit in an image, which the CUDA architecture can exploit. In that way the CUDA architecture is found to be relevant for image processing , ,  and . This paper presents a performance-based evaluation of the DCT image compression technique on both the CPU and the GPU using CUDA. The focus of the paper is on the Cordic based Loeffler DCT. We begin with a brief review of the literature (background), followed by a description of the research methodology employed, including an overview of image compression, GPUs, the CUDA architecture, the Cordic based Loeffler DCT, and related theory. This is then followed by the results of
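For reference, a minimal, unoptimized 2-D DCT-II on an 8x8 block, the transform underlying DCT-based compression; the paper's Cordic based Loeffler DCT is a fast factorization of this same transform. This naive Python sketch is for illustration only and ignores the CUDA parallelization, where each independent 8x8 block would map to its own thread block:

```python
import math

N = 8  # JPEG-style 8x8 block size

def dct_1d(x):
    """Naive 8-point DCT-II with orthonormal scaling."""
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def dct_2d(block):
    """Separable 2-D DCT: transform the rows, then the columns."""
    rows = [dct_1d(r) for r in block]
    cols = [dct_1d([rows[i][j] for i in range(N)]) for j in range(N)]
    return [[cols[j][i] for j in range(N)] for i in range(N)]

block = [[128] * N for _ in range(N)]   # a flat (constant) image block
coeffs = dct_2d(block)
# A flat block compresses perfectly: all energy lands in the DC coefficient,
# and the 63 AC coefficients are (numerically) zero.
print(round(coeffs[0][0]))  # → 1024
```

Compression comes from quantizing these coefficients: most AC terms of natural image blocks are small and can be coarsely quantized or dropped, and each pixel's contribution is independent, which is the parallelism CUDA exploits.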
The certification authority performs authentication of mobile devices in this scheme.  proposes a framework that combines TrustCube  and implicit authentication ,  to authenticate mobile users. Implicit (behavioral) authentication uses habits to authenticate users, mapping mobile user behaviors to a score. TrustCube is a policy-based cloud authentication framework that supports the integration of various authentication methods. The authors in  also proposed asymmetric encryption for providing better security. They utilized user location to provide rigid authentication, but asymmetric encryption has high power and bandwidth consumption. In , the authors utilized the secret-sharing concept to propose a rigid data security framework; their framework is robust against related security attacks. Finally, in  Momeni proposed a lightweight and efficient authentication protocol for mobile cloud environments. His scheme is strong enough against related attacks and is consistent with real communication scenarios.
SDN has two main advantages over traditional networks with regard to the detection of and response to attacks: (1) the (logically) centralized management model of SDN allows administrators to quickly isolate or block attack traffic patterns without the need to access and reconfigure several pieces of heterogeneous hardware (switches, routers, firewalls, and intrusion detection systems); (2) detection of attacks can be made a task distributed among switches (SDN controllers can define rules on switches to generate events when flows considered malicious are detected), rather than depending on expensive intrusion detection systems. SDN can also be used to control how traffic is directed to network monitoring devices (e.g., intrusion detection systems), as proposed in . Quick response is particularly important in highly dynamic cloud environments. Traditional intrusion detection systems (IDS) mainly focus on detecting suspicious activities and are limited to simple actions such as disabling a switch port or notifying (sending email to) a system administrator. SDN opens the possibility of taking complex actions, such as changing the path of suspicious traffic in order to isolate it from known trusted communication. Research will focus on how to recast existing IDS mechanisms and algorithms in SDN contexts, and on the development of new algorithms that take full advantage of multiple points of action. For example, as each switch can be used to detect and act on attacks,  has shown the improvement of different traffic anomaly detection algorithms (Threshold Random Walk with Credit-Based rate limiting, Maximum Entropy, network traffic anomaly detection based on packet bytes, and rate limiting) using OpenFlow and NOX by placing detectors closer to the edge of the network (home or small business networks instead of the ISP) while maintaining line-rate performance.
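As a toy illustration of edge-based detection in the spirit of the algorithms listed above, the sketch below flags sources that contact too many distinct destinations, roughly the intuition behind scan detectors such as Threshold Random Walk. The threshold and addresses are invented, and in a real deployment the controller would react by pushing a drop rule to the switch:

```python
from collections import defaultdict

# Switch-local scan-detection sketch: a source contacting more than
# THRESHOLD distinct destinations is flagged as a likely scanner.
# The threshold and IP addresses below are illustrative only.
THRESHOLD = 3

def detect_scanners(flows):
    """flows: iterable of (src_ip, dst_ip) pairs seen at an edge switch."""
    dests = defaultdict(set)
    for src, dst in flows:
        dests[src].add(dst)
    return {src for src, seen in dests.items() if len(seen) > THRESHOLD}

flows = [("10.0.0.9", f"10.0.1.{i}") for i in range(5)]  # scanner: 5 targets
flows += [("10.0.0.2", "10.0.1.1")]                      # normal host
print(detect_scanners(flows))  # → {'10.0.0.9'}
```

Running this logic at every edge switch, with the controller aggregating the flagged sources and installing blocking rules, is the multiple-points-of-action design the paragraph describes.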
There’s growing sentiment among many cloud experts that ultimately hybrid adoption will be most advantageous for many organizations. Warrilow says “for some time Gartner has advised that hybrid is the most likely scenario for most organizations.” Staten agrees with the notion for two reasons. First, “some applications and data sets simply aren’t a good fit with the cloud,” he says. This might be due to application architecture, degree of business risk (real or perceived), and cost, he says. Second, rather than making a cloud-or-no-cloud decision, “it’s more practical and effective to leverage the cloud for what makes the most sense and other deployment options where they make the most sense,” he says. In terms of strategy, Staten recommends regularly analyzing deployment decisions. “As cloud services mature, their applicability increases,” he says.