with specialized connections to spread data-processing chores across them. By contrast, the newest and most powerful desktop PCs process only about 3 billion computations a second. Let's say you're an executive at a large corporation. Your particular responsibilities include making sure that all of your employees have the right hardware and software they need to do their jobs. Buying computers for everyone isn't enough; you also have to purchase software or software licenses to give employees the tools they require. Whenever you have a new hire, you have to buy more software or make sure your current software license allows another user. It's so stressful that you find it difficult to go to sleep on your huge pile of money every night. Instead of installing a suite of software on each computer, you'd only have to load one application. That application would allow workers to log in to a Web-based service that hosts all the programs the user needs for his or her job. Remote machines owned by another company would run everything from e-mail to word processing to complex data analysis programs. It's called cloud computing, and it could change the entire computer industry.
Cloud computing services are useful for students as well as teachers. These virtual machines (VMs) are transforming the whole world of education for the better. Cloud computing supplies the education infrastructure, application, platform, and Software as a Service (SaaS); together these help build Education as a Service (EaaS). EaaS is a service designed specifically for online education, whether synchronous or asynchronous. Cloud computing services manage all the infrastructure, study services, study material, and inventory. The services can be tailored to the machine they will run on, such as a laptop, mobile phone, palmtop, personal computer, or a server without Internet connectivity. A service can also run from school or college servers, a data centre, or third-party servers accessed via the Internet. EaaS provides up-to-date tools for short operations such as editing, inserting, and deleting. A private cloud can be the best way to establish EaaS at very low cost. Because EaaS is designed especially for education, it aims to be cost-effective, secure, reliable, and flexible. The institution can rely entirely on the service; everything is designed around the institution's circumstances. EaaS stores lesson plans for various subjects (as data storage) in private clouds, allowing teachers and students to access the files anywhere, at any time.
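The lesson-plan storage and the short operations (insert, edit, delete) described above can be sketched with a minimal in-memory store. The `LessonPlanStore` class and its method names are hypothetical illustrations; a real private-cloud EaaS deployment would back the same interface with a shared database or object store.

```python
# A minimal, assumed sketch of an EaaS lesson-plan store. In-memory only;
# a production service would persist to a private-cloud data store.

class LessonPlanStore:
    def __init__(self):
        self._plans = {}  # (subject, title) -> lesson text

    def insert(self, subject, title, text):
        self._plans[(subject, title)] = text

    def edit(self, subject, title, text):
        if (subject, title) not in self._plans:
            raise KeyError("no such lesson plan")
        self._plans[(subject, title)] = text

    def delete(self, subject, title):
        self._plans.pop((subject, title), None)

    def fetch(self, subject, title):
        # Teachers and students read the same stored copy from any
        # device with access to the private cloud.
        return self._plans.get((subject, title))

store = LessonPlanStore()
store.insert("Math", "Fractions", "Intro to fractions ...")
store.edit("Math", "Fractions", "Intro to fractions, revised")
```

Because all clients share one authoritative copy, an edit made from one device is immediately visible everywhere, which is the point of keeping the material in the cloud rather than on individual machines.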
In 2010, Federal CIO Vivek Kundra began an ambitious plan to dramatically reduce IT operations distributed among more than 1,100 data centers. As part of the initial inventory phase, it was discovered that there were actually over 2,000 data centers in existence. Several other cloud projects were initiated at government branches, such as the FCC utilizing a Terremark (later acquired by Verizon) Infrastructure as a Service (IaaS) offering that would give it on-demand access to computing resources. Other projects from government organizations and large technology companies like HP and Intel have demonstrated the benefits of using innovative data center design to consolidate data centers.
Virtualization
between big data and cloud computing, big data storage systems, and Hadoop technology are discussed. Furthermore, research challenges are discussed, with focus on scalability, availability, data integrity, data transformation, data quality, data heterogeneity, privacy, legal and regulatory issues, and governance. Several open research issues that require substantial research efforts are likewise summarized. Cloud computing is an extremely successful paradigm of service-oriented computing, and has revolutionized the way computing infrastructure is abstracted and used. The three most popular cloud paradigms are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). The concept, however, can also be extended to Database as a Service or Storage as a Service. Elasticity, pay-per-use, low upfront investment, low time to market, and transfer of risks are some of the major enabling features that make cloud computing a ubiquitous paradigm for deploying novel applications that were not economically feasible in a traditional enterprise infrastructure setting. This has led to a proliferation of applications that leverage various cloud platforms, resulting in a tremendous increase in the scale of the data generated as well as consumed by such applications. Scalable database management systems (DBMSs)—both for update-intensive application workloads and for decision support systems—are thus a critical part of the cloud infrastructure. Scalable and distributed data management has been the vision of the database research community for more than three decades. Much research has focused on designing scalable systems for both update-intensive workloads and ad hoc analysis workloads. Initial designs include distributed databases for update-intensive workloads, and parallel database systems [19] for analytical workloads. Parallel databases grew beyond
A generic Fog computing architecture is shown in Figure 2. It presents a hierarchical structure. The bottommost layer encompasses wireless, smart, mobile, or fixed end-user objects such as sensors, robots, smart phones, and cameras. Components from this layer use the layer above to connect with other elements (in the same layer) as well as with IoT services implemented in both the network and Cloud layers. The network layer covers several sub-layers (the network's edge, aggregation, and core). It involves network components such as gateways, switches, routers, PoPs, and base stations. This layer is also used for hosting IoT applications that require low latency, as well as for performing data aggregation, filtering, and pre-processing before sending data to the Cloud.
computing and services to the edge of the network. Like the Cloud, Fog provides data, compute, storage, and application services to end users. The term Fog Computing was introduced by Cisco and implies extending cloud computing to the edge of a network. Broadly called Edge Computing, fog computing supports the operation of compute, storage, and networking services between end devices and conventional cloud data centers. Fog computing is a promising computing paradigm that extends cloud computing to the edge of the network. Although similar to cloud computing, fog computing has distinct characteristics and faces new security, privacy, and trust issues, control-information overhead, and network control policies beyond those inherited from cloud computing. One of these hurdles is data trimming: redundant communications burden not only the core network but also the data center in the cloud. For this purpose, data can be preprocessed and trimmed before being sent to the cloud. This can be done through a Smart Gateway, accompanied by a Smart Network or Fog Computing. We review these challenges and prospective plans briefly in this paper, providing a state-of-the-art survey of Fog computing, its challenges, and its security issues.
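The data-trimming idea above can be made concrete with a short sketch: a gateway that drops redundant sensor samples and forwards only meaningful changes upstream. The threshold-based policy and the function names below are illustrative assumptions, not taken from any specific Smart Gateway product.

```python
# A minimal sketch of data trimming at a fog/smart gateway, assuming a
# hypothetical stream of numeric sensor readings. The 0.5 threshold is
# an illustrative choice, not from the text.

def trim_readings(readings, threshold=0.5):
    """Keep only readings that differ from the previously forwarded
    value by more than `threshold`, discarding redundant samples."""
    forwarded = []
    last = None
    for value in readings:
        if last is None or abs(value - last) > threshold:
            forwarded.append(value)
            last = value
    return forwarded

def gateway_batch(readings):
    """Pre-process a batch at the gateway: trim redundant samples and
    summarize the batch before sending anything to the cloud."""
    kept = trim_readings(readings)
    summary = {
        "count_raw": len(readings),
        "count_sent": len(kept),
        "mean_sent": sum(kept) / len(kept) if kept else None,
    }
    return kept, summary

readings = [20.0, 20.1, 20.1, 20.9, 21.0, 25.0, 25.1]
kept, summary = gateway_batch(readings)
# Only values that changed meaningfully are forwarded upstream.
```

In this sketch, seven raw samples shrink to three forwarded values, which is exactly the traffic reduction that motivates preprocessing at the edge rather than in the cloud.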
Mobile devices refer to miniaturized personal computers (PCs) in the form of pocket PCs, tablet PCs, and smart phones. They provide optional and portable ways for users to experience the computing world. Mobile devices are also becoming the most frequently used terminals for accessing information through the Internet and social networks. A mobile application (mobile app) [4,6] is a software application designed to run on mobile devices. Mobile app stores such as the Apple App Store (http://store.apple.com/us), Google Play (https://play.google.com/store?hl=en), Windows Phone Store (http://www.windowsphone.com/en-us/store), and BlackBerry App World (http://appworld.blackberry.com/webstore/?) are usually operated by the owner of the mobile operating system. The original mobile apps served general purposes, including e-mail, calendars, contacts, stock market information, and weather information. However, the number and variety of apps are quickly expanding into other categories, such as mobile games, factory automation, global positioning system (GPS) and location-based services, banking, ticket purchases, and multimedia applications. Mobile multimedia applications are concerned with intelligent multimedia techniques to facilitate effort-free multimedia experiences on mobile devices, including media acquisition, editing, sharing, browsing, management, search, advertising, and related user interfaces. However, mobile multimedia services still need to meet bandwidth requirements and stringent timing constraints.
In other cases, the loss of control over where your virtual IT infrastructure resides could open the way to other problematic situations. More precisely, the geographical location of a datacenter generally determines the regulations that apply to the management of digital information. As a result, depending on the specific location of data, some sensitive information can be made accessible to government agencies or even considered outside the law if processed with specific cryptographic techniques. For example, the USA PATRIOT Act provides the U.S. government and other agencies with virtually limitless powers to access information, including that belonging to any company that stores information in U.S. territory. Finally, existing enterprises that have large computing infrastructures or large installed bases of software do not simply want to switch to public clouds; they want to keep using their existing IT resources and optimize the return on them. All these aspects make the use of a public computing infrastructure not always possible. Yet the general idea supported by the cloud computing vision can still be attractive. More specifically, an infrastructure able to deliver IT services on demand can still be a winning solution, even when implemented within the private premises of an institution. This idea led to the diffusion of private clouds, which are similar to public clouds, but whose resource-provisioning model is limited to the boundaries of an organization.
This book comprehensively discusses the emergence of mobile cloud computing from cloud computing models. Various technological and architectural advancements in mobile and cloud computing are reported. It meticulously explores the design and architecture of computational offloading solutions in the cloud and mobile cloud computing domains to enrich the mobile user experience. Furthermore, to optimize mobile power consumption, existing solutions and policies toward green mobile computing, green cloud computing, green mobile networking, and green mobile cloud computing are briefly discussed. The book also presents numerous cloud and mobile resource allocation and management schemes to efficiently manage existing resources (hardware and software). Recently, integrated networks (e.g., WSN, VANET, MANET) have significantly helped mobile users to enjoy a suite of services. The book discusses existing architectures, opportunities, and challenges in integrating mobile cloud computing with existing network technologies such as sensor and vehicular networks. It also briefly expounds on various security and privacy concerns, such as application security, authentication security, data security, and intrusion detection, in the mobile cloud computing domain. The business aspects of mobile cloud computing models, in terms of resource pricing models, cooperation models, and revenue sharing among cloud providers, are also presented in the book. To highlight the standing of mobile cloud computing, various well-known, real-world applications supported by mobile cloud computing models are discussed. For example, the demands and issues involved in deploying resource-intensive applications, including face recognition, route tracking, traffic management, and mobile learning, are discussed. The book concludes with various future research directions in the mobile cloud computing domain to improve the strength of mobile cloud computing and to enrich the mobile user experience.
Enterprises often don't have the required expertise to build cloud-based solutions. The average medium-to-large company that has been in business for more than a few years typically has a collection of applications and services spanning multiple eras of application architecture, from mainframe to client-server to commercial off-the-shelf and more. The majority of the skills internally are specialized around these different architectures. Often the system administrators and security experts have spent a lifetime working on physical hardware or on-premises virtualization. Cloud architectures are loosely coupled and stateless, which is not how most legacy applications have been built over the years. Many cloud initiatives require integrating with multiple cloud-based solutions from other vendors, partners, and customers. The methods used to test and deploy cloud-based solutions may be radically different and more agile than what companies are accustomed to in their legacy environments. Companies making a move to the cloud should realize that there is more to it than simply deploying or paying for software from a cloud vendor. There are significant changes from an architectural, business process, and people perspective. Often, the skills required to do it right do not exist within the enterprise.
Virtualisation technology appeared several years ago; it comes in many types, all focusing on control and usage schemes that emphasise efficiency. This efficiency is seen as a single terminal being able to run multiple machines, or a single task running over multiple computers via idle computing power. Adoption within data centres and by service providers is increasing rapidly and encompasses different proprietary virtualisation technologies. Again, this lack of standardisation poses a barrier to an open-standards cloud that is interoperable with other clouds and can draw on a broad array of computing and information resources. Because the availability of user-requested resources is a crucial parameter for the adequacy of the service provided, one of the major deployments of the cloud application paradigm is the virtual data centre (VDC), utilised by service providers to enable a virtual infrastructure (Fig. 6.6) distributed across remotely hosted locations worldwide, providing accessibility and backup services and ensuring reliability in case of a single-site failure. In the case of resource saturation or resource dismissal, where a certain location-based resource cannot be accessed, the VDC claims the resource elsewhere in order to keep it available to potential requests/users. Additionally, services with globally assigned operations require faster response times, achieved by distributing workload requests to multiple VDCs using certain scheduling and load-balancing methodologies. Therefore, as an optimal approach to resource availability, a k-rank model can be applied to rank the requests and resources and create outsourcing 'connectivity' to potential requests.
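The ranking-and-outsourcing behaviour described above can be sketched briefly: requests go to the highest-ranked available VDC, and when a site is saturated or unreachable, the request is outsourced to the next-ranked site. The ranking criterion below (free capacity) is an illustrative stand-in for the cited k-rank model, and all class and function names are ours.

```python
# A hedged sketch of VDC request dispatch with ranking and outsourcing.
# Ranking by free capacity is an assumption, not the actual k-rank model.

class VDC:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.load = 0
        self.available = True  # False simulates a site failure

    def free(self):
        return self.capacity - self.load

def dispatch(request_cost, vdcs):
    """Pick the available VDC with the most free capacity; fall back
    to the next-ranked site if the best one cannot take the request."""
    ranked = sorted(
        (v for v in vdcs if v.available),
        key=lambda v: v.free(),
        reverse=True,
    )
    for vdc in ranked:
        if vdc.free() >= request_cost:
            vdc.load += request_cost
            return vdc.name
    return None  # all reachable sites saturated

sites = [VDC("eu-1", 10), VDC("us-1", 8), VDC("ap-1", 6)]
sites[0].available = False  # simulate a single-site failure
assigned = dispatch(5, sites)  # lands on the best remaining site
```

The fallback loop is the 'outsourcing connectivity' in miniature: a failed or saturated location never blocks a request as long as some ranked site can still serve it.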
With the ever-increasing prices of real estate, the virtual office has become an attractive phenomenon to many businesspeople. Companies can be located in a particular place while the employees conduct their day-to-day office work from different locations such as a hotel, café, or home. New technologies such as Web 2.0, wikis, chat, forums, tags, and RSS (Rich Site Summary/Really Simple Syndication) have enabled teams to maintain effective communication and content collaboration. These technologies have not only broken the barrier of physical infrastructure but have also provided the business agility to launch any business at low cost and expand existing businesses globally with minimal cost of operation. We are living and working in the twenty-first century, an era in which, as Thomas L. Friedman, the author of The World Is Flat: A Brief History of the Twenty-First Century, says, the playing field is being leveled. Globalization has removed all barriers, supported by technological innovation in the areas of information and communication technology (ICT) and transportation. This helped first with outsourced manufacturing and assembly in remote locations with low-cost labor supplies, primarily in India and China, to bring down the cost of assembly, manufacturing, or services in the areas of call centers, medical transcription, accounting, legal services, publication media, and films. Subsequently, business process outsourcing (BPO), or better defined as knowledge process outsourcing (KPO), evolved across all areas of jobs and at various locations, providing the business agility and efficiency needed to run business operations smoothly.
Abstract: Cloud computing in a narrow sense is technically different from the traditional server-client model, grid computing, etc. The systems architecture of the software systems involved in the delivery of cloud computing typically involves multiple cloud components communicating with each other over application programming interfaces, usually web services. Conceptually speaking, there isn't that much difference from other server-client models. Cloud computing is an Internet-based computing service provided by a third party that allows sharing of resources and data among devices. It is widely used in many organizations nowadays and is becoming more popular because it changes how the Information Technology (IT) of an organization is organized and managed. As the use of cloud computing services increases in this new era, the security issues of cloud computing become a challenge. This paper first lays out the architecture of cloud computing, then discusses the most common security issues of using the cloud and some solutions to them, since security is one of the most critical aspects of cloud computing due to the sensitivity of users' data.
Abstract: By using Internet technology, the cloud provides virtualized IT resources as a service. Cloud computing is a combination of grid computing and cluster computing. Using the Internet, a computer grid is created whose purpose is utilizing shared resources, such as computer software and hardware, on a pay-per-use model. The main motive of cloud computing is that you can access your data from any corner of the world using the Internet. Cloud computing is a general term for services delivered through the Internet: virtualized computing power and storage delivered via platform-agnostic infrastructures of abstracted hardware and software accessed over the Internet. Cloud computing systems usually follow deployment models such as public, private, hybrid, and community clouds.
If you look at the service stack from the top down, you can see some of the value the other layers provide. At the very top are business services such as Dun & Bradstreet, which provides analysis and insight into companies that you might potentially do business with. Other examples of business services are credit reporting and banking. Providing business services such as these requires data stores for storing data. However, a relational database by itself is not sufficient: the data retrieval and storage methods must be integrated into programs that provide user interfaces people can use. Relational databases also need to be maintained by database administrators who archive and back up data. This is where Platform as a Service comes in. Platform as a Service provides all the services that enable systems to run by themselves, including scaling, failover, performance tuning, and data retrieval. For example, the Salesforce Force.com platform provides a data store where your programs can store and retrieve data without you ever needing to worry about database or system administration tasks. It also provides a web site with graphical tools for defining and customizing data objects. IBM Workload Deployer is another Platform as a Service offering that runs on an Infrastructure as a Service cloud but is aware of the different software running on individual virtual machines; it can perform functions such as elastic scaling of application server clusters.
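What a platform data store frees the application from can be shown with a short sketch: the program below stores and queries records through a simple API and never touches database administration. The `DataStore` class and its methods are hypothetical, loosely inspired by platform data services such as Force.com's; they are not a real SDK.

```python
# An assumed, minimal stand-in for a platform-provided data service.
# Persistence, backup, and scaling would be the platform's job; the
# application only saves and queries objects.

class DataStore:
    def __init__(self):
        self._records = []

    def save(self, obj):
        # The platform would handle storage, archiving, and failover.
        self._records.append(dict(obj))
        return len(self._records) - 1  # record id

    def query(self, **filters):
        """Return all records whose fields match every filter."""
        return [
            r for r in self._records
            if all(r.get(k) == v for k, v in filters.items())
        ]

store = DataStore()
store.save({"name": "Acme", "rating": "AA"})
store.save({"name": "Globex", "rating": "B"})
matches = store.query(rating="AA")  # no DBA work required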
Lean agile development methodologies and the cloud model complement each other very well. Cloud services take pride in meeting user requirements rapidly, delivering applications whenever and to whatever extent they are needed. Agile methods give high credence to user collaboration in requirements discovery. The lean agile system of software development aims to break down project requirements into small, achievable segments. This approach guarantees user feedback on every task of the project. Segments can be planned, developed, and tested individually to maintain high-quality standards without any major bottlenecks. The development stage of every component thus becomes a single "iteration" process. Moreover, lean agile software methods place huge emphasis on developing a collaborative relationship between application developers and end users. The entire development process is transparent to the end user, feedback is sought at all stages of development, and the needed changes are made accordingly then and there. Using lean agile development in conjunction with the cloud paradigm provides a highly interactive and collaborative environment. The moment developers finalize a feature, they can push it as a cloud service; users can review it instantly and provide valuable feedback. Thus, a lengthy feedback cycle can be eliminated, reducing the probability of misstated or misunderstood requirements. This considerably curtails the time and effort of the software development organization while increasing end-user satisfaction. Following the lean agile approach of demand-driven production, end users' needs are integrated in a more cohesive and efficient manner with software delivery as cloud services. This approach stimulates and sustains a good amount of innovation, requirement discovery, and validation in cloud computing.
One of the critical questions for channel companies to answer is whether or not the cloud makes sense from an ROI perspective and, if so, in what capacity and in which customer scenarios. This basic "economics of the cloud" discussion has been front-and-center in the channel for the better part of the last three to five years. The conversation is complicated, due in large part to the wide variety of cloud business model options and potential revenue structures to explore, as well as differing customer needs. And yet, we are seeing solution providers move more decisively. Nearly 6 in 10 said they proactively pursued multiple segments of the various cloud business models in an attempt to quickly and comprehensively enter the cloud market, with medium and larger firms more likely to have gone this route than the smallest channel players (see Section 3 of this report for a detailed discussion of business models). As a result, a segment of companies have assembled quantifiable tracking metrics on revenue and profit margin, which can serve as a guidepost for channel companies moving more slowly into the cloud.
The scheduling scenario proceeds as follows: once a scheduling agent receives a task, it attaches the task to one of its service queues (see Fig. 7.5). Tasks are received either by negotiating with other agents or directly from a workflow agent. The negotiation protocol is similar to the one in Fig. 7.6 and uses the DMECT SA's relocation condition (Frincu, 2009a) as described in Section 7.5.2. Each service can execute at most k instances simultaneously; the variable k is equal to the number of processors inside the node pair. Once sent to a service, a task cannot be sent back to the agent unless explicitly specified in the scheduling heuristics. Tasks sent to services are scheduled inside the resource by the MinQL SA, which uses a simple load-balancing technique. Scheduling agents periodically query the service for completed tasks. Once one is found, the information it contains is used to return the result to the agent responsible for the workflow instance. This agent passes the information to the engine, which in turn passes the consequent set of tasks to the agent for scheduling. To simulate the cloud's heterogeneity in terms of capabilities, services offer different functionalities. In our case, services offer access to both CASs and image processing methods. As each CAS offers different functions for handling mathematical problems, so does the service exposing it. The same applies to the image processing services, which do not implement all the available methods on every service. An insight into how CASs with different capabilities can be exposed as services is given in (Petcu, Carstea, Macariu, & Frincu, 2008).
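The loop above (per-service queues, at most k simultaneous instances, periodic polling for completed tasks) can be sketched as follows. The class and method names are ours, and the DMECT relocation condition and MinQL internals are deliberately not reproduced; this is only the queueing skeleton.

```python
# An illustrative skeleton of the scheduling scenario: tasks are attached
# to service queues, each service runs at most k tasks at once (k = the
# number of processors in the node pair), and completions are polled.

from collections import deque

class Service:
    def __init__(self, name, k):
        self.name = name
        self.k = k              # max simultaneous instances
        self.queue = deque()    # tasks waiting on this service
        self.running = []

    def start_ready(self):
        # Admit queued tasks until k instances are running.
        while self.queue and len(self.running) < self.k:
            self.running.append(self.queue.popleft())

    def poll_completed(self):
        # Stand-in for the periodic query; assume everything running
        # finishes within one polling interval.
        done, self.running = self.running, []
        return done

class SchedulingAgent:
    def __init__(self, services):
        self.services = services

    def receive(self, task, capability):
        # Services are heterogeneous: attach the task to a service
        # offering the required capability, if one exists.
        for s in self.services:
            if capability in s.name:
                s.queue.append(task)
                return True
        return False  # would trigger negotiation with other agents

svc = Service("image-processing", k=2)
agent = SchedulingAgent([svc])
for t in ["t1", "t2", "t3"]:
    agent.receive(t, "image")
svc.start_ready()            # only k=2 tasks run simultaneously
first_batch = svc.poll_completed()
```

With k = 2, the third task stays queued until a running instance completes, which mirrors the per-node concurrency cap described in the text; the `False` branch of `receive` is where negotiation with other agents would begin.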
A cloud OS should provide the APIs that enable data and services interoperability across distributed cloud environments. Mature OSs provide a rich set of services to applications so that each application does not have to reinvent important functions such as VM monitoring, scheduling, security, power management, and memory management. In addition, APIs built on open standards help organizations avoid vendor lock-in and thereby create a more flexible environment. For example, linkages will be required to bridge traditional DCs and public or private cloud environments. The flexibility of moving data or information across these systems demands that the OS provide a secure and consistent foundation to reap the real advantages offered by cloud computing environments. The OS also needs to make sure the right resources are allocated to the requesting applications. This requirement is even more important in hybrid cloud environments. Therefore, any well-designed cloud environment must have well-defined APIs that allow an application or a service to be plugged into the cloud easily. These interfaces need to be based on open standards to protect customers from being locked into one vendor's cloud environment.
In 1997, Professor Ramnath Chellappa of Emory University defined cloud computing for the first time, while a faculty member at the University of Southern California, as an important new "computing paradigm where the boundaries of computing will be determined by economic rationale rather than technical limits alone." Even though the international IT literature and media have come forward since then with a large number of definitions, models, and architectures for cloud computing, autonomic and utility computing were the foundations of what the community commonly referred to as "cloud computing." In the early 2000s, companies started rapidly adopting the concept upon realizing that cloud computing could benefit both the providers and the consumers of services. Businesses started delivering computing functionality via the Internet: enterprise-level applications, web-based retail services, document-sharing capabilities, and fully hosted IT platforms, to mention only a few cloud computing use cases of the 2000s. The later widespread adoption of virtualization and of service-oriented architecture (SOA) established cloud computing as a fundamental and increasingly important part of any delivery and mission-critical strategy, enabling existing and new products and services to be offered and consumed more efficiently, conveniently, and securely. Not surprisingly, cloud computing became one of the hottest trends in the IT armory, with a unique and complementary set of properties such as elasticity, resiliency, rapid provisioning, and multi-tenancy.