Today, it can be safely argued that greater use of cloud computing services in organizations will NOT marginalize IT departments. On the contrary, the role and responsibilities of IT will only increase in the future. One thing looks certain: with the emergence of cloud computing, the fundamental relationship between IT departments and the businesses they support will change within organizations. IT departments, business groups, and third-party providers will need to collaborate and form an integral relationship. For its part, IT will need to step up to meet the new challenges this alliance brings, while maintaining existing infrastructure and operations.
The recent trend in computing technology has forced many companies to shift to the cloud. As business grows, the data generated is also increasing day by day. All of this data is stored on remote servers housed in data centers. In operation, there are around 8 million data centers around the world. These data centers consume a large amount of electricity and pollute the air by emitting large amounts of carbon dioxide. They also generate a lot of heat, so a large number of cooling systems is required to cool them, which in turn consumes even more electricity. This has a negative effect on our environment and also increases operational costs. These issues have become a major concern for cloud service providers, and the environmental impact of data centers is now a challenge for many of them. Companies like Apple and Facebook have already started implementing methods to reduce their carbon footprint. The ultimate aim of green cloud computing is to reduce both the negative effects of data centers on the environment and their operational costs. In this research, firstly, the importance of and need for the green cloud is discussed; secondly, the proposed techniques and the proposed green cloud framework are discussed.
Cloud computing is one of the emerging technologies in the present scenario. Storing and retrieving of data are done on remote servers. The data may contain financial information, business information, and personal data. These data are stored and managed by third-party service providers. Amazon Web Services (AWS), IBM Cloud, and Google are some of the cloud service providers.
Both cloud computing and SOA share some core principles. First, both rely on the service concept to achieve their objectives. A service is a functionality or a feature offered by one entity and used by another. For example, a service could be retrieving the details of a user's online bank account. SOA and cloud computing use service delegation, in that the required task is delegated either to a service provider (in the case of cloud computing) or to other application or business components in the enterprise (in the case of SOA). Service delegation lets people use services without being concerned about implementation and maintenance details. Services can be shared by multiple applications and users, thereby achieving optimized resource utilization. Second, both cloud computing and SOA promote loose coupling among components or services, which ensures minimal dependencies among different parts of the system. This feature reduces the impact that any single change to one part of the system has on the performance of the overall system. Loose coupling keeps the implemented services separate from, and unaware of, the underlying technology, topology, life cycle, and organization. The various formats and protocols used in distributed computing, such as XML, WSDL, Interface Description Language (IDL), and Common Data Representation (CDR), help to encapsulate the technology differences and heterogeneity among the various components combined into a business solution. Services should be location and technology independent in cloud computing, and SOA can be used to achieve this transparency in the cloud domain.
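The loose-coupling principle described above can be illustrated with a minimal sketch (all class and function names here are hypothetical, chosen for illustration): the client depends only on an abstract service contract, so the concrete provider, whether an in-process component or a cloud-hosted one, can be swapped without changing the client.

```python
from abc import ABC, abstractmethod

class AccountService(ABC):
    """Abstract service contract; clients depend only on this interface."""
    @abstractmethod
    def get_balance(self, account_id: str) -> float: ...

class LocalAccountService(AccountService):
    """In-process implementation (an SOA-style enterprise component)."""
    def __init__(self, accounts: dict):
        self._accounts = accounts
    def get_balance(self, account_id: str) -> float:
        return self._accounts[account_id]

class CloudAccountService(AccountService):
    """Stand-in for a cloud-hosted provider; a real one would call a remote API."""
    def get_balance(self, account_id: str) -> float:
        return 100.0  # placeholder for a remote lookup

def report_balance(service: AccountService, account_id: str) -> str:
    # The client is unaware of the implementation, topology, or technology.
    return f"Balance for {account_id}: {service.get_balance(account_id):.2f}"

print(report_balance(LocalAccountService({"alice": 42.0}), "alice"))
print(report_balance(CloudAccountService(), "alice"))
```

Because `report_balance` is written against the interface, swapping `LocalAccountService` for `CloudAccountService` requires no change to the calling code, which is precisely the dependency-minimizing effect loose coupling aims for.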
Cloud computing is the set of disciplines, technologies, and business models used to deliver IT capabilities (software, hardware, people) as an on-demand, scalable, elastic service (CloudComputing, 2011). Vaquero et al. (2009) analyzed 22 definitions of cloud computing and suggested that clouds are a large pool of easily usable and accessible virtualized resources (hardware, development platforms, and/or services). In a cloud computing environment, the traditional role of the provider organization has split into two: infrastructure providers, who manage cloud platforms and lease resources according to a usage-based pricing model, and service providers, who rent resources from one or more infrastructure providers to serve end customers. The entire cloud computing architecture rests on three building blocks stacked one over the other. At the bottom is Infrastructure-as-a-Service (IaaS), which provides fundamental hardware components (Central Processing Units (CPUs), memory, and storage). The second is PaaS, which gives software developers a platform for creating, testing, deploying, and hosting web applications, and at the top is Software-as-a-Service, which provides ready-to-use applications to organizations. Cloud computing services are proving to be an asset to organizations. According to cloud service providers, the business of cloud computing will increase many fold in the near future. Cloud computing can possibly become a leader in delivering a secure, virtual, and economically viable IT solution in the future (Nazir, 2012). Rimal et al. (2009) suggested that cloud computing has been considered a technology that is gaining momentum at a very rapid rate. Ta-Tao et al. (2015) expressed that it has enabled organizations to respond to clients.
Mason and George (2011) stated that the advancement of cloud computing may significantly affect the collection and retention of digital evidence. According to Kumar and Goudar (2012), cloud computing could impact enterprises within a few years, as it can fundamentally change IT.
• Greening initiatives. Companies are increasingly looking for ways to reduce the amount of energy they consume and to reduce their carbon footprint. Data centers are among the major power consumers; they contribute considerably to the impact a company has on the environment. Maintaining a data center operation not only involves keeping servers on; a great deal of energy is also consumed in keeping them cool. Cooling infrastructure has a significant impact on the carbon footprint of a data center. Hence, reducing the number of servers through server consolidation will definitely reduce the cooling impact and power consumption of a data center. Virtualization technologies provide an efficient way of consolidating servers.
• Rise of administrative costs. Power consumption and cooling costs have now become higher than the cost of the IT equipment itself. Moreover, the increased demand for additional capacity, which translates into more servers in a data center, is also responsible for a significant increase in administrative costs. Computers, and servers in particular, do not operate all on their own; they require care and feeding from system administrators. Common system administration tasks include hardware monitoring, defective hardware replacement, server setup and updates, server resource monitoring, and backups. These are labor-intensive operations, and the higher the number of servers that have to be managed, the higher the administrative costs. Virtualization can help reduce the number of servers required for a given workload, thus reducing the cost of administrative personnel.
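The effect of consolidation on power draw can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: the server wattage, VM density, and PUE (power usage effectiveness, a standard ratio that folds cooling overhead into the total) are assumed figures, not measurements.

```python
import math

def consolidation_savings(n_physical: int, watts_per_server: float,
                          vms_per_host: int, pue: float = 1.6):
    """Estimate facility power before/after consolidating n_physical servers
    onto virtualization hosts that each run vms_per_host VMs.
    PUE multiplies IT load to account for cooling and distribution losses."""
    hosts_needed = math.ceil(n_physical / vms_per_host)
    before = n_physical * watts_per_server * pue   # one workload per server
    after = hosts_needed * watts_per_server * pue  # consolidated hosts only
    return before, after, 1 - after / before

before, after, saved = consolidation_savings(100, 400, 10)
print(f"before={before:.0f} W, after={after:.0f} W, saved={saved:.0%}")
```

Under these assumed numbers, collapsing 100 lightly loaded servers onto 10 virtualization hosts cuts facility draw by roughly 90%, and the same PUE factor means the cooling load shrinks proportionally. Real savings are smaller because consolidated hosts run at higher utilization and draw more watts each, which this sketch deliberately ignores.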
as its stack, it might not have been able to achieve the scalability that it achieved on AWS. This by no means is a knock on Google or a declaration that AWS is any better than Google. Simply put, for scaling requirements like Instagram’s, an IaaS provider is a better choice than a PaaS. PaaS providers have thresholds that they enforce within the layers of their architecture to ensure that one customer does not consume so many resources that it impacts the overall platform, resulting in performance degradation for other customers. With IaaS, there are fewer limitations, and much higher levels of scalability can be achieved with the proper architecture. We will revisit this use case in Chapter 5. Architects must not let their loyalty to their favorite vendor get in the way of making the best possible business decision. A hammer may be the favorite tool of a home builder, but when he needs to turn screws he should use a screwdriver. Recommendation: Understand the differences between the three cloud service models: SaaS, PaaS, and IaaS. Know what business cases are best suited for each service model. Don’t choose cloud vendors based solely on the software stack that the developers use or based on the vendor that the company has been buying hardware from for years.
Cloud multimedia rendering as a service is a promising category that has the potential to significantly enhance the user's multimedia experience. Despite the growing capacities of mobile devices, there is a broadening gap with the increasing requirements of 3D and multiview rendering techniques. Cloud multimedia rendering can bridge this gap by conducting rendering in the cloud instead of on the mobile device. Therefore, it potentially allows mobile users to experience multimedia with the same quality available to high-end PC users. To address the challenges of low cloud cost, limited network bandwidth, and high scalability, Wang et al. proposed a rendering adaptation technique that can dynamically vary the richness and complexity of graphic rendering depending on the network and server constraints, thereby affecting both the bit rate of the rendered video that needs to be streamed back from the cloud server to the mobile device and the computation load on the cloud servers. Zhu et al. emphasized that a cloud equipped with GPUs can perform rendering due to its strong computing capability. They categorized two types of cloud-based rendering: (1) conducting all the rendering in the cloud and (2) conducting only the computation-intensive part of the rendering in the cloud while the rest is performed on the client. More specifically, an MEC with a proxy can serve mobile clients with high QoE, since rendering (e.g., view interpolation) can be done in the proxy. Research challenges include how to efficiently and dynamically allocate rendering resources and how to design a proxy for assisting mobile phones with rendering computation.
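The rendering-adaptation idea can be sketched as a simple policy that lowers graphics richness when bandwidth or server load falls short. The levels and thresholds below are invented for illustration and are not taken from the cited work:

```python
def choose_render_level(bandwidth_mbps: float, server_load: float) -> int:
    """Pick a rendering richness level given network and server constraints.
    Levels: 2 = full 3D detail, 1 = reduced detail, 0 = baseline video.
    Thresholds are illustrative assumptions, not measured values."""
    if bandwidth_mbps >= 8 and server_load < 0.6:
        return 2  # plenty of bandwidth and spare GPU capacity: render rich
    if bandwidth_mbps >= 3 and server_load < 0.85:
        return 1  # degrade gracefully: lower bit rate and compute load
    return 0      # constrained: fall back to the cheapest stream

for bw, load in [(10, 0.3), (5, 0.7), (1, 0.9)]:
    print(bw, load, "->", choose_render_level(bw, load))
```

A production adapter would re-evaluate this decision continuously as measured bandwidth and server load change, which is what makes the rendered video's bit rate and the cloud's computation load adjustable at run time.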
Up to this section, we have covered cloud computing and its models. In this section we present security with regard to IaaS. Most admins will be comfortable and familiar with IaaS because it is similar to the work we do in data centres. We save on energy costs by deploying a server consolidation plan to reduce the physical server footprint in the data centre. After server consolidation, cloud features like self-service and automation are used. But before these features are actually used, various security implications of IaaS need to be considered. Security issues vary depending on whether we use a public cloud or a private cloud implementation of IaaS. With a private cloud, we have control over the solution from top to bottom. With IaaS in a public cloud, we control the VMs and the services running on those VMs. For both scenarios, we consider the following security issues:
Cloud computing revolutionizes the way the Internet is used by providing everything as a service (EaaS) on a pay-per-usage basis. Even though the cloud offers a multitude of benefits to individuals and organizations, it is at high risk of attack, and one attack that can cause a major breach in security is the DoS or DDoS attack. Distributed Denial of Service attacks present some of the biggest challenges to researchers in the field of network security, and they have already taken a heavy toll on many Internet-based service providers around the world. There has been a significant amount of work on tackling such DoS attacks with different kinds of detection methods. In this paper, we study four major DoS detection approaches that are being considered by experts in this field. It would be a hard task to discuss every previously published work in this field; that is why we have kept the scope of the paper limited to categorizing the existing approaches.
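A common ingredient of DoS detection methods is traffic-rate anomaly detection. As a minimal sketch, assuming invented window and threshold values rather than any of the surveyed approaches, a sliding window per source flags any sender whose request rate exceeds a limit:

```python
from collections import defaultdict, deque

class RateDetector:
    """Flag sources exceeding max_requests within window_sec.
    Parameters are illustrative; real detectors tune them per deployment."""
    def __init__(self, window_sec: float = 1.0, max_requests: int = 100):
        self.window = window_sec
        self.limit = max_requests
        self.hits = defaultdict(deque)  # src_ip -> recent request timestamps

    def observe(self, src_ip: str, timestamp: float) -> bool:
        q = self.hits[src_ip]
        q.append(timestamp)
        # Drop events that have aged out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit  # True => suspected DoS source

det = RateDetector(window_sec=1.0, max_requests=5)
flags = [det.observe("10.0.0.1", t / 10) for t in range(10)]
print(flags)  # the sixth and later requests inside one second are flagged
```

Simple rate thresholds catch crude floods but miss low-rate and distributed attacks, which is exactly why the literature also explores statistical, machine-learning, and protocol-anomaly detectors.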
As you are reading this chapter, you may have already noted the significance of business agility and the roadmap presented using the balanced scorecard in our previous chapters. We have attempted to lay a sound foundation to define various business stakeholders and how these technology forces have helped align chief information officers (CIOs), chief marketing officers (CMOs), and others in the current turbulent business environment. We have also put our best efforts into defining the Business Agility Readiness Roadmap with clear guideposts to help you make rapid progress in your business process optimization. We have made an effort to highlight cases of success and failure that would help you to identify clearly the stage you are at in your business and to transform your business with the impact of these technology forces. You have noticed that it is imperative to transform your current business model and processes to create business agility to survive and thrive in business irrespective of what business sector you are in today. It was okay to run a business independently just a few years ago, but no longer. We introduced and talked about business ecosystems, a concept that brings all stakeholders such as employees, customers, and partners (distributors, value-added resellers, systems integrators, independent consultants, investors, shareholders) to participate in your business to create business agility and to help your business succeed. Considering carefully, you would realize that all play a major role in your business, and none should be ignored or taken lightly.
Enterprises that move their IT to the cloud are likely to encounter challenges such as security, interoperability, and limits on their ability to tailor their ERP to their business processes. The cloud can be a revolutionary technology, especially for small start-ups, but the benefits wane for larger enterprises with more complex IT needs [10]. The cloud model can be truly disruptive if it can reduce the IT operational expenses of enterprises. Traditional utility services provide the same resource to all consumers. Perhaps the biggest difference between the cloud computing service and the traditional utility service models lies in the degree to which cloud services are uniquely and dynamically configured for the needs of each application and class of users [12]. Cloud computing services are built from a common set of building blocks, equivalent to an electricity provider's turbines, transformers, and distribution cables. Cloud computing does, however, differ from traditional utilities in several critical respects. Cloud providers compete aggressively with differentiated service offerings, service levels, and technologies. Because traditional ERP is installed on your servers and you actually own the software, you can do with it as you please. You may decide to customize it, integrate it with other software, etc. Although any ERP software will allow you to configure and set up the software the way you would like, "Software as a Service" or "SaaS" is generally less flexible than traditional ERP in that you cannot completely customize or rewrite the software. Conversely, since SaaS cannot be customized, it reduces some of the technical difficulties associated with changing the software. Cloud services can be completely customized to the needs of the largest commercial users. Consequently, we have often referred to cloud computing as an "enhanced utility" [12].
Table 9.2 [5] shows the e-skills study for information and communications technology (ICT) practitioners conducted by the Danish Technology Institute [5] that describes the
Using an IaaS cloud, you can create a virtual machine without owning any of the virtualization software yourself. Instead, you can access the tools for creating and managing the virtual machine via a web portal. You do not even need the install image of the operating system; you can use a virtual machine image that someone else created previously. (Of course, that someone else probably has a lot of experience in creating virtual machine images, and the image most likely went through a quality process before it was added to the image catalog.) You might not even have to install any software on the virtual machine or make customizations yourself; someone else might have already created something you can leverage. You also do not need to own any of the compute resources to run the virtual machine yourself: everything is inside a cloud data center. You can access the virtual machine using secure shell or a remote graphical user interface tool, such as Virtual Network Computing (VNC) or Windows® Remote Desktop. When you are
The next layer within ITaaS is Platform as a Service, or PaaS. At the PaaS level, the service providers offer packaged IT capability, or logical resources, such as databases, file systems, and an application operating environment. Actual cases in the industry currently include IBM's Rational Developer Cloud, Microsoft's Azure, and Google's App Engine. At this level, two core technologies are involved. The first is cloud-based software development, testing, and running. PaaS service is software-developer-oriented. It used to be a huge difficulty for developers to write programs over the network in a distributed computing environment; now, thanks to improved network bandwidth, two technologies can solve this problem. The first is online development tools: developers can complete remote development and deployment directly through browser and remote console technologies (the development tools run in the console) without installing development tools locally. The second is the integration of local development tools with cloud computing, which means deploying the developed application directly into the cloud computing environment through local development tools. The second core technology is a large-scale distributed application operating environment: scalable application middleware, databases, and file systems built on a large number of servers. This application operating environment enables an application to make full use of the abundant computing and storage resources in the cloud computing center, scale fully, go beyond the resource limitations of single physical hardware, and meet the access requirements of millions of Internet users.
There’s growing sentiment among many cloud experts that ultimately hybrid adoption will be most advantageous for many organizations. Warrilow says “for some time Gartner has advised that hybrid is the most likely scenario for most organizations.” Staten agrees with the notion for two reasons. First, “some applications and data sets simply aren’t a good fit with the cloud,” he says. This might be due to application architecture, degree of business risk (real or perceived), and cost, he says. Second, rather than making a cloud-or-no-cloud decision, “it’s more practical and effective to leverage the cloud for what makes the most sense and other deployment options where they make the most sense,” he says. In terms of strategy, Staten recommends regularly analyzing deployment decisions. “As cloud services mature, their applicability increases,” he says.
This clean and flexible architecture offered by SDN is extremely appealing for managing networks in a cloud environment. For example, VLAN technology, used in many cloud systems to keep multiple tenants isolated from each other, requires reconfiguration of network hardware every time a VM is instantiated or shut down. Manual configuration by network administrators logging in to every affected switch is impractical in a very dynamic cloud environment. Automation requires a good understanding of the command-line/web interfaces exposed by vendors and writing programs/scripts to parse those interfaces, which differ for each vendor and can change after a firmware upgrade. An open and standardized Northbound interface, illustrated in Fig. 3, will significantly simplify the integration of network functions in cloud middleware: (1) the cloud middleware consults its database to check which VMs (VM1, VM2, and VM3) belong to a particular tenant (Tenant_A) and where those VMs are running (the physical host and/or SDN switch to which each VM is connected); (2) the cloud middleware invokes an SDN Northbound API to create a VLAN (VLAN_A) and connect the tenant's VMs to the new VLAN; (3) the SDN controller computes the necessary Southbound instructions and contacts the affected SDN switches. Moreover, using SDN mechanisms it would be possible to implement VLAN-like functionality without the 4096-ID limit of the IEEE 802.1Q standard: for example, isolation can be enabled by allowing communication only among the media access control (MAC) addresses of a particular tenant (this would require the SDN controller to compute MAC-address-based rules to be placed on the switches).
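The three-step flow above can be sketched in Python. The inventory database and the Northbound API here are stand-ins written for illustration; no real SDN controller's API is assumed, and a real Northbound call would be an HTTP/REST request to the controller.

```python
class FakeInventory:
    """Stand-in for the cloud middleware database (step 1)."""
    def __init__(self):
        self.vms = {"VM1": ("Tenant_A", "switch1"),
                    "VM2": ("Tenant_A", "switch2"),
                    "VM3": ("Tenant_A", "switch1")}
    def vms_of(self, tenant):
        return [(vm, sw) for vm, (t, sw) in self.vms.items() if t == tenant]

class FakeNorthboundAPI:
    """Stand-in for an SDN controller's Northbound interface (steps 2-3)."""
    def __init__(self):
        self.vlans = {}
    def create_vlan(self, vlan_id):
        self.vlans.setdefault(vlan_id, set())
    def attach(self, vlan_id, vm, switch):
        # A real controller would compute Southbound rules for `switch` here.
        self.vlans[vlan_id].add((vm, switch))

def isolate_tenant(inventory, api, tenant, vlan_id):
    """Connect all of a tenant's VMs to one VLAN via the Northbound API."""
    api.create_vlan(vlan_id)
    for vm, switch in inventory.vms_of(tenant):  # lookup = step 1
        api.attach(vlan_id, vm, switch)          # API calls = step 2
    return sorted(vm for vm, _ in api.vlans[vlan_id])

inv, api = FakeInventory(), FakeNorthboundAPI()
print(isolate_tenant(inv, api, "Tenant_A", "VLAN_A"))  # ['VM1', 'VM2', 'VM3']
```

The point of the sketch is the division of labor: the middleware only reasons about tenants and VMs, while the (here faked) controller translates each Northbound call into per-switch Southbound instructions, so no vendor-specific CLI scraping is needed.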
Nearly equal in significance are the rest of the challenges cited by channel firms making the move to cloud, with most of those hurdles centered on financial decisions. Initial start-up costs, for example, can be minimal or quite large, depending on whether or not they involve building a data center to provide cloud services. Interestingly, the largest channel firms cited this as a major challenge, though they are the most likely to have the deeper pockets needed to outfit a new data center if they do not already have one. Meanwhile, cash flow and other financial considerations ranked highest among channel firms (63%) involved in all four types of cloud business models outlined in this study. This suggests that the level of commitment they have made to cloud has complicated financial fundamentals; one example would be the effects of a decreased reliance on legacy revenue streams, which in the short term could create cash flow concerns as they ramp up cloud sales.