Communication Infrastructures for Cloud Computing

Hussein T. Mouftah

University of Ottawa, Canada

Burak Kantarci

University of Ottawa, Canada

A volume in the Advances in Systems Analysis, Software Engineering, and High Performance Computing (ASASEHPC) Book Series

Information Science Reference

Table of Contents

Preface xviii

Section 1

Core Networks and Cloud Computing

Chapter 1

Migration Path Towards Cloud-Aware Core Networks 1

Luis M. Contreras, Telefonica I+D, Spain

Victor Lopez, Telefonica I+D, Spain

Oscar Gonzalez de Dios, Telefonica I+D, Spain

Felipe Jimenez, Telefonica I+D, Spain

Juan Rodriguez, Telefonica I+D, Spain

Juan Pedro Fernandez-Palacios, Telefonica I+D, Spain

Chapter 2

Location and Provisioning Problems in Cloud Computing Networks 22

Federico Larumbe, Polytechnique Montreal, Canada

Brunilde Sanso, Polytechnique Montreal, Canada

Chapter 3

The Cloud Inside the Network: A Virtualization Approach to Resource Allocation 46

Joao Soares, University of Aveiro, Portugal & Portugal Telecom Inovação, Portugal

Romeu Monteiro, University of Aveiro, Portugal

Marcio Melo, University of Aveiro, Portugal & Portugal Telecom Inovação, Portugal

Susana Sargento, University of Aveiro, Portugal & Instituto de Telecomunicações, Portugal

Jorge Carapinha, Portugal Telecom Inovação, Portugal

Chapter 4

Dimensioning Resilient Optical Grid/Cloud Networks 73

Chris Develder, Ghent University, Belgium

Massimo Tornatore, Politecnico di Milano, Italy

M. Farhan Habib, University of California, Davis, USA

Brigitte Jaumard, Concordia University, Canada

Chapter 5

Design and Implementation of Optical Cloud Networks: Promises and Challenges 107

Walid Abdallah, University of Carthage, Tunisia

Noureddine Boudriga, University of Carthage, Tunisia

Section 2

Wired/Wireless Access Networks and Cloud Computing

Chapter 6

Communication Infrastructures in Access Networks 136

Syed Ali Haider, University of North Carolina at Charlotte, USA & National University of Sciences and Technology, Pakistan

M. Yasin Akhtar Raja, University of North Carolina at Charlotte, USA

Khurram Kazi, New York Institute of Technology, USA

Chapter 7

Cloud Radio Access Networks 163

Jordan Melzer, TELUS Communications, Canada

Chapter 8

Accelerating Mobile-Cloud Computing: A Survey 175

Tolga Soyata, University of Rochester, USA

He Ba, University of Rochester, USA

Wendi Heinzelman, University of Rochester, USA

Minseok Kwon, Rochester Institute of Technology, USA

Jiye Shi, UCB Pharma, UK

Section 3

Engineering of Cloud Data Centers

Chapter 9

Performance Evaluation of Cloud Data Centers with Batch Task Arrivals 199

Hamzeh Khazaei, Ryerson University, Canada

Jelena Misic, Ryerson University, Canada

Vojislav B. Misic, Ryerson University, Canada

Chapter 10

Energy-Efficient Optical Interconnects in Cloud Computing Infrastructures 224

Christoforos Kachris, Athens Information Technology, Greece

Ioannis Tomkos, Athens Information Technology, Greece

Chapter 11

Energy-Efficiency in Cloud Data Centers 241

Burak Kantarci, University of Ottawa, Canada

Hussein T. Mouftah, University of Ottawa, Canada

Chapter 12

Carrier-Grade Distributed Cloud Computing: Demands, Challenges, Designs, and Future Perspectives 264

Dapeng Wang, Alcatel-Lucent, China

Jinsong Wu, Alcatel-Lucent, China

Section 4

Energy-Efficiency in Cloud Communications

Chapter 13

Energy-Efficiency in a Cloud Computing Backbone 283

Burak Kantarci, University of Ottawa, Canada

Hussein T. Mouftah, University of Ottawa, Canada

Chapter 14

Towards Energy Efficiency for Cloud Computing Services 306

Daniele Tafani, Dublin City University, Ireland

Burak Kantarci, University of Ottawa, Canada

Hussein T. Mouftah, University of Ottawa, Canada

Conor McArdle, Dublin City University, Ireland

Liam P. Barry, Dublin City University, Ireland

Chapter 15

Towards Energy Sustainability in Federated and Interoperable Clouds 329

Antonio Celesti, University of Messina, Italy

Antonio Puliafito, University of Messina, Italy

Francesco Tusa, University of Messina, Italy

Massimo Villari, University of Messina, Italy

Chapter 16

Energy Efficient Content Distribution 351

Taisir El-Gorashi, University of Leeds, UK

Ahmed Lawey, University of Leeds, UK

Xiaowen Dong, Huawei Technologies Co., Ltd, China

Section 5

Applications and Security

Chapter 17

Virtual Machine Migration in Cloud Computing Environments: Benefits, Challenges, and Approaches 383

Raouf Boutaba, University of Waterloo, Canada

Qi Zhang, University of Waterloo, Canada

Mohamed Faten Zhani, University of Waterloo, Canada

Chapter 18

Communication Aspects of Resource Management in Hybrid Clouds 409

Luiz F. Bittencourt, University of Campinas (UNICAMP), Brazil

Edmundo R. M. Madeira, University of Campinas (UNICAMP), Brazil

Nelson L. S. da Fonseca, University of Campinas (UNICAMP), Brazil

Chapter 19

Scalability and Performance Management of Internet Applications in the Cloud 434

Wesam Dawoud, University of Potsdam, Germany

Ibrahim Takouna, University of Potsdam, Germany

Christoph Meinel, University of Potsdam, Germany

Chapter 20

Cloud Standards: Security and Interoperability Issues 465

Fabio Bracci, University of Bologna, Italy

Antonio Corradi, University of Bologna, Italy

Luca Foschini, University of Bologna, Italy

Compilation of References 496

About the Contributors 541

Index 556

Detailed Table of Contents

Preface xviii

Section 1

Core Networks and Cloud Computing

Chapter 1

Migration Path Towards Cloud-Aware Core Networks 1

Luis M. Contreras, Telefonica I+D, Spain

Victor Lopez, Telefonica I+D, Spain

Oscar Gonzalez de Dios, Telefonica I+D, Spain

Felipe Jimenez, Telefonica I+D, Spain

Juan Rodriguez, Telefonica I+D, Spain

Juan Pedro Fernandez-Palacios, Telefonica I+D, Spain

New services like Cloud Computing and Content Distribution Networks are changing telecom operator infrastructure. The creation of on-demand virtual machines or new services in the cloud reduces the utilization of resources among users but changes traditional static network provisioning. This chapter presents a network architecture to deal with this new scenario, called the "Cloud-Aware Core Network." A Cloud-Aware Core Network can request on-demand connectivity, so the network is configured based on the changing demands. Secondly, the network has to dynamically control its resources and take cloud information into account in the network configuration process. The Cloud-Aware Core Network is based on an elastic data and control plane, which can interact with multiple network technologies and cloud services.

Chapter 2

Location and Provisioning Problems in Cloud Computing Networks 22

Federico Larumbe, Polytechnique Montreal, Canada

Brunilde Sanso, Polytechnique Montreal, Canada

This chapter addresses a set of optimization problems that arise in cloud computing regarding the location and resource allocation of the cloud computing entities: the data centers, servers, software components, and virtual machines. The first problem is the location of new data centers and the selection of current ones since those decisions have a major impact on the network efficiency, energy consumption, Capital Expenditures (CAPEX), Operational Expenditures (OPEX), and pollution. The chapter also addresses the Virtual Machine Placement Problem: which server should host which virtual machine. The number of servers used, the cost, and energy consumption depend strongly on those decisions. Network traffic between VMs and users, and between VMs themselves, is also an important factor in the Virtual Machine Placement Problem. The third problem presented in this chapter is the dynamic provisioning of VMs to clusters, or auto scaling, to minimize the cost and energy consumption while satisfying the Service Level Agreements (SLAs). This important feature of cloud computing requires predictive models that precisely anticipate workload dimensions. For each problem, the authors describe and analyze models that have been proposed in the literature and in the industry, explain advantages and disadvantages, and present challenging future research directions.
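To give a concrete flavour of the formulations the chapter analyses, the Virtual Machine Placement Problem can be sketched as a bin-packing-style integer program. This is a minimal illustration with notation introduced here (it is not the authors' exact model): $x_{ij}=1$ if VM $i$ is hosted on server $j$, and $y_j=1$ if server $j$ is powered on.

```latex
\begin{align*}
\min\quad & \textstyle\sum_{j} P_j\, y_j
  && \text{total power of active servers}\\
\text{s.t.}\quad & \textstyle\sum_{j} x_{ij} = 1 \quad \forall i
  && \text{every VM is placed exactly once}\\
& \textstyle\sum_{i} r_i\, x_{ij} \le C_j\, y_j \quad \forall j
  && \text{capacity only on powered-on servers}\\
& x_{ij},\, y_j \in \{0,1\}
\end{align*}
```

Here $P_j$, $C_j$, and $r_i$ stand for server power, server capacity, and VM resource demand. Richer models of the kind discussed in the chapter also account for inter-VM and VM-to-user traffic, typically by adding a communication-cost term to the objective.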

Chapter 3

The Cloud Inside the Network: A Virtualization Approach to Resource Allocation 46

Joao Soares, University of Aveiro, Portugal & Portugal Telecom Inovação, Portugal

Romeu Monteiro, University of Aveiro, Portugal

Marcio Melo, University of Aveiro, Portugal & Portugal Telecom Inovação, Portugal

Susana Sargento, University of Aveiro, Portugal & Instituto de Telecomunicações, Portugal

Jorge Carapinha, Portugal Telecom Inovação, Portugal

The access infrastructure to the cloud is usually a major drawback that limits the uptake of cloud services. Attention has turned to rethinking a new architectural deployment of the overall cloud service delivery. In this chapter, the authors argue that it is not sufficient to integrate the cloud domain with the operator's network domain based on the current models. They envision a full integration of cloud and network, where cloud resources are no longer confined to a data center but are spread throughout the network and owned by the network operator. In such an environment, challenges arise at different levels, such as in resource management, where both cloud and network resources need to be managed in an integrated approach. The authors particularly address the resource allocation problem through joint virtualization of network and cloud resources by studying and comparing both an Integer Linear Programming formulation of the problem and a heuristic algorithm.

Chapter 4

Dimensioning Resilient Optical Grid/Cloud Networks 73

Chris Develder, Ghent University, Belgium

Massimo Tornatore, Politecnico di Milano, Italy

M. Farhan Habib, University of California, Davis, USA

Brigitte Jaumard, Concordia University, Canada

Optical networks play a crucial role in the provisioning of grid and cloud computing services. Their high bandwidth and low latency characteristics effectively enable universal user access to computational and storage resources, which can thus be fully exploited without limiting performance penalties. Given the rising importance of such cloud/grid services hosted in (remote) data centers, the various users (ranging from academics, over enterprises, to non-professional consumers) are increasingly dependent on the network connecting these data centers, which must be designed to ensure maximal service availability, i.e., minimizing interruptions. In this chapter, the authors outline the challenges encompassing the design, i.e., dimensioning, of large-scale backbone (optical) networks interconnecting data centers. This amounts to extensions of the classical Routing and Wavelength Assignment (RWA) algorithms to so-called anycast RWA, but also pertains to jointly dimensioning not just the network but also the data center resources (i.e., servers). The authors specifically focus on resiliency, given the criticality of the grid/cloud infrastructure in today's businesses, and, for highly critical services, they also include specific design approaches to achieve disaster resiliency.
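In simplified form (with notation introduced here purely for illustration), the anycast extension mentioned above replaces the fixed destination of classical RWA with a server-selection variable: for each request $s$ with candidate data-centre set $D_s$,

```latex
\sum_{d \in D_s} z_{sd} = 1, \qquad z_{sd} \in \{0,1\},
```

and a working lightpath is routed from the source of $s$ to the chosen data centre $d$. Resilient variants add a disjoint backup path, possibly terminating at a different data centre, so that a single link or node failure, or even the loss of an entire site in a disaster, can be survived.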

Chapter 5

Design and Implementation of Optical Cloud Networks: Promises and Challenges 107

Walid Abdallah, University of Carthage, Tunisia

Noureddine Boudriga, University of Carthage, Tunisia

Cloud applications have witnessed a significant increase in their development and deployment. This has been driven by the low cost and high performance that the cloud paradigm can offer enterprises implementing innovative services. However, cloud services are constrained by the available transmission rate and the amount of data volume transfers provided by current networking technologies. Optical networks can play a key role in deploying clouds with enhanced performance, thanks to the high bandwidth and the very low latency provided by optical transmission. Nevertheless, the implementation of optical cloud networks faces many challenges and obstacles, such as the user-driven service nature of cloud applications, resource virtualization, and service abstraction and control. This chapter addresses the design and the implementation of optical cloud networks. Therefore, different issues related to the integration of the cloud platform into the optical networking infrastructure are described. Then, current progress achieved to overcome these challenges is presented. Finally, some open issues and research opportunities are discussed.

Section 2

Wired/Wireless Access Networks and Cloud Computing

Chapter 6

Communication Infrastructures in Access Networks 136

Syed Ali Haider, University of North Carolina at Charlotte, USA & National University of Sciences and Technology, Pakistan

M. Yasin Akhtar Raja, University of North Carolina at Charlotte, USA

Khurram Kazi, New York Institute of Technology, USA

Access networks are usually termed "last-mile/first-mile" networks since they connect the end user with the metro-edge network (or the exchange). This connectivity is often at data rates that are significantly slower than the data rates available at metro and core networks. Metro networks span large cities, and core networks connect cities or bigger regions together by forming a backbone network on which traffic from an entire city is transported. With the industry achieving up to 400 Gbps data rates at core networks (and increasing those rates [Reading, 2013]), it is critical to have high-speed access networks that can cope with the tremendous bandwidth opportunity and not act as a bottleneck. The opportunity lies in enabling services that can be of benefit to consumers as well as large organizations. For instance, moving institutional/personal data to the cloud will require a high-speed access network that can overcome delays incurred during upload and download of information. Cloud-based services, such as computing and storage services, are further enhanced with the availability of such high-speed access networks. Access networks have evolved over time, and the industry is constantly looking for ways to improve their capacity. Therefore, an understanding of the fundamental technologies involved in wired and wireless access networks will help the reader appreciate the full potential of the cloud and cloud access. Against the same backdrop, this chapter aims at providing an understanding of the evolution of access technologies that enable the tremendous mobility potential of cloud-based services in the contemporary cloud paradigm.

Chapter 7

Cloud Radio Access Networks 163

Jordan Melzer, TELUS Communications, Canada

Radio virtualization and cloud signal processing are new approaches to building cellular Radio Access Networks (RAN) that are starting to be deployed within the cellular industry. For cellular operators, Cloud RAN architectures that centrally define or decode transmissions, placing most of the base-station software stack within a data-centre, promise improvements in flexibility and performance. The expected benefits range from standard cloud economies—statistical reductions in total processing, energy efficiency, cost reductions, simplified maintenance—to dramatic changes in the functionality of the radio network, such as simplified network sharing, capacity increases towards theoretical limits, and software defined radio inspired air interface flexibility. Because cellular networks have, in addition to complex protocols, extremely sensitive timing constraints and often high data-rates, the design challenges are formidable. This chapter presents the state of the art, hybrid alternatives, and directions for making Cloud Radio Access Networks more widely deployable.

Chapter 8

Accelerating Mobile-Cloud Computing: A Survey 175

Tolga Soyata, University of Rochester, USA

He Ba, University of Rochester, USA

Wendi Heinzelman, University of Rochester, USA

Minseok Kwon, Rochester Institute of Technology, USA

Jiye Shi, UCB Pharma, UK

With the recent advances in cloud computing and the capabilities of mobile devices, the state-of-the-art of mobile computing is at an inflection point, where compute-intensive applications can now run on today's mobile devices with limited computational capabilities. This is achieved by using the communications capabilities of mobile devices to establish high-speed connections to vast computational resources located in the cloud. While the execution scheme based on this mobile-cloud collaboration opens the door to many applications that can tolerate response times on the order of seconds and minutes, it proves to be an inadequate platform for running applications demanding real-time response within a fraction of a second. In this chapter, the authors describe the state-of-the-art in mobile-cloud computing as well as the challenges faced by traditional approaches in terms of their latency and energy efficiency. They also introduce the use of cloudlets as an approach for extending the utility of mobile-cloud computing by providing compute and storage resources accessible at the edge of the network, both for end processing of applications as well as for managing the distribution of applications to other distributed compute resources.

Section 3

Engineering of Cloud Data Centers

Chapter 9

Performance Evaluation of Cloud Data Centers with Batch Task Arrivals 199

Hamzeh Khazaei, Ryerson University, Canada

Jelena Misic, Ryerson University, Canada

Vojislav B. Misic, Ryerson University, Canada

Accurate performance evaluation of cloud computing resources is a necessary prerequisite for ensuring that Quality of Service (QoS) parameters remain within agreed limits. In this chapter, the authors consider cloud centers with Poisson arrivals of batch task requests under a total rejection policy; task service times are assumed to follow a general distribution. They describe a new approximate analytical model for performance evaluation of such systems and show that important performance indicators such as mean request response time, waiting time in the queue, queue length, blocking probability, probability of immediate service, and probability distribution of the number of tasks in the system can be obtained in a wide range of input parameters.
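As a rough illustration of the kind of system being modelled, the following sketch estimates the blocking probability and mean response time by Monte Carlo simulation for an m-server centre with Poisson batch arrivals, a finite buffer, total batch rejection, and generally distributed (here lognormal) service times. It is a simulation under assumed parameter values, not the authors' analytical model:

```python
import heapq
import math
import random
from collections import deque

def simulate(lam=2.0, batch_pmf=((1, 0.5), (2, 0.3), (3, 0.2)),
             mean_service=1.0, m=10, r=20, n_batches=200_000, seed=1):
    """Monte Carlo estimate of task blocking probability and mean response
    time for an m-server cloud centre with Poisson batch arrivals, a buffer
    of size r, total batch rejection, and lognormal service times.
    All parameter values here are illustrative, not taken from the chapter."""
    rng = random.Random(seed)
    sizes = [s for s, _ in batch_pmf]
    weights = [p for _, p in batch_pmf]
    sigma = 0.5                                   # lognormal shape (assumed)
    mu = math.log(mean_service) - sigma ** 2 / 2  # gives E[service] = mean_service

    busy = []          # min-heap of departure times of tasks in service
    waiting = deque()  # arrival times of tasks queued in FCFS order
    t = 0.0
    served = blocked = 0
    total_response = 0.0

    for _ in range(n_batches):
        t += rng.expovariate(lam)                 # next batch arrival epoch
        # process departures that occurred before this arrival,
        # back-filling freed servers from the FCFS queue
        while busy and busy[0] <= t:
            dep = heapq.heappop(busy)
            if waiting:
                arr = waiting.popleft()
                s = rng.lognormvariate(mu, sigma)
                heapq.heappush(busy, dep + s)
                total_response += (dep + s) - arr
                served += 1
        k = rng.choices(sizes, weights)[0]        # batch size
        if len(busy) + len(waiting) + k > m + r:  # total rejection policy
            blocked += k
            continue
        for _ in range(k):
            if len(busy) < m:                     # task starts service at once
                s = rng.lognormvariate(mu, sigma)
                heapq.heappush(busy, t + s)
                total_response += s
                served += 1
            else:                                 # task waits in the queue
                waiting.append(t)

    arrived = served + blocked + len(waiting)
    return blocked / arrived, total_response / served

if __name__ == "__main__":
    pb, resp = simulate()
    print(f"task blocking probability ~ {pb:.4f}")
    print(f"mean response time        ~ {resp:.3f}")
```

The appeal of the approximate analytical model described in the chapter is that it yields these same indicators directly, without resorting to simulation, across a wide range of input parameters.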

Chapter 10

Energy-Efficient Optical Interconnects in Cloud Computing Infrastructures 224

Christoforos Kachris, Athens Information Technology, Greece

Ioannis Tomkos, Athens Information Technology, Greece

This chapter discusses the rise of optical interconnection networks in cloud computing infrastructures as a novel alternative to current networks based on commodity switches. Optical interconnects can significantly reduce the power consumption and meet future network traffic requirements. Additionally, this chapter presents some of the most recent and promising optical interconnect architectures for high-performance data centers that have appeared in the research literature. Furthermore, it presents a qualitative categorization of these schemes based on their main features, such as performance, connectivity, and scalability, and discusses how these architectures could provide green cloud infrastructures with reduced power consumption. Finally, the chapter presents a case study of an optical interconnection network that is based on high-bandwidth optical OFDM links and shows the reduction in energy consumption that it can achieve in a typical data center.

Chapter 11

Energy-Efficiency in Cloud Data Centers 241

Burak Kantarci, University of Ottawa, Canada

Hussein T. Mouftah, University of Ottawa, Canada

Cloud computing aims to migrate IT services to distant data centers in order to reduce the dependency of the services on the limited local resources. Cloud computing provides access to distant computing resources via Web services while the end user is not aware of how the IT infrastructure is managed. Besides the novelties and advantages of cloud computing, the deployment of a large number of servers and data centers introduces the challenge of high energy consumption. Additionally, the transportation of IT services over the Internet backbone compounds the energy consumption problem of the backbone infrastructure. In this chapter, the authors cover energy-efficient cloud computing studies in the data center involving various aspects such as the reduction of processing, storage, and data center network-related power consumption. They first provide a brief overview of the existing approaches on cool data centers, which can be mainly grouped as studies on virtualization techniques, energy-efficient data center network design schemes, and studies that monitor data center thermal activity by Wireless Sensor Networks (WSNs). The authors also present solutions that aim to reduce energy consumption in data centers by considering the communications aspects over the backbone of large-scale cloud systems.

Chapter 12

Carrier-Grade Distributed Cloud Computing: Demands, Challenges, Designs, and Future Perspectives 264

Dapeng Wang, Alcatel-Lucent, China

Jinsong Wu, Alcatel-Lucent, China

This chapter discusses and surveys the concepts, demands, requirements, solutions, opportunities, challenges, and future perspectives and potential of Carrier Grade Cloud Computing (CGCC). This chapter also introduces a carrier-grade distributed cloud computing architecture and discusses the benefits and advantages of carrier-grade distributed cloud computing. Unlike independent cloud service providers, telecommunication operators may integrate their conventional communications networking capabilities with the new cloud infrastructure services to provide inexpensive and high-quality cloud services, together with their deep understanding of, and strong relationships with, individual and enterprise customers. The relevant design requirements and challenges may include performance, scalability, service-level agreement management, security, network optimization, and unified management. The relevant key issues in CGCC designs may include cost-effective hardware and software configurations, distributed infrastructure deployment models, and operation processes.

Section 4

Energy-Efficiency in Cloud Communications

Chapter 13

Energy-Efficiency in a Cloud Computing Backbone 283

Burak Kantarci, University of Ottawa, Canada

Hussein T. Mouftah, University of Ottawa, Canada

Cloud computing combines the advantages of several computing paradigms and introduces ubiquity in the provisioning of services such as software, platform, and infrastructure. Data centers, as the main hosts of cloud computing services, accommodate thousands of high-performance servers and high-capacity storage units. Offloading the local resources increases the energy consumption of the transport network and the data centers, although it is advantageous in terms of the energy consumption of the end hosts. This chapter presents a detailed survey of the existing mechanisms that aim at designing the Internet backbone with data centers with the objective of energy-efficient delivery of cloud services. The survey is followed by a case study where Mixed Integer Linear Programming (MILP)-based provisioning models and heuristics are used to guarantee either minimum-delay or maximum-power-saving cloud services, where high-performance data centers are assumed to be located at the core nodes of an IP-over-WDM network. The chapter concludes by summarizing the surveyed schemes with a taxonomy including their pros and cons. The summary is followed by a discussion focusing on the research challenges and opportunities.
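A typical power objective in such IP-over-WDM formulations, shown here only as a generic sketch with illustrative notation rather than the exact model of the case study, sums the consumption of router ports, transponders, amplifiers, and the data centres themselves:

```latex
\min\;\; \sum_{i} P_{\mathrm{port}}\, Q_i
      \;+\; \sum_{(i,j)} P_{\mathrm{tr}}\, W_{ij}
      \;+\; \sum_{(i,j)} P_{\mathrm{amp}}\, A_{ij}\, F_{ij}
      \;+\; \sum_{d} P_{\mathrm{DC}}(\lambda_d)
```

where $Q_i$ is the number of router ports at node $i$, $W_{ij}$ the wavelength channels on link $(i,j)$, $A_{ij}$ and $F_{ij}$ the amplifiers per fibre and fibres on that link, and $\lambda_d$ the demand served by data centre $d$. The delay-oriented variant keeps the same anycast routing constraints but minimizes the aggregate propagation delay of the provisioned demands instead.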

Chapter 14

Towards Energy Efficiency for Cloud Computing Services 306

Daniele Tafani, Dublin City University, Ireland

Burak Kantarci, University of Ottawa, Canada

Hussein T. Mouftah, University of Ottawa, Canada

Conor McArdle, Dublin City University, Ireland

Liam P. Barry, Dublin City University, Ireland

Over the past decade, the increasing complexity of data-intensive cloud computing services, along with the exponential growth of their demands in terms of computational resources and communication bandwidth, has presented significant challenges to be addressed by the scientific research community. Relevant concerns have specifically arisen for the massive amount of energy necessary for operating, connecting, and maintaining the thousands of data centres supporting cloud computing services, as well as for their drastic impact on the environment in terms of increased carbon footprint. This chapter provides a survey of the most popular energy-conservation and "green" technologies that can be applied at the data centre and network level in order to overcome these issues. After introducing the reader to the general problem of energy consumption in cloud computing services, the authors illustrate the state-of-the-art strategies for the development of energy-efficient data centres; specifically, they discuss principles and best practices for energy-efficient data centre design, focusing on hardware, power supply specifications, and cooling infrastructure. The authors further consider the problem from the perspective of network energy consumption, analysing several approaches achieving power efficiency for access and core networks. Additionally, they provide insight into recent developments in energy-efficient virtual machine placement and dynamic load balancing. Finally, the authors conclude the chapter by presenting the reader with novel research work on the establishment of energy-efficient lightpaths in computational grids.

Chapter 15

Towards Energy Sustainability in Federated and Interoperable Clouds 329

Antonio Celesti, University of Messina, Italy

Antonio Puliafito, University of Messina, Italy

Francesco Tusa, University of Messina, Italy

Massimo Villari, University of Messina, Italy

Cloud federation is paving the way toward new business scenarios in which it is possible to enforce more flexible energy management strategies than in the past. Considering independent cloud providers, each one is exclusively bound to the specific energy supplier powering its datacenter. The situation radically changes if we consider a federation of cloud providers powered by both a conventional energy supplier and a renewable energy generator. In such a context, the opportune relocation of computational workload among providers can lead to a global energy sustainability policy for the whole federation. In this work, the authors investigate the advantages and issues for the achievement of such a sustainable environment.

Chapter 16

Energy Efficient Content Distribution 351

Taisir El-Gorashi, University of Leeds, UK

Ahmed Lawey, University of Leeds, UK

Xiaowen Dong, Huawei Technologies Co., Ltd, China

Jaafar Elmirghani, University of Leeds, UK & King Abdulaziz University, Saudi Arabia

In this chapter, the authors investigate the power consumption associated with content distribution networks. They study, through Mixed Integer Linear Programming (MILP) models and simulations, the optimization of data centre locations in a Client/Server (C/S) system over an IP over WDM network so as to minimize the network power consumption. The authors investigate the impact of the IP over WDM routing approach, the traffic profile, and the number of data centres. They also investigate how to replicate content of different popularity into multiple data centres and develop a novel routing algorithm, Energy-Delay Optimal Routing (EDOR), to minimize the power consumption of the network under replication while maintaining QoS. Furthermore, they investigate the energy efficiency of BitTorrent, the most popular Peer-to-Peer (P2P) content distribution application, and compare it to the C/S system. The authors develop an MILP model to minimize the power consumption of BitTorrent over IP over WDM networks while maintaining its performance. The model results reveal that peer co-location awareness helps reduce BitTorrent cross traffic and consequently reduces the power consumption at the network side. For a real-time implementation, they develop a simple heuristic based on the model insights.

Section 5

Applications and Security

Chapter 17

Virtual Machine Migration in Cloud Computing Environments: Benefits, Challenges, and Approaches 383

Raouf Boutaba, University of Waterloo, Canada

Qi Zhang, University of Waterloo, Canada

Mohamed Faten Zhani, University of Waterloo, Canada

Recent developments in virtualization and communication technologies have transformed the way data centers are designed and operated by providing new tools for better sharing and control of data center resources. In particular, Virtual Machine (VM) migration is a powerful management technique that gives data center operators the ability to adapt the placement of VMs in order to better satisfy performance objectives, improve resource utilization and communication locality, mitigate performance hotspots, achieve fault tolerance, reduce energy consumption, and facilitate system maintenance activities. Despite these potential benefits, VM migration also poses new requirements on the design of the underlying communication infrastructure, such as addressing and bandwidth requirements to support VM mobility. Furthermore, devising efficient VM migration schemes is also a challenging problem, as it not only requires weighing the benefits of VM migration but also considering migration costs, including communication cost, service disruption, and management overhead. This chapter provides an overview of VM migration benefits and techniques and discusses its related research challenges in data center environments. Specifically, the authors first provide an overview of VM migration technologies used in production environments as well as the necessary virtualization and communication technologies designed to support VM migration. Second, they describe usage scenarios of VM migration, highlighting its benefits as well as incurred costs. Next, the authors provide a literature survey of representative migration-based resource management schemes. Finally, they outline some of the key research directions pertaining to VM migration and draw conclusions.
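A common way to frame this trade-off, sketched here with symbols introduced only for illustration, is to trigger a migration when its expected gain over the remaining placement horizon outweighs the one-off costs:

```latex
\bigl(g_{\mathrm{new}} - g_{\mathrm{old}}\bigr)\cdot T \;>\; C_{\mathrm{net}} + C_{\mathrm{disruption}} + C_{\mathrm{mgmt}}
```

where $g$ denotes the per-unit-time benefit of a placement (energy, SLA, or locality improvement), $T$ the expected time until the next re-placement decision, and the right-hand side the migration's communication cost, service disruption, and management overhead.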

Chapter 18

Communication Aspects of Resource Management in Hybrid Clouds 409

Luiz F. Bittencourt, University of Campinas (UNICAMP), Brazil

Edmundo R. M. Madeira, University of Campinas (UNICAMP), Brazil

Nelson L. S. da Fonseca, University of Campinas (UNICAMP), Brazil

Organizations owning a datacenter and leasing resources from public clouds need to efficiently manage this heterogeneous infrastructure. In order to do that, automatic management of processing, storage, and networking is desirable to support the use of both private and public cloud resources at the same time, composing the so-called hybrid cloud. In this chapter, the authors introduce the hybrid cloud concept and several management components needed to manage this infrastructure. They depict the network as a fundamental component in providing quality of service, discussing its influence on hybrid cloud management and resource allocation. Moreover, the authors present the uncertainty in the network channels as a problem to be tackled in order to avoid application delays and unexpected costs from the leasing of public cloud resources. Challenging issues in hybrid cloud management are the last topic of this chapter before the concluding remarks.

Chapter 19

Scalability and Performance Management of Internet Applications in the Cloud 434

Wesam Dawoud, University of Potsdam, Germany

Ibrahim Takouna, University of Potsdam, Germany

Christoph Meinel, University of Potsdam, Germany

Elasticity and on-demand provisioning are significant characteristics that attract many customers to host their Internet applications in the cloud. They allow quick reaction to changing application needs by adding or releasing resources in response to actual rather than projected demand. Nevertheless, neglecting the overhead of acquiring resources, which is mainly attributed to networking overhead, can result in periods of under-provisioning, leading to degraded application performance. In this chapter, the authors study the possibility of mitigating the impact of resource provisioning overhead. They direct the study to an Infrastructure as a Service (IaaS) provisioning model where application scalability is the customer's responsibility. The research shows that understanding the application utilization models and properly tuning the scalability parameters can optimize the total cost and mitigate the impact of the overhead of acquiring resources on demand.
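As a toy illustration of the tuning involved (a sketch with assumed threshold and timing values, not the authors' controller), a reactive IaaS scaler has to choose its utilization thresholds and cool-down period with the VM acquisition overhead in mind, otherwise the application stays under-provisioned while new instances boot:

```python
from dataclasses import dataclass

@dataclass
class ScalerConfig:
    scale_out_util: float = 0.70   # add a VM above this average utilization
    scale_in_util: float = 0.30    # remove a VM below this
    startup_delay_s: int = 120     # assumed VM acquisition/boot overhead
    cooldown_s: int = 300          # minimum time between scaling decisions
    min_vms: int = 2
    max_vms: int = 20

def decide(cfg: ScalerConfig, avg_util: float, vms: int,
           seconds_since_last_action: int) -> int:
    """Return the change in VM count (+1, -1, or 0) for one control step.

    The effective cool-down is at least startup_delay_s, otherwise the scaler
    would react again before the previously requested VM becomes useful."""
    if seconds_since_last_action < max(cfg.cooldown_s, cfg.startup_delay_s):
        return 0
    if avg_util > cfg.scale_out_util and vms < cfg.max_vms:
        return +1
    if avg_util < cfg.scale_in_util and vms > cfg.min_vms:
        return -1
    return 0

if __name__ == "__main__":
    cfg = ScalerConfig()
    print(decide(cfg, avg_util=0.85, vms=4, seconds_since_last_action=400))  # +1
    print(decide(cfg, avg_util=0.85, vms=4, seconds_since_last_action=60))   # 0: still cooling down
```

Lowering scale_out_util or shortening the cool-down reduces the risk of under-provisioning during the acquisition delay at the price of extra instances, which is exactly the cost/performance balance the chapter investigates.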

Chapter 20

Cloud Standards: Security and Interoperability Issues 465

Fabio Bracci, University of Bologna, Italy

Antonio Corradi, University of Bologna, Italy

Luca Foschini, University of Bologna, Italy

Starting from the core assumption that only a deep and broad knowledge of existing efforts can pave the way to the publication of widely accepted future Cloud standards, this chapter aims at putting together current trends and open issues in Cloud standardization to derive an original and holistic view of the existing proposals and specifications. In particular, among the several Cloud technical areas, the analysis focuses on two main aspects, namely security and interoperability, because they are the ones mostly covered by ongoing standardization efforts and currently represent two of the main limiting factors for the diffusion and broad adoption of the Cloud. After an in-depth presentation of security and interoperability requirements and standardization issues, the authors overview general frameworks and initiatives in these two areas, and then they introduce and survey the main related standards; finally, the authors compare the surveyed standards and give future standardization directions for the Cloud.

Compilation of References 496

About the Contributors 541

Index 556
