Review on Xen Hypervisor

Shikha R. Thakur*, R. M. Goudar

MIT Academy of Engineering, Alandi (D), University of Pune, Pune, India 412105

thakurshikha26@yahoo.com*, rmgoudar@comp.maepune.ac.in

ABSTRACT

In this era, cloud computing and virtualization are inseparable from each other. Virtualization increases efficiency, flexibility, and scalability in cloud computing, and it is made possible by virtualization platforms such as KVM, UMLinux, VMware, VirtualBox, and Xen. Xen is an open-source virtualization tool for cloud computing that is widely used among cloud providers. The Xen hypervisor provides two modes of virtualization: para-virtualization and hardware-assisted virtualization. In para-virtualized mode, the Xen hypervisor is used to build cloud platforms. This paper introduces the Xen hypervisor and surveys related work on improving its performance.

Index Terms: Xen hypervisor, virtual machine monitor, network I/O virtualization, cloud computing, virtualization, para-virtualization, hardware assisted virtualization

I. INTRODUCTION

The cloud [1] is a virtualization of resources. Computing that provides access to the virtualized resources and services needed to perform functions under dynamically changing user demands is termed cloud computing. Cloud computing comprises three service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Virtualization is a key technology for implementing infrastructure services. Virtualization is akin to emulation, in which a system pretends to be one or more instances of the same system. Implementing virtualization gives the cloud flexibility, scalability, and effectiveness.

Virtualization in the cloud can be implemented with numerous virtualization tools such as KVM, UMLinux, VMware, VirtualBox, and Xen. The Xen hypervisor, which uses a driver-domain-based model, is an open-source virtualization platform. Xen [2, 3] is a hypervisor that allows multiple virtualized operating systems to execute concurrently on a single physical machine, and it provides cloud providers with a strong foundation for virtualization. A hypervisor is a software layer that creates, runs, and manages virtual machines (VMs); it lies between the physical hardware and the operating systems. Hypervisors were first implemented for compute-intensive applications rather than network-intensive applications.


The paper starts with an introduction in Section I. Section II describes the architecture and working of the Xen hypervisor, with emphasis on inter-domain communication. Section III focuses on related work done to improve I/O performance. Section IV concludes the paper.

II. XEN ARCHITECTURE

Xen [4] is a virtual machine monitor that provides a virtual environment in which a kernel can run. The hypervisor is a software layer that lies between the operating system and the hardware. A Xen system consists of three components: the hypervisor, the kernel, and user applications. Xen virtualization with a para-virtualized kernel provides performance close to that of the native machine. In a para-virtualized kernel, privileged instructions are replaced with hypercalls [Fig. 1]. The hypervisor provides an environment in which numerous guests with different operating systems can run; each such guest environment is called a domain [Fig. 2]. There are basically two types of domains. Domain 0 (Dom0) is the privileged guest that handles the communication of all other guests with the hardware; when Xen boots, the first thing it loads is Dom0. The other guests are unprivileged guests known as Domain U (DomU), and they can access the hardware only through Dom0.
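
To make the para-virtualization idea concrete, the following is a minimal, self-contained sketch of a guest replacing a privileged operation with a call into a hypercall table. The operation numbers, handler names, and table layout are purely illustrative assumptions and are not the real Xen hypercall interface.

```c
/* Toy sketch of replacing a privileged instruction with a hypercall.
 * All names and numbers are illustrative, not the Xen interface. */
#include <stdio.h>

enum { HCALL_UPDATE_PAGE_TABLE, HCALL_SET_TIMER, NR_HYPERCALLS };

/* "Hypervisor"-side handlers for the privileged operations. */
static long do_update_page_table(long arg) { printf("update page table: %ld\n", arg); return 0; }
static long do_set_timer(long arg)         { printf("set timer: %ld\n", arg); return 0; }

static long (*hypercall_table[NR_HYPERCALLS])(long) = {
    do_update_page_table,
    do_set_timer,
};

/* Guest-side stub: a para-virtualized kernel calls this instead of
 * executing the privileged instruction itself. */
static long hypercall(int nr, long arg) {
    return hypercall_table[nr](arg);
}

int main(void) {
    hypercall(HCALL_SET_TIMER, 1000);           /* schedule a timer event   */
    hypercall(HCALL_UPDATE_PAGE_TABLE, 0x1234); /* privileged memory update */
    return 0;
}
```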

Like a kernel, Xen requires events for different operations such as reading, writing, status checking, the grant mechanism, and memory access. For example, when Dom0 makes data available in shared memory, an event is fired to tell the guest kernel (DomU) that the data is in memory. Xen events can occur as hardware or virtual interrupts and are delivered via callbacks through event channels; the event channel is Xen's analog of the hardware interrupt mechanism. For this to work, a guest kernel must register a callback to be used for delivering events. When events are delivered, flags are set to indicate which events are pending. Event delivery may be synchronous or asynchronous.
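
The sketch below illustrates this pattern in miniature: the guest registers a callback, the hypervisor side sets a pending flag on a channel, and a delivery pass invokes the callback for each pending channel. The names, bitmap layout, and delivery loop are illustrative stand-ins, not the Xen event-channel API.

```c
/* Minimal sketch of event-channel delivery: a shared pending bitmap plus
 * a guest-registered callback. Names are illustrative only. */
#include <stdint.h>
#include <stdio.h>

#define MAX_EVENTS 64

static uint64_t pending_events;               /* shared pending-event flags */
static void (*event_callback)(unsigned port); /* guest-registered handler   */

/* Guest side: register the callback used to deliver events. */
static void register_event_callback(void (*cb)(unsigned port)) {
    event_callback = cb;
}

/* "Hypervisor" side: mark an event pending on a channel (port). */
static void send_event(unsigned port) {
    pending_events |= (1ULL << port);
}

/* Delivery pass: scan pending flags and invoke the callback for each. */
static void deliver_pending_events(void) {
    for (unsigned port = 0; port < MAX_EVENTS; port++) {
        if (pending_events & (1ULL << port)) {
            pending_events &= ~(1ULL << port);  /* clear before handling */
            event_callback(port);
        }
    }
}

static void handle_event(unsigned port) {
    printf("event delivered on channel %u\n", port);
}

int main(void) {
    register_event_callback(handle_event);
    send_event(3);   /* e.g. Dom0 signals that data is in shared memory */
    deliver_pending_events();
    return 0;
}
```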

Figure 1. System calls in a native and in a para-virtualized system.

In the driver domain model of Xen, two domains can transfer data through shared memory, i.e., memory whose contents are available to both domains. This shared memory is organized as pages, and each page is identified by a grant reference. A grant reference is an integer that can be communicated between domains using XenStore, a file system of Xen similar to that of UNIX. Xen performs two inter-domain operations on memory pages: sharing and transferring. A page transfer is a coarse-grained message-passing mechanism.
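
As a rough illustration of how a grant reference names a shared page, the following toy model keeps a small table mapping integer references to pages and to the domain allowed to access them. The structures and function names are hypothetical; the real grant table lives in the hypervisor and is manipulated through hypercalls.

```c
/* Toy model of the grant mechanism: a grant reference is an index into
 * a table recording which domain may access which page. Illustrative only. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE  4096
#define MAX_GRANTS 16

struct grant_entry {
    int   in_use;
    int   granted_to;  /* domain id allowed to access the page */
    void *page;        /* the shared page itself */
};

static struct grant_entry grant_table[MAX_GRANTS];

/* DomU side: publish a page and obtain the grant reference to advertise
 * (e.g. via XenStore) to the peer domain. Returns -1 if the table is full. */
static int grant_access(int peer_domid, void *page) {
    for (int ref = 0; ref < MAX_GRANTS; ref++) {
        if (!grant_table[ref].in_use) {
            grant_table[ref] = (struct grant_entry){1, peer_domid, page};
            return ref;
        }
    }
    return -1;
}

/* Peer side: resolve a grant reference back to the shared page. */
static void *map_grant(int ref, int my_domid) {
    if (ref < 0 || ref >= MAX_GRANTS || !grant_table[ref].in_use)
        return NULL;
    return grant_table[ref].granted_to == my_domid ? grant_table[ref].page : NULL;
}

/* DomU side: revoke access once the transfer is complete. */
static void end_grant_access(int ref) {
    grant_table[ref].in_use = 0;
}

int main(void) {
    static uint8_t page[PAGE_SIZE];
    int ref = grant_access(/* Dom0 */ 0, page);
    printf("page shared under grant reference %d\n", ref);
    printf("Dom0 mapped page at %p\n", map_grant(ref, 0));
    end_grant_access(ref);
    return 0;
}
```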

In Xen's split device driver model, each network driver is divided into two halves: the top half, known as the netback, resides in Dom0, while the bottom half, known as the netfront, runs in the DomU.

Figure 2. Architecture of the Xen hypervisor.

The netfront performs functions such as initializing a memory page, exporting it using the grant mechanism, advertising the grant reference via XenStore, and receiving requests and writing responses in shared memory. The netback is responsible for locating the grant reference of the shared memory page in XenStore, mapping the page into its own address space, writing requests into the shared memory page, and receiving the responses sent by the netfront. Netfront and netback communicate with each other using bidirectional I/O ring buffers, one for packet reception and one for packet transmission. The I/O ring buffers implement publish-subscribe communication and are built on top of two building blocks: grant tables and event channels. The grant table enables the ring mechanism to transfer data in bulk through shared memory so that the drivers on both sides can access it. The purpose of the grant table in network I/O is to provide a fast and secure mechanism for guest domains (DomUs) to gain indirect access to the network devices through the driver domain. It enables the driver domain to set up DMA-based data transfers directly to/from the system memory of a DomU rather than performing DMA to/from the driver domain's memory, and it can be used to either share or transfer pages between the DomU and Dom0. For example, the frontend of a split driver in a DomU can notify the Xen hypervisor (via the gnttab_grant_foreign_access hypercall) that a memory page can be shared with the driver domain. The DomU then passes a grant table reference via the event channel to the driver domain, which directly copies data to/from the memory page of the DomU. Once the page access is complete, the DomU removes the grant reference (via the gnttab_end_foreign_access call).
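
A minimal sketch of the producer/consumer ring idea follows: the netfront side pushes packet requests that carry a grant reference, and the netback side drains them. Real Xen rings interleave requests and responses and pair each push with an event-channel notification; the structure and names here are simplified illustrations only.

```c
/* Simplified single-producer/single-consumer ring, mirroring the idea of
 * the shared I/O ring between netfront and netback. Illustrative only. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RING_SIZE 8  /* power of two, so index arithmetic wraps cheaply */

struct pkt_request {
    uint32_t grant_ref;  /* grant reference of the page holding the packet */
    uint16_t len;
};

struct io_ring {
    volatile uint32_t prod;  /* advanced by the producer (netfront) */
    volatile uint32_t cons;  /* advanced by the consumer (netback)  */
    struct pkt_request ring[RING_SIZE];
};

/* Producer (netfront) pushes a request; returns 0 if the ring is full. */
static int ring_push(struct io_ring *r, struct pkt_request req) {
    if (r->prod - r->cons == RING_SIZE)
        return 0;
    r->ring[r->prod % RING_SIZE] = req;
    r->prod++;  /* in real code this is followed by an event-channel notify */
    return 1;
}

/* Consumer (netback) pops the next outstanding request, if any. */
static int ring_pop(struct io_ring *r, struct pkt_request *out) {
    if (r->cons == r->prod)
        return 0;
    *out = r->ring[r->cons % RING_SIZE];
    r->cons++;
    return 1;
}

int main(void) {
    struct io_ring ring;
    memset(&ring, 0, sizeof(ring));

    ring_push(&ring, (struct pkt_request){ .grant_ref = 7, .len = 1500 });

    struct pkt_request req;
    while (ring_pop(&ring, &req))
        printf("netback: request for grant %u, %u bytes\n",
               (unsigned)req.grant_ref, (unsigned)req.len);
    return 0;
}
```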

This frequent communication between domains can negatively impact the performance. An additional source of overhead is the invocation of frequent hypercalls (the hypervisor's equivalent of system calls) in order to perform page sharing or transfers.

Figure 3. Split device driver model.

III. RELATED WORK

Xen is a popular virtual machine monitor that is widely used among cloud providers. Since Xen exhibits poor network I/O performance, it is necessary to improve its throughput, and numerous research efforts have been carried out to overcome this problem. The following is a summarized description of related work done to improve performance; the improvements involve both software and hardware enhancements.

A. Software Enhancements

XenLoop [5] is a kernel module that provides a high-speed bidirectional inter-VM channel. XenLoop enables communication between two guest VMs through this channel instead of the standard data path via Dom0. The module lies as a thin layer in the network protocol stack between the network layer and the data link layer, and a software bridge is used to determine each packet's destination address. A channel can be created between any pair of VMs on a single physical machine; a soft-state domain discovery mechanism finds such pairs and records them in a mapping table in the XenLoop module. XenLoop improves bandwidth by a factor of 1.55 to 6.19 over the netfront/netback system, while its latency can be worse by a factor ranging from 1.2 to 4.6.
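
The core routing decision XenLoop adds can be pictured as follows: if a packet's destination is a co-resident VM found by the discovery mechanism, it takes the direct inter-VM channel; otherwise it takes the usual netfront/netback path through Dom0. The table, addresses, and function names below are illustrative placeholders, not the XenLoop implementation.

```c
/* Sketch of a XenLoop-style routing decision: co-resident destinations use
 * the direct inter-VM channel, everything else goes via Dom0. Illustrative. */
#include <stdint.h>
#include <stdio.h>

#define MAX_CORESIDENT 8

static uint32_t coresident_table[MAX_CORESIDENT]; /* IPs of co-located VMs */
static int coresident_count;

static int is_coresident(uint32_t dst_ip) {
    for (int i = 0; i < coresident_count; i++)
        if (coresident_table[i] == dst_ip)
            return 1;
    return 0;
}

static void send_via_loop_channel(uint32_t ip) { printf("-> direct inter-VM channel to %#x\n", (unsigned)ip); }
static void send_via_dom0(uint32_t ip)         { printf("-> standard path via Dom0 to %#x\n", (unsigned)ip); }

static void route_packet(uint32_t dst_ip) {
    if (is_coresident(dst_ip))
        send_via_loop_channel(dst_ip);
    else
        send_via_dom0(dst_ip);
}

int main(void) {
    coresident_table[coresident_count++] = 0x0a000002; /* discovered peer */
    route_packet(0x0a000002);  /* co-resident: bypasses Dom0 */
    route_packet(0x0a000005);  /* remote: goes through Dom0  */
    return 0;
}
```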


Packet aggregation [7, 10] is a mechanism that aggregates incoming packets into a container of fixed size. The container is placed in shared memory in a single operation, reducing the overhead incurred by the many function calls involved in transferring each packet individually. Communication between the driver domain and the DomU uses the event channel and the ring buffers. The packet aggregation algorithm is implemented within the split drivers, i.e., in the netback of the driver domain and the netfront of the DomU. This improves network I/O virtualization.
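
A minimal sketch of the aggregation idea, assuming a fixed-size container that is flushed when a packet no longer fits; the container size, names, and flush stand-in are illustrative assumptions rather than the algorithm of [7, 10].

```c
/* Illustrative packet aggregation: copy packets into a fixed-size container
 * and hand the container to the shared ring in one operation. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CONTAINER_SIZE 4096

struct container {
    uint8_t  buf[CONTAINER_SIZE];
    uint32_t used;
    uint32_t npackets;
};

/* Stand-in for placing a full container into the shared ring and
 * notifying the peer domain via the event channel. */
static void flush_container(struct container *c) {
    printf("flushing container: %u packets, %u bytes\n", c->npackets, c->used);
    c->used = 0;
    c->npackets = 0;
}

/* Add one packet to the container; flush first if it would not fit. */
static void aggregate_packet(struct container *c, const uint8_t *pkt, uint32_t len) {
    if (c->used + len > CONTAINER_SIZE)
        flush_container(c);
    memcpy(c->buf + c->used, pkt, len);
    c->used += len;
    c->npackets++;
}

int main(void) {
    struct container c = {0};
    uint8_t pkt[1500] = {0};

    for (int i = 0; i < 10; i++)        /* ten 1500-byte packets */
        aggregate_packet(&c, pkt, sizeof(pkt));
    if (c.npackets)
        flush_container(&c);            /* flush the partially filled tail */
    return 0;
}
```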

B. Hardware Enhancements

Virtual Interrupt Coalescing (VIC) [8] reduces CPU cycles and improves I/O virtualization performance. Frontend Virtual Interrupt Coalescing (FVIC) controls virtual interrupts by generating a periodic timer and polling the shared ring on each tick to see whether packets have arrived, instead of taking a virtual interrupt per packet. For a small number of packets, VIC with TCP gives a 3% increase in throughput and a 12.6% improvement in CPU utilization, while with UDP its performance is worse. For a large number of packets, throughput and CPU utilization improve for both TCP and UDP. Overall, VIC can reduce CPU utilization by 71%.
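
The timer-driven polling at the heart of this approach can be sketched as below: packets accumulate in the shared ring between ticks, and each tick drains whatever is present in one pass. The tick count, ring representation, and handler names are made up for illustration and are not the FVIC implementation.

```c
/* Sketch of interrupt coalescing via periodic polling: one timer tick
 * processes all packets that arrived since the previous tick. */
#include <stdio.h>

#define TICKS 5

static int pending_packets;  /* packets sitting in the shared ring */

/* Stand-in for packets arriving between timer ticks. */
static void packets_arrive(int n) {
    pending_packets += n;
}

/* Timer handler: poll the ring once per tick and drain everything found,
 * so many packets cost a single "virtual interrupt" worth of work. */
static void timer_tick(void) {
    if (pending_packets > 0) {
        printf("tick: processed %d packets in one pass\n", pending_packets);
        pending_packets = 0;
    } else {
        printf("tick: ring empty, nothing to do\n");
    }
}

int main(void) {
    int arrivals[TICKS] = {3, 0, 7, 1, 0};
    for (int t = 0; t < TICKS; t++) {
        packets_arrive(arrivals[t]);
        timer_tick();
    }
    return 0;
}
```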

Virtual Receive Side Scaling (VRSS) [8] uses hardware Receive Side Scaling (RSS) to forward different virtual interrupts to different virtual processors, which leads to dynamic load balancing of virtual interrupts among the virtual processors. RSS is a hardware technology that supports multiple queues in the hardware NIC; it balances incoming packets across the hardware queues at the connection level, i.e., it supports multi-core processing. VRSS can achieve 2.61x the throughput of the baseline.
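
The essence of RSS-style queue selection is hashing a packet's connection tuple and using the hash to pick a receive queue (and hence a virtual CPU). The sketch below uses a simple mixing hash as a placeholder; real RSS hardware uses a Toeplitz hash and an indirection table, and the names here are illustrative only.

```c
/* Sketch of receive-side scaling: hash a flow's 4-tuple to a queue so that
 * different connections land on different queues/vCPUs. Illustrative only. */
#include <stdint.h>
#include <stdio.h>

#define NUM_QUEUES 4

struct flow {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
};

/* Simple mixing hash over the 4-tuple (real RSS uses a Toeplitz hash). */
static uint32_t flow_hash(const struct flow *f) {
    uint32_t h = f->src_ip ^ (f->dst_ip * 2654435761u);
    h ^= ((uint32_t)f->src_port << 16) | f->dst_port;
    h ^= h >> 13;
    return h;
}

/* Map a flow to one of the per-vCPU receive queues. */
static unsigned select_queue(const struct flow *f) {
    return flow_hash(f) % NUM_QUEUES;
}

int main(void) {
    struct flow flows[3] = {
        { 0x0a000001, 0x0a000002, 40000, 80  },
        { 0x0a000001, 0x0a000003, 40001, 443 },
        { 0x0a000004, 0x0a000002, 40002, 22  },
    };
    for (int i = 0; i < 3; i++)
        printf("flow %d -> queue %u\n", i, select_queue(&flows[i]));
    return 0;
}
```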

Other approaches include VMDq [9] and its optimizations, self-virtualized devices, VMM-bypass I/O [12], direct I/O, and SR-IOV [11]. In all of these, each VM accesses the hardware directly for some period, reducing the hypervisor's intervention in the bulk data path. Such enhancements make efficient VM replication and checkpointing difficult, and they suffer from hardware limitations and dependencies.

IV. CONCLUSION

The architecture and working of the Xen hypervisor are detailed in this paper, giving a brief idea of all its components. Since the hypervisor exhibits poor virtualized network I/O performance, several research efforts have been made to overcome this, and the paper summarizes those enhancements to the Xen hypervisor. From the related work we conclude that Xen performance still needs to be improved, in both hardware and software aspects, to approach native machine performance.

V. REFERENCES

[1] A. B. Grant and O. T. Eluwole, "Cloud resource management: virtual machines competing for limited resources," Proc. 55th International Symposium ELMAR, pp. 269-274, 25-27 Sept. 2013.

[2] M. Kesavan, A. Gavrilovska, and K. Schwan, "Differential virtual time (DVT): rethinking I/O service differentiation for virtual machines," Proc. 1st ACM Symposium on Cloud Computing (SoCC '10), June 10-11, 2010.


[4] D. Chisnall, The Definitive Guide to the Xen Hypervisor, 1st ed., Pearson Education, Inc., USA, 2007.

[5] J. Wang, K. Wright, and K. Gopalan, "XenLoop: A Transparent High Performance Inter-VM Network Loopback," Proc. ACM Symposium on High Performance Parallel and Distributed Computing (HPDC '08), 2008.

[6] X. Zhang and Y. Dong, "Optimizing Xen VMM Based on Intel Virtualization Technology," Proc. International Conference on Computer Science and Software Engineering, 2008.

[7] M. Bourguiba, K. Haddadou, and G. Pujolle, "Packet Aggregation Based Network I/O Virtualization for Cloud Computing," Computer Communications, vol. 35, no. 3, pp. 309-319, 2012.

[8] S. Gamage, A. Kangarlou, R. Kompella, and D. Xu, "Opportunistic Flooding to Improve TCP Transmit Performance in Virtualized Clouds," Proc. ACM Symposium on Cloud Computing (SoCC '11), 2011.

[9] Y. Dong, D. Xu, Y. Zhang, and G. Liao, "Optimizing Network I/O Virtualization with Efficient Interrupt Coalescing and Virtual Receive Side Scaling," Proc. IEEE International Conference on Cluster Computing, pp. 26-34, Sept. 26-30, 2011.

[10] M. Bourguiba, K. Haddadou, I. El Korbi, and G. Pujolle, "Improving Network I/O Virtualization for Cloud Computing," IEEE Transactions on Parallel and Distributed Systems, 25 Feb. 2013.

[11] Y. Dong, X. Yang, X. Li, J. Li, K. Tian, and H. Guan, "High Performance Network Virtualization with SR-IOV," Proc. IEEE 16th International Symposium on High Performance Computer Architecture (HPCA), pp. 1-10, 2010.
