Network Virtualization in the Data Center


Sam Whitlock

EB Group, I&C, EPFL

Abstract—Modern data centers are abstracted into different pools of resources: compute, storage, and network. This enables workloads to be instantiated and run in the most efficient location. Unfortunately, the inability to treat the network infrastructure as a pool of resources limits the degree to which the other pools can be treated as such. This paper provides an overview of three techniques that enable the abstraction of network resources as a pool: separation of the control and data planes, data-centric APIs for network controllers, and virtual overlay networks. The combination of these techniques enables the migration of enterprise workloads to hosted service providers and increases resource utilization within the data center.

Index Terms—network virtualization, SDN, software-defined networking, virtual overlay networks, resource pools, multi-tenant, data center, datacenter

I. INTRODUCTION

Large data centers achieve a higher ratio of performance to cost than smaller enterprise installations. Purchasing and installing a large volume of uniform resources lowers the cost per unit. Other infrastructure, such as that for power and cooling, offers efficiencies that are not available in smaller enterprises [15].

The creation of resource pools is a resource management technique used in large data centers to improve resource utilization. Operators create resource pools by grouping similar types of resources together. Pools enable applications to be agnostic to the physical location or capabilities of the infrastructure. For example, a virtual machine can run correctly on any hypervisor of the same architecture because both entities communicate using a well-defined instruction set architecture. The ability to utilize any subset of a resource pool decreases resource fragmentation, which occurs if services are tied to specific locations in the data center. This flexibility of the resource pool abstraction increases data center utilization by allowing more jobs to be placed within a given set of physical resources.

(Proposal submitted to committee: June 12th, 2014; candidacy exam date: June 19th, 2014; candidacy exam committee: Prof. Willy Zwaenepoel, Dr. Edouard Bugnion, Prof. Katerina Argyraki, Prof. James Larus.)

Data centers virtualize compute and storage resources to construct resource pools [3]. Compute pools are created with the virtual machine abstraction, which relies on a well-defined abstraction of the physical hardware (e.g., the x86 instruction set architecture). In certain cases, compute pools may be constructed at the application level; this is typically done with cluster schedulers, such as the Apache Mesos scheduler [6]. Storage pools virtualize a particular abstraction, such as a disk block or a file.

Data center networks typically are not virtualized resources; this limits the ability to treat networks as resource pools. The use of traditional network routing protocols (e.g., OSPF, IS-IS) in the data center prevents the separation of policy and mechanism. Physical network resources in contemporary data centers are only available in specific locations. Because of the lack of separation between policy and mechanism, these resources can only be modified through manual intervention by operators (e.g., physically altering the network or changing the configuration on specific switches).

The lack of network virtualization in data centers limits the ability of compute and storage resources to be managed as pools. Compute and storage pools use the network as the primary communication channel in the data center. Although these pools may have the flexibility to partition resources among different workloads, these workloads often rely on various features or performance aspects of the network. For example, most enterprise workloads require a flat L2 addressing scheme for service discovery, whereas large analytics workloads may only require L3 connectivity [8]. Because the network is not a virtualized resource and the physical network is heterogeneous (i.e., the performance and capabilities are not uniform throughout the network), the compute and storage pools must place workloads in a location where the network requirements can be satisfied. This creates a disparity between overutilized and underutilized portions of the data center. To meet requirements despite the nonoptimal utilization of the resource pools, data centers overprovision the resources of all pools; this additional hardware makes data centers more expensive to build and more difficult to maintain.

II. TRADITIONAL NETWORKING APPROACHES

Traditional (i.e., non-virtualized) data center networks use several techniques to manage network resources. This section discusses the differences between the environment of the modern data center and the nascent Internet, explains two common techniques to manage data center networks with existing protocols, and describes the common failures of these methods.

A. Changes Since the DARPA Internet Project

The design philosophy of the Internet protocols is primarily focused on interconnecting heterogeneous networks. According to David Clark's retrospective on the evolution of the TCP/IP protocol stack [5], this emphasis on creating protocol layers for interoperability between existing networks assigned an approximate ordering, based on priority, to a series of Internet architecture goals (listed in Section 3 of Clark's paper). Goals pertaining to this philosophy, such as survivability in the face of partial resource loss, received higher priority than those that were considered less important, such as cost effectiveness and resource accountability. The ordering of these goals gave rise to the choice of a best-effort service model over one of guaranteed service, as well as the choice of datagram routing over circuit switching.

Operators of modern data centers would assign a different priority to these goals, or perhaps a different set of goals altogether. For example, resource accountability is important when a data center that runs a public cloud charges clients based on resource usage. The contrast between the environment of the early Internet and the modern data center makes it difficult to directly apply traditional protocols in a data center setting. The following sections examine two examples of how this adaptation is made in data centers using traditional networking hardware.

B. Virtual LAN

Broadcast domains are convenient for applications, but negatively impact performance when the domain is too large (i.e., encompasses too many hosts, switches, and links). Certain applications for data centers use a broadcast domain to perform runtime configuration, such as address resolution (e.g., the Address Resolution Protocol [14]) and service discovery (e.g., task schedulers). The learning switch protocol, which is run by Ethernet switches, must also broadcast a packet within a domain if it has not yet learned the location of the destination. Certain techniques can be used to mitigate the overload of traffic on a broadcast domain, such as having switch learning tables containing millions of entries to prevent broadcast learning [4]. However, broadcast traffic can easily overload data center networks if left unchecked.

Virtual LANs (VLANs) enable operators to slice a single large broadcast domain into multiple smaller domains. Hosts or first-hop switches insert one or more VLAN tags into a packet. Switches then apply a policy, specified at each switch individually, to each packet based on the packet's tags. Network administrators use this mechanism to slice a large broadcast domain by applying a policy that limits the broadcast of a packet with a certain tag to a subset of a switch's ports.
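As an illustration of this per-switch mechanism, the following Python sketch (a toy model with hypothetical switch and port names, not any vendor's configuration syntax) shows how a tag-to-ports policy confines broadcast traffic to a slice of each switch's ports:

```python
# Hypothetical model of per-switch VLAN policy: each switch maps a VLAN tag
# to the subset of its ports that belong to that broadcast-domain slice.
VLAN_POLICY = {
    "switch-1": {10: {1, 2, 3}, 20: {4, 5}},      # tag 10 -> ports 1-3, tag 20 -> ports 4-5
    "switch-2": {10: {1, 2}, 20: {3, 4, 5, 6}},
}

def broadcast_ports(switch, vlan_tag, ingress_port):
    """Ports a broadcast frame is flooded to: only ports in the frame's VLAN,
    excluding the port it arrived on."""
    members = VLAN_POLICY.get(switch, {}).get(vlan_tag, set())
    return members - {ingress_port}

if __name__ == "__main__":
    # A broadcast arriving on switch-1 port 1 with tag 10 reaches only ports 2 and 3.
    print(broadcast_ports("switch-1", 10, 1))  # {2, 3}
```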

The inefficiencies of a broadcast domain, no matter how it is partitioned, limit its use in data centers. The Ethernet standard used in modern data centers requires a Spanning Tree Protocol to be run in each switch to avoid loops; packets can only be routed along the tree. As a result, only a subset of the links are used. Even if multiple broadcast domains share the same physical network, the utilization is low on many links (depending on how well the spanning trees map onto the physical topology).

C. IP Address Segmentation and Hierarchical Topologies

To counter the limitations of managing a broadcast domain, data centers use a hierarchical topology with IP addressing. Operators arrange the physical network into a hierarchy, such as a Clos topology [2], and configure IP addresses so that they aggregate along the underlying physical hierarchy. Broadcast domains are limited to the bottom layer of the hierarchy.
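For illustration, the following sketch shows one way addresses can be made to aggregate along such a hierarchy; the pod/rack/host numbering and the /8, /16, /24 split are assumptions, since the concrete scheme is deployment specific:

```python
import ipaddress

# Assumed scheme: a /8 carved into /16 pods and /24 racks; hosts take the last octet.
def host_address(pod, rack, host, base="10.0.0.0/8"):
    net = ipaddress.ip_network(base)
    return ipaddress.ip_address(int(net.network_address) + (pod << 16) + (rack << 8) + host)

def pod_prefix(pod, base="10.0.0.0/8"):
    net = ipaddress.ip_network(base)
    # One aggregate route per pod: core switches never need per-host state.
    return ipaddress.ip_network((int(net.network_address) + (pod << 16), 16))

if __name__ == "__main__":
    print(host_address(3, 7, 42))  # 10.3.7.42
    print(pod_prefix(3))           # 10.3.0.0/16
```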

The use of IP addresses limits the mobility of workloads because each workload's physical location within the hierarchy is tied to its logical location by its IP address. Moving an application from one location to another in the hierarchy requires reassigning its IP address. This causes problems because many enterprise services use IP addresses as identifiers; using an application-layer service, such as the Domain Name System [11] or a custom name resolution service, introduces scaling and complexity issues. This combined use of IP addresses as both logical and physical identifiers makes it difficult for applications to move from one environment to another (e.g., from the enterprise to a hosted service).

D. Common Shortcomings

The distribution of network state makes the network difficult to configure. Each switch contains a set of scripts that configure its state (i.e., how it routes packets). To make a change in the network, a network administrator must individually adjust these scripts for each switch. This box-by-box style of configuration is time-consuming for network operators. The protocols run by each switch are complex, which causes a high rate of operator-induced configuration errors [12]. This increases both the cost of data center maintenance and the time to reconfigure the network.

This “script scaling” problem is a symptom of the fundamental problem of applying traditional networking to the data center: the control plane and the data plane are not separate. The data plane (i.e., the mechanism that forwards packets) is in the same location as the control plane (i.e., the policy that specifies the configuration of the data plane). This coupling makes the network difficult and time-consuming to configure. Even a correctly configured control plane does not provide high resource utilization; the state distribution algorithms run by different protocols can prevent network paths and links from being used efficiently. This leads to operators overprovisioning their network hardware to meet requirements, further adding to costs [7].

III. ETHANE: CONTROL PLANE CENTRALIZATION

Ethane [4] separates the control plane from the data plane. It exposes a high-level language for network configuration. The configuration is compiled into a software-based control plane program, which runs on a regular server in the data center. This software control plane interposes on every new network flow by configuring each switch to send the first packet of a new flow to the control server. The control server then makes a decision based on the compiled policy and modifies the state of the data plane accordingly.

A. High-level Policy Language

Ethane’s configuration language operates on high-level names. A network administrator first declares entities and groups of entities based on logical names (e.g., “vm1” and “vm2”), not in terms of network addresses. These entities are either hosts (physical or virtual) or users. He or she uses these groups to match a packet to an action; for example, the operator can declare that all laptops may not accept incoming connections, or that a group of servers may only send packets to other servers in the same group.
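To give a concrete flavor of such name-based rules, the sketch below expresses the two examples above as plain Python data; the group names, hosts, and matching logic are illustrative assumptions and are not Ethane's actual Pol-Eth syntax:

```python
# Illustrative, name-based policy expressed as plain data (all names are hypothetical).
GROUPS = {
    "laptops": {"laptop-alice", "laptop-bob"},
    "servers": {"web1", "web2", "db1"},
}

# Each rule: (source group, destination group, action), evaluated in order.
POLICY = [
    ("any",     "laptops", "deny"),    # laptops may not accept incoming connections
    ("servers", "servers", "allow"),   # servers may only talk to servers in the group
    ("any",     "any",     "deny"),    # default deny
]

def decide(src_host, dst_host):
    """Return the action for a new flow, evaluated on high-level names."""
    def in_group(host, group):
        return group == "any" or host in GROUPS.get(group, set())
    for src_g, dst_g, action in POLICY:
        if in_group(src_host, src_g) and in_group(dst_host, dst_g):
            return action
    return "deny"

if __name__ == "__main__":
    print(decide("web1", "web2"))        # allow
    print(decide("web1", "laptop-bob"))  # deny
```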

Ethane translates high-level names into physical network locations by using existing external services to authenticate hosts and users. Ethane identifies known hosts (e.g., workstations) by their pre-registered MAC addresses and relies on a captive web portal and an external authentication service (e.g., Stanford's Kerberos server used in the paper's implementation) to identify the location of users and mobile hosts (e.g., laptops). Switches and other trusted elements in the network core authenticate using SSL with server- and client-side certificates. Ethane can be extended to support other authentication mechanisms.

B. Centralized Software Control Plane

Ethane separates the control plane from the data plane by moving the control plane into a software application on a control server (i.e., a controller). The high-level policy is compiled with other software libraries into an executable, which is then deployed on a server within the network. This separation of control plane and data plane is the widely accepted definition of software-defined networking (SDN).

Each switch sends the first packet of every new flow to the controller for a policy decision. If a switch does not have an exact-match rule for a packet, it sends this packet to the controller. The controller makes a decision for the packet, based on the compiled policy, and updates the state of the data plane; this update involves sending new rules that match the packet’s headers exactly and have appropriate actions based on the policy.
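The following sketch models this reactive loop; the switch and controller interfaces are hypothetical stand-ins for Ethane's actual mechanisms, intended only to show the packet-in, decide, install-exact-match-rule cycle:

```python
# Sketch of a reactive control plane: the first packet of an unknown flow is
# punted to the controller, which installs an exact-match rule for that flow.

class Switch:
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}  # exact-match key -> action

    def receive(self, packet):
        key = (packet["src"], packet["dst"], packet["proto"])
        if key in self.flow_table:
            return self.flow_table[key]                   # fast path: data plane decision
        return self.controller.packet_in(self, packet)    # slow path: ask the controller

class Controller:
    def __init__(self, policy):
        self.policy = policy  # callable: packet -> action ("forward" / "drop")

    def packet_in(self, switch, packet):
        action = self.policy(packet)
        key = (packet["src"], packet["dst"], packet["proto"])
        switch.flow_table[key] = action                   # push exact-match rule to the data plane
        return action

if __name__ == "__main__":
    ctrl = Controller(lambda p: "drop" if p["dst"] == "laptop-bob" else "forward")
    sw = Switch(ctrl)
    pkt = {"src": "web1", "dst": "web2", "proto": "tcp"}
    print(sw.receive(pkt))  # first packet goes to the controller
    print(sw.receive(pkt))  # subsequent packets hit the installed rule
```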

C. Simplified Data Plane

Ethane's data plane is simpler than the data planes in traditional networks because it is separate from the control plane. The data plane runs on software switches and existing, simple hardware switches. In both cases, the switch either processes packets based on rules in its tables or sends unknown packets to the controller for a decision. The only other responsibility of a switch is to send topology information to the controller (e.g., whether a link is up or down). As a result of this simple processing, the switch does not have to run the complicated state distribution algorithms that are necessary in traditional networks. This control and data plane separation also allows the control policy to be easily updated; it only requires a software update in a single location (at the controller).

D. Outcomes

Casado et al. demonstrated the viability of Ethane by deploying it in the Stanford computer science department's network. The network was modest in comparison to modern data centers (e.g., an average of 120 active hosts within a 5-minute window), but the deployment showed that a software-defined network is practical. Benchmarks in the paper suggest that software-defined networking is feasible in larger installations. For example, a single controller is capable of processing 10,000 new network flows per second. Based on traces from large networks (e.g., the full Stanford campus network), a single controller could handle the flow-setup load of large networks.

E. Problems

The network will not function if the controller loses connectivity or crashes. The control plane is reactive: it only installs rules when new flows arrive. This fine-grained approach enables switches to be simpler, but it makes the controller a single point of failure. A loss of connectivity between the switches and the controller (e.g., because the controller crashes) renders the network inoperable.

Although it is much easier to configure than a traditional network, the configuration of the Ethane control plane has many steps, none of which are easily automated. A network operator must first compile the high-level policy file into C++ handlers using a source-to-source compiler. The operator then compiles and links the binaries with Ethane's library code. Finally, the operator deploys the binary on the controller. This is faster and less error-prone than updating the control plane in a traditional network, but it still requires operator intervention and cannot be done automatically by a management application.

IV. ONIX: DATA-CENTRIC NETWORK API

Onix [9] builds upon Ethane by providing a data-centric API to configure the centralized control plane. This section discusses the Onix API, cluster capabilities, and state distribution mechanisms.

A. Data-centric API

Onix exposes the physical state of the network to client applications through an API; client applications use this API to read and configure all network state. Such applications are able to manage the network without operator intervention. This interface allows applications to automatically provision network resources. Automatic provisioning reduces the time to reconfigure the network from days to seconds.

The API exposes the physical network infrastructure as a data structure called the Network Information Base (NIB). The NIB is a set of entities in a network (e.g., switches, links, and ports) and the relationships between these entities. When an API client modifies an entity (e.g., deactivating a port), Onix translates each NIB modification into commands that alter the state of the network. For example, if a client adds a rule to a switch's table in the NIB, Onix translates this into an OpenFlow [10] command to modify the corresponding physical switch. This enables applications using the API to modify the NIB while being agnostic to the specific protocols needed to modify the physical network elements.
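As a rough illustration of the NIB acting as the API surface (the entity class, attribute names, and the southbound command format below are assumptions, not Onix's actual API), a client write to the NIB is translated into a command for the corresponding physical element:

```python
# Sketch: the NIB is a graph of entities; writes to it are translated into
# southbound commands (e.g., OpenFlow) for the corresponding physical elements.

class Port:
    def __init__(self, switch, number):
        self.switch, self.number, self.enabled = switch, number, True

class NIB:
    def __init__(self, southbound):
        self.southbound = southbound  # callable that ships a command to the network
        self.ports = {}

    def add_port(self, switch, number):
        self.ports[(switch, number)] = Port(switch, number)

    def set_port_enabled(self, switch, number, enabled):
        port = self.ports[(switch, number)]
        port.enabled = enabled
        # Every NIB change is translated into a protocol-level command.
        self.southbound({"type": "port-mod", "switch": switch,
                         "port": number, "enabled": enabled})

if __name__ == "__main__":
    nib = NIB(southbound=lambda cmd: print("southbound:", cmd))
    nib.add_port("switch-1", 4)
    nib.set_port_enabled("switch-1", 4, False)  # client deactivates a port via the NIB
```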

B. Platform-provided State Distribution

Onix nodes are grouped into clusters to improve scaling and reliability. Each node in the cluster is assigned a partition of the physical topology to manage (e.g., only a specific node may configure a given switch via OpenFlow). Every node contains a replica of the NIB, and uses the state of its replica to determine policy and configuration.

Onix provides storage engines to keep the NIB instances in each replica consistent. Each Onix replica translates NIB modifications into queries to the storage engines. Onix provides several different engines because different NIB data have different requirements for consistency between replicas. For data that favors consistency over throughput (e.g., network policy), Onix provides a transactional database. For other data that can tolerate inconsistencies in exchange for increased throughput (e.g., transient network state, such as whether a link is up or down), Onix provides a distributed hash table. The Onix cluster exposes inconsistencies to API clients so that application-specific logic can resolve them.
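A simplified sketch of this split is shown below; the data categories and engine interfaces are assumptions for illustration rather than Onix's actual classes:

```python
# Sketch: durable data (e.g., policy) goes through a transactional store, while
# rapidly changing data (e.g., link status) goes through a DHT-like store.

class TransactionalStore:            # stand-in for Onix's transactional database
    def __init__(self):
        self.data = {}

    def write(self, key, value):
        self.data[key] = value

class DHTStore(TransactionalStore):  # stand-in for the distributed hash table
    pass

DURABLE_KINDS = {"policy", "topology"}  # assumed categorization of NIB data

def write_nib(kind, key, value, txn_store, dht_store):
    """Route a NIB write to the engine matching the data's consistency needs."""
    store = txn_store if kind in DURABLE_KINDS else dht_store
    store.write(key, value)

if __name__ == "__main__":
    txn, dht = TransactionalStore(), DHTStore()
    write_nib("policy", "tenant-a/acl", "deny-all", txn, dht)
    write_nib("link-status", ("switch-1", "switch-2"), "down", txn, dht)
    print(txn.data)  # {'tenant-a/acl': 'deny-all'}
    print(dht.data)  # {('switch-1', 'switch-2'): 'down'}
```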

C. Outcomes

Onix enables the network infrastructure to be treated as a resource pool. A client application (e.g., OpenStack [1]) can reconfigure the network by modifying the NIB. When the workload changes, this application can automatically re-provision network resources. This on-demand re-provisioning allows applications to view the network as a resource pool and not static physical infrastructure.

The scalability features of Onix enable it to manage large data center networks. Onix replicas can be placed throughout the network to minimize the latency to the switches. Unlike Ethane, Onix allows applications to install rules into switches proactively; only unexpected traffic is seen by the controller, which minimizes the bandwidth needed to reach the controller. The evaluation by Koponen et al. shows that Onix nodes are capable of processing more than an order of magnitude more packets per second (i.e., more than 100,000) than Ethane, while sustaining connections to more than 1,000 switches. When this single-node performance is combined with the workload sharding capability of an Onix cluster, the system is capable of managing the network of a modern data center.

D. Problems

Onix is designed to only be configured by a single client application. Because the NIB reflects the physical infrastructure, the network is a physical resource pool, not a virtual resource pool. Although Onix enables automatic provisioning, this provisioning must respect the physical aspects of the network (e.g., addressing schemes and broadcast domains). This means that workloads need to be modified to correspond to the specific characteristics of the physical infrastructure. This prevents enterprise workloads from migrating to hosted service providers that utilize Onix for managing the network; it is too difficult to modify the network configuration that enterprise applications rely upon.

V. MULTI-TENANT DATA CENTER NETWORKS

Koponen et al. extend their original work with Onix to create the Network Virtualization Platform (NVP) to manage virtual overlay networks [8]. NVP exposes each virtual overlay network as an Onix NIB; the NVP controller translates the virtual NIB and inserts the resulting state into a software data plane of virtual switches running on compute hypervisors.

A. Definition

Virtual overlay networks are logical networks that are constructed on top of a physical topology; the physical topology does not necessarily have to match the logical view of the overlay. For example, what a logical network views as a single link may be composed of multiple physical links. Traditional networks can use static configuration to construct overlay networks. However, a software-defined data center network must keep this mapping between the logical and physical topologies in the control plane state in order to dynamically reconfigure overlays.

B. Virtual Overlays and the Control Plane

NVP exposes separate Onix-style NIBs, each of which represents a unique virtual overlay network, to different API clients; each client application configures an overlay network by manipulating the NIB. In Onix [9], there is only a single NIB that corresponds to the physical network; the single client application may only configure existing network hardware. In NVP, each client application may add arbitrary virtual network hardware (e.g., virtual switches, routers, links, etc.) to the virtual NIBs exposed by the platform, regardless of the underlying physical hardware.

NVP translates the data in each NIB to enforce isolation between the virtual overlay networks. The controller modifies the rule tables of each switch in the NIB by inserting rules that match each overlay network's unique identifiers. These extra rules are not visible to any client application. By unconditionally adding these rules to each NIB, NVP isolates each overlay network: no client can, accidentally or maliciously, insert a rule into a switch's table in its own NIB that could send a packet into another virtual network.
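The following sketch (with a hypothetical rule representation) illustrates the idea: before any client rule reaches the data plane, the platform adds a match on that tenant's virtual network identifier, so the rule can never apply to another tenant's packets:

```python
# Sketch: NVP-style isolation by rewriting every client rule so that it can only
# match packets carrying that client's virtual network identifier.

def install_client_rule(tenant_id, client_rule, data_plane):
    """client_rule: dict of header matches and an action, as written by the tenant."""
    isolated_rule = dict(client_rule)
    isolated_rule["match"] = dict(client_rule["match"], vnet_id=tenant_id)  # forced match
    data_plane.append(isolated_rule)

if __name__ == "__main__":
    data_plane = []
    # A tenant writes a rule in terms of its own (virtual) addresses only.
    install_client_rule(
        tenant_id=42,
        client_rule={"match": {"dst_ip": "10.0.0.5"}, "action": "forward:vm-3"},
        data_plane=data_plane,
    )
    print(data_plane)
    # [{'match': {'dst_ip': '10.0.0.5', 'vnet_id': 42}, 'action': 'forward:vm-3'}]
```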

After translating each client's NIB, NVP sends the resulting virtual network topology to the software switches that comprise the data plane. These software switches reside either on compute hypervisors, where they interpose on virtual machine network traffic, or on dedicated servers that perform an auxiliary function (e.g., serving as a gateway between a virtual network and a physical network). The data plane in NVP is entirely in software; the platform is agnostic to the physical network as long as there is L3 connectivity between all components.

C. Software Pipeline

The traversal of a client's virtual network is performed by the software switch at the physical location where the packet first enters the virtualized network: the hypervisor's virtual switch for virtual machines on the virtual network, and gateway devices for external traffic entering the virtual network. The software switch simulates the path a packet would take through a traditional network (e.g., processing by routers or switches, traversing links). Each virtual switch or router in the virtual network topology determines the path a packet will take through the virtual network and can modify the packet (e.g., decrementing the TTL).

After processing a packet through a client’s virtual network pipeline, a software switch sends the packet through a tunnel to the software switch of the physical destination. When the software switch has processed a packet through an entire virtual overlay network, it translates the virtual destination (e.g., a virtual machine) into a physical destination (e.g., the hypervisor that manages the virtual machine). To send a packet through a tunnel, the software switch adds some metadata to the packet and encapsulates it with the physical address (e.g., the IP address) of the destination software switch. Upon receiving a packet through a tunnel, a software switch uses the metadata to determine which actions to perform next (e.g., send the packet to a local virtual machine).
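A minimal sketch of this encapsulation step is shown below; the field names, the VM-to-hypervisor mapping, and the metadata layout are assumptions for illustration, and real deployments use standard tunneling protocols rather than Python dictionaries:

```python
# Sketch: after the virtual pipeline picks a virtual destination, the sending
# software switch encapsulates the packet toward the destination hypervisor.

VM_LOCATION = {"vm-3": "192.0.2.11", "vm-7": "192.0.2.12"}  # VM -> hypervisor IP (assumed)

def encapsulate(packet, virtual_dst, vnet_id):
    """Wrap the original packet with tunnel metadata and the physical destination."""
    return {
        "outer_dst_ip": VM_LOCATION[virtual_dst],  # physical endpoint: the remote hypervisor
        "tunnel_metadata": {"vnet_id": vnet_id, "logical_dst": virtual_dst},
        "inner_packet": packet,
    }

def decapsulate(tunneled):
    """The receiving software switch uses the metadata to pick the next action."""
    meta = tunneled["tunnel_metadata"]
    return meta["logical_dst"], tunneled["inner_packet"]

if __name__ == "__main__":
    frame = encapsulate({"dst_ip": "10.0.0.7"}, virtual_dst="vm-7", vnet_id=42)
    print(frame["outer_dst_ip"])   # 192.0.2.12
    print(decapsulate(frame)[0])   # vm-7: deliver to the local VM on arrival
```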

D. Outcomes

Virtual overlay networks enable enterprise workloads to migrate to hosted services. By allowing each workload to configure its own virtual overlay network, enterprise applications can directly port their existing network configuration to a hosted service. Service providers, such as Rackspace, have used the NVP platform to facilitate this migration. Virtual overlay networks are also useful for other operators of large networks; AT&T and NTT use NVP to manage their internal networks, while eBay and Fidelity Investments use it to manage their internal data center networks.

The software data plane enables the rapid deployment of new features and the ability to deploy NVP in existing data centers without a hardware upgrade. To install NVP, data center operators must install the software switch (Open vSwitch [13]) on the hypervisors and allocate compute resources for running the control cluster. Even though the data plane eschews sophisticated hardware, each software switch running on a hypervisor is capable of forwarding traffic at 9.3 Gbps with a 10 Gbps NIC.

E. Problems

The data plane does not perform well under all workloads. The benchmark mentioned above is achieved for a single TCP stream; in this test, the routing decision for the stream is made once and the result is cached as an exact-match rule. Workloads that are composed of large numbers of small network flows see lower performance in the software data plane. In these scenarios, the caching of decisions is less effective because of the lower degree of locality. A cache miss is expensive in the software switch: the packet must traverse a large part (or all) of the virtual overlay network pipeline before the result can be cached.
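A toy model of such an exact-match flow cache illustrates the effect; the relative costs and the cache key are assumptions chosen only to contrast a few large flows against many small ones:

```python
# Toy model of an exact-match flow cache in a software switch: the first packet
# of a flow pays the full virtual-pipeline cost, later packets hit the cache.

PIPELINE_COST = 50   # assumed relative cost of a full virtual-network traversal
CACHE_HIT_COST = 1   # assumed relative cost of an exact-match lookup

def process(packets):
    cache, total_cost = {}, 0
    for pkt in packets:
        key = (pkt["src"], pkt["dst"], pkt["src_port"], pkt["dst_port"])
        if key in cache:
            total_cost += CACHE_HIT_COST
        else:
            cache[key] = "cached-actions"   # result of the full pipeline traversal
            total_cost += PIPELINE_COST
    return total_cost

if __name__ == "__main__":
    one_big_flow = [{"src": "a", "dst": "b", "src_port": 1, "dst_port": 2}] * 1000
    many_small_flows = [{"src": "a", "dst": "b", "src_port": p, "dst_port": 2} for p in range(1000)]
    print(process(one_big_flow))      # 50 + 999 = 1049
    print(process(many_small_flows))  # 1000 * 50 = 50000: poor locality hurts
```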

VI. PROPOSAL

My current and future research focuses on using hardware to improve the performance of software-defined networks. This section outlines my current work with virtual flow tables.

A. Scenario

The small size of rule tables in hardware switches prevents their adoption in software-defined networks. Software-defined network control planes generate more rules than the network switches can accommodate. This discrepancy is especially pronounced in an NVP-like system: if clients specify virtual overlay networks that cannot be mapped directly onto the physical topology (e.g., a large broadcast domain in the virtual overlay network on top of a physical network with very small broadcast domains), the number of rules that must be installed in each switch (typically in the tens of millions) is much larger than the number of rules a typical switch can hold (switch TCAMs have a few thousand entries). The status quo is to use software switches because these myriad rules fit in memory. However, software switching cannot use the specialized capabilities (e.g., TCAMs) of hardware switches. This trade-off incurs a performance cost.

Current software-defined networks store excess rules (i.e., those that cannot fit in the switch TCAMs) at the controller; the data plane redirects packets to the controller for a forwarding decision if they do not match rules in the physical data plane. However, the out-of-band networks that data centers typically use for control plane traffic have significantly lower bandwidth than the networks that carry the data plane traffic. Latency between a switch and a controller makes it difficult for the controller to accurately collect flow statistics. Without these statistics, a controller has a difficult time deciding which rules to insert into the hardware data plane.

New hardware switches are equipped with powerful computers (known as packet processors). These computers have gigabytes of memory, multiple cores, and can run traditional software (e.g., Linux). Packets can be redirected from the switch ASIC to the packet processor and vice versa.

B. Virtual Flow Tables

My current research focuses on using packet processors to expose a new abstraction: a virtual flow table. A physical flow table is the TCAM-based pipeline in a switch ASIC; it contains a series of rules that match a packet to one or more actions. This resource can be virtualized by implementing the same functionality in software that runs on a packet processor. When a controller interacts with a switch that implements a virtual flow table, it modifies entries in the virtual flow table instead of the physical flow table. Because the virtual flow table is limited by the size of memory (gigabytes) instead of the size of a TCAM (kilobytes), it can hold many more rules than the physical flow table.

A switch with a virtual flow table runs a hybrid data plane: fast forwarding runs in the traditional hardware data plane while slower forwarding runs in a software data plane. A program running on the packet processor observes the rate of traffic being processed by the rules in both data planes and determines the “weight” of each rule (i.e., how much traffic each rule is processing). The program inserts the “heaviest” rules into hardware to maximize data plane forwarding performance. By using DRAM (slow and large) to virtualize the physical flow table (fast and small), it is possible to obtain the effect of a large and fast flow table, provided the optimal (i.e., “heaviest”) rules are inserted into the hardware data plane. The latency between the software and hardware data planes is sufficiently low for the software data plane to make an accurate decision about which rules to insert into the TCAM. The current focus of this project is to find an algorithm for this decision that maximizes the bandwidth carried by the hardware data plane.
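A simple placement heuristic of this kind might look like the sketch below; the rule representation, the measured weights, and the TCAM capacity are assumptions, and the actual placement algorithm is the open question this project addresses:

```python
# Sketch: keep the "heaviest" rules (most observed traffic) in the small hardware
# table and leave the rest in the large software (virtual) flow table.
from typing import Dict

def place_rules(rule_weights: Dict[str, float], tcam_capacity: int):
    """rule_weights: rule id -> observed bytes/sec. Returns (hardware, software) sets."""
    ranked = sorted(rule_weights, key=rule_weights.get, reverse=True)
    hardware = set(ranked[:tcam_capacity])
    software = set(ranked[tcam_capacity:])
    return hardware, software

if __name__ == "__main__":
    weights = {"r1": 9e9, "r2": 5e6, "r3": 8e8, "r4": 1e3, "r5": 2e9}
    hw, sw = place_rules(weights, tcam_capacity=2)
    print(sorted(hw))  # ['r1', 'r5']: heaviest rules pinned in the TCAM
    print(sorted(sw))  # the rest served by the packet processor's virtual flow table
```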

REFERENCES

[1] OpenStack. Website, OpenStack Community.

[2] M. Al-Fares, A. Loukissas, and A. Vahdat. A Scalable, Commodity Data Center Network Architecture. In Proceedings of the ACM SIGCOMM 2008 Conference on Data Communication, SIGCOMM '08, pages 63–74, New York, NY, USA, 2008. ACM.

[3] M. Casado. Data Center Networking in the Era of Overlays. Presentation, Open Networking Summit, 2012.

[4] M. Casado, M. J. Freedman, J. Pettit, J. Luo, N. McKeown, and S. Shenker. Ethane: Taking Control of the Enterprise. In Proceedings of the 2007 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, SIGCOMM '07, pages 1–12, New York, NY, USA, 2007. ACM.

[5] D. D. Clark. The Design Philosophy of the DARPA Internet Protocols. SIGCOMM Comput. Commun. Rev., 25(1):102–111, Jan. 1995.

[6] B. Hindman, A. Konwinski, M. Zaharia, A. Ghodsi, A. D. Joseph, R. Katz, S. Shenker, and I. Stoica. Mesos: A Platform for Fine-Grained Resource Sharing in the Data Center. In NSDI 2011, 2011.

[7] U. Hölzle. OpenFlow at Google. Keynote, Open Networking Summit, 2012.

[8] T. Koponen, K. Amidon, P. Balland, M. Casado, A. Chanda, B. Fulton, I. Ganichev, J. Gross, P. Ingram, E. Jackson, A. Lambeth, R. Lenglet, S.-H. Li, A. Padmanabhan, J. Pettit, B. Pfaff, R. Ramanathan, S. Shenker, A. Shieh, J. Stribling, P. Thakkar, D. Wendlandt, A. Yip, and R. Zhang. Network Virtualization in Multi-tenant Datacenters. In Proceedings of the 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI 14), Seattle, WA, 2014. USENIX Association.

[9] T. Koponen, M. Casado, N. Gude, J. Stribling, L. Poutievski, M. Zhu, R. Ramanathan, Y. Iwata, H. Inoue, T. Hama, and S. Shenker. Onix: A Distributed Control Platform for Large-scale Production Networks. In Proceedings of the 9th USENIX Conference on Operating Systems Design and Implementation, OSDI '10, Berkeley, CA, USA, 2010. USENIX Association.

[10] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner. OpenFlow: Enabling Innovation in Campus Networks. ACM SIGCOMM Computer Communication Review, 38(2):69–74, 2008.

[11] P. Mockapetris and K. J. Dunlap. Development of the Domain Name System. SIGCOMM Comput. Commun. Rev., 18(4):123–133, Aug. 1988.

[12] D. Oppenheimer, A. Ganapathi, and D. A. Patterson. Why Do Internet Services Fail, and What Can Be Done About It? In Proceedings of the 4th USENIX Symposium on Internet Technologies and Systems, USITS '03, Berkeley, CA, USA, 2003. USENIX Association.

[13] B. Pfaff, J. Pettit, K. Amidon, M. Casado, T. Koponen, and S. Shenker. Extending Networking into the Virtualization Layer. In HotNets '09, 2009.

[14] D. Plummer. An Ethernet Address Resolution Protocol: Or Converting Network Protocol Addresses to 48.bit Ethernet Address for Transmission on Ethernet Hardware. RFC 826, Nov. 1982.

[15] A. Vahdat. Google's Experience with Software-Defined Network Function Virtualization at Scale. Keynote Presentation, Open Networking Summit, 2014.
