
Ph.D. Research Plan

Title:

Designing a Data Center Network Based on Software Defined Networking

Submitted by

Nabajyoti Medhi

Department of CSE, NIT Meghalaya

Under the guidance of

Prof. D. K. Saikia


INDEX

1. Introduction
2. Literature Survey
3. Open Issues
4. Research Plan
5. References


1. Introduction

In a present-day data center in the cloud, the computer systems are connected by a high-speed interconnection network built using high-radix switches and high-capacity links. These data centers are built to provide an environment for a wide range of applications such as online financial transaction processing, multimedia content delivery, and highly compute-intensive services. Present-day enterprises rely heavily on these data centers for the centralized control and management they facilitate. Reliability, high availability, high performance and the ability to support growing computing needs are key issues in the design of a data center.

Some of the important characteristics of data centers are a highly scalable interconnection network, the use of virtualization, and multi-tenancy. Tens of thousands of servers are interconnected in a network to achieve the required aggregate computational capacity. A large number of low-cost commodity devices are used to build the interconnection network so that the desired performance is achieved cost-effectively and the network can be scaled up as required.

The use of virtualization has been very successful in data centers for running multiple applications/OSes and supporting multi-tenancy at a large scale. It has also become common to migrate virtual machines dynamically in order to transfer processes between physical machines for performance optimization. With a large number of virtual machines running and virtual machine migrations taking place, the application traffic becomes highly unpredictable.

Tree-like scale-up topologies that have been commonly used in data center networks require high-end backbone switches as one moves up in the hierarchy. As the network size continues to grow, scale-up topologies incur a very high cost. The trend in data center networks is therefore to follow a scale-out design using low-cost commodity switches [37]. Commodity switches are also used in modular data center networks [41]. Modular data centers are portable data centers that can be transported in shipping containers, avoiding geographic limitations. For a scale-out design, the network topology needs to have a symmetric and modular structure.

Data center networks commonly have multiple stages, with each stage consisting of similar interconnecting devices. These networks require good connectivity for fast, unhindered information flow among the various resource-sharing host terminals. In data center networks, traffic is of two types: north-south and east-west. North-south traffic is the traffic between WAN users and the servers, and east-west traffic [35] is the traffic among the host servers within the data center. North-south traffic grows as the number of users increases.

East-west traffic arises from the traffic among the virtual machines residing in the servers. A host server runs several VMs for better resource utilization, and communicating processes running on different VMs produce significant data traffic. Load balancing and Traffic Engineering require VMs to be migrated from one server to another, which generates a significant traffic volume. These factors also make the traffic unpredictable.

The large volume and unpredictable nature of the traffic, along with the need to maintain low latency in communications, require the data center network to have low hop counts and high bandwidth between nodes. This implies that the network should possess the good topological properties of low diameter and high bisection width. The requirements of high scalability and economy of cost imply that the network uses commodity switches, possibly with low-bandwidth ports. To provide high bandwidth between nodes, there is therefore a need for multiple parallel paths. The need for reliability and fault tolerance also requires multiple paths between nodes. Thus, it is preferable in data center networks (DCNs) to use high-radix switches with a larger number of thin ports rather than low-radix switches with fat ports [36].

In the traditional hierarchical structure of a DCN topology, the layers consist of a variety of switching devices serving different purposes. For example, in a 3-layer Fat Tree [4, 20] network, the three layers of switches are commonly core switches (CS), forwarding/aggregation switches (FS) and access switches (AS). Core switches are generally L3 switches that connect the network to the WAN via a WAN router; they route the traffic between the WAN and the data center servers. Aggregation switches are L2 or L3 switches that distribute the traffic received from the core switches among the access switches and, in the reverse direction, take the traffic from the access switches and distribute it among the core switches. Access switches are L2 switches that sit between the host servers and the aggregation switches. In a ToR (Top of Rack) design [42], access switches are placed on top of the server racks, while in an EoR (End of Row) design the access switches are located at the ends of the rows of server racks. ToR design is preferred to EoR in many cases as it leads to lower cabling complexity and lower power consumption [43].
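To make the scale of such a hierarchy concrete, the following minimal sketch computes the device and host counts of a k-ary Fat Tree as derived in [4]; the function name and the printed example with k = 4 are illustrative choices, not part of this proposal.

    # A minimal sketch (Python) of k-ary Fat Tree sizing [4]: k pods, each with
    # k/2 access (edge) and k/2 aggregation switches, (k/2)^2 core switches,
    # and k^3/4 hosts. The function name is illustrative.
    def fat_tree_counts(k):
        assert k % 2 == 0, "k must be even"
        access = k * (k // 2)          # k pods, k/2 access switches each
        aggregation = k * (k // 2)     # k pods, k/2 aggregation switches each
        core = (k // 2) ** 2
        hosts = (k ** 3) // 4          # each access switch serves k/2 hosts
        return {"access": access, "aggregation": aggregation,
                "core": core, "hosts": hosts}

    if __name__ == "__main__":
        print(fat_tree_counts(4))  # {'access': 8, 'aggregation': 8, 'core': 4, 'hosts': 16}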

Fat Tree [4, 20] is the most popular and widely used topology for data center networks; it is also termed the folded Clos topology [20]. HyperX [5] is another popular topology that inherits the characteristics of the flattened butterfly and hypercube topologies [6]. VL2 [13] is a network architecture built using low-cost switches arranged into a Clos topology, which provides extensive path diversity between any two hosts. Its design goal is to provide scalable DCNs with uniform high-capacity bandwidth between hosts, performance isolation between services, and layer-2 semantics for plug-and-play operation. VL2 adopted Valiant Load Balancing (VLB) [14] for managing a DCN in terms of workload, traffic, and failure patterns. BCube [10] is designed for shipping-container-based modular data centers. It takes a server-centric approach using a recursively defined structure and supports various bandwidth-intensive applications with one-to-one, one-to-several, one-to-all, or all-to-all traffic patterns. It also supports graceful performance degradation as host or switch failures occur. The specific design feature of BCube is that it has multiple parallel short paths between any pair of hosts. BCube requires each server to be connected via two ports, which requires each server to be configured for multi-homing. Jellyfish [11] uses a degree-bounded random graph topology among ToR (Top-of-Rack) switches to incrementally expand a DCN. Due to its randomness, the topology allows great flexibility, which leads to easy addition of new devices, support for heterogeneity, and construction of arbitrary-size networks. However, managing the network is very difficult when the network size needs to be changed. DCell [9] takes a server-centric approach in which each end host has an important role in interconnecting hosts and routing traffic between hosts. It uses a recursively defined structure, and each host connects to different levels of DCells via multiple links. Its recursive structure makes it possible to scale up using only mini-switches instead of high-end switches. DCell is fault tolerant and supports high network capacity because it provides rich physical connectivity between servers. However, its design requires a higher wiring cost to interconnect the servers and additional functionality on the servers to route traffic.

Addressing is an important issue in DCNs. IP-based routing in a DCN is not the most suitable approach as it leads to excessive ARP control traffic. PortLand [8] is a scalable and fault-tolerant layer-2 routing protocol for DCNs working on the Fat Tree topology. The main characteristic of PortLand is that it distinguishes the Actual MAC (AMAC) and Pseudo MAC (PMAC) of end hosts. PMACs are managed by a central fabric manager. A PMAC contains the location information of a host server in the Fat Tree topology, and all packet forwarding is carried out using this PMAC. PortLand was an early introduction of the Software Defined Networking (SDN) [1] approach in data center networks; its fabric manager works as the SDN controller that manages the location discovery process.


The present-day trend in data center networks is to optimize the network infrastructure through the use of Software Defined Networking (SDN). Dominant data center players such as Google and Facebook are shifting towards SDN for their data centers. Google introduced its data center network approach, called the B4 network [7], using SDN controllers over OpenFlow switches [22]. OpenFlow [2] is an SDN protocol which gives access to the forwarding plane of the network. The network trend is shifting towards SDN for optimizing Traffic Engineering in data centers.

Structured topologies like the Fat Tree [4, 20] enable us to deploy routing algorithms easily. These properties are hard to achieve in an unstructured topology like Jellyfish [11]. However, network expansion is easier in an unstructured topology than in a structured one. A random topology like Jellyfish is easy to expand but harder to maintain in real life as the network size grows. The Fat Tree topology gives a better structure, but increasing the network size requires a lot of re-wiring and device replacement.

In the present situation, despite many new topology proposals, an easily expandable, high-radix topology is required which can fulfil additional desired topological properties of a DC network, e.g., faster ingress/egress traffic distribution between core and access switches, higher overall network throughput, higher symmetry, higher regularity, higher bandwidth, etc. The deployment of a newer networking trend like SDN improves network performance by reducing congestion, avoiding single (or multiple) points of failure, etc. A single central OpenFlow controller [3] improves routing but degrades scalability and increases controller delay as the network size increases. The use of multiple OpenFlow controllers in different network segments increases the scalability of the network, but this requires the network to be built of regular, symmetric segments so that similar protocols and routing algorithms are applicable in all segments. In this research, it is proposed to design a new topology which supports the above-mentioned features.

A proper data center networking strategy plays a very significant role in any large data center. As the traffic requirement of the DC grows, the hardware cost of a DCN also grows with the increase in the number of switches, routers, links and other networking components. An unmanaged network performs very poorly in terms of network delay, throughput, jitter, etc. as the network size increases. Scalability is another issue which needs much attention as the network grows: the traffic capacity of the network should expand at the same scale as the network size. A DCN must be robust against failures; a managed DCN should not be disrupted by a single point, or multiple points, of failure. Route management is an important issue while dealing with large traffic flows. A DCN should be able to distribute large flows among separate routes to avoid congestion, and both WAN traffic and intra-DC traffic should be distributed with low delay.

Therefore, there exists the need for a new DCN topology that lends itself to the present and future requirements of a data center in the presence of SDN.

2. Literature Survey

2.1 Software Defined Networking:

Software Defined Networking (SDN) [1] is an emerging paradigm in networking research and industry to address the recently growing demands for flexible and agile network control. The main driving factors behind these new demands are the explosion of mobile devices and content, server virtualization, the advent of cloud services, and massive parallel processing of big data. However, conventional networking technologies are not suitable for meeting the current market demands because they are too complex and static, unable to scale, and dependent on network equipment vendors. This discrepancy between the market demands and the current networking technologies has brought research and industry to a turning point. SDN introduces the physical separation of the network control plane from the forwarding plane, where one control plane controls several devices. This availability of control in programmable computing devices enables the underlying network infrastructure to be abstracted for applications and network services. The programmable control plane also enables flexible and rapid modification of network behaviour. The SDN controller, the core of the SDN concept, maintains a global view of the network and controls the network devices following the logic from the application layer. As a result, the network infrastructure appears to the applications as a single logical point. Due to these features, a network administrator can control the entire network from a single logical point, which simplifies network design and operation. In addition, SDN simplifies the network devices as well, because the devices no longer need to implement and process many network protocol standards; instead, they just need to receive instructions from the SDN controller and execute them.

Using SDN, we can make complex routing decisions, which are not possible with current shortest-path-based routing, to efficiently handle dynamically changing traffic patterns, or we can quickly create or modify virtual networks in a cloud computing environment. In addition, we can deploy a high-capacity network at low cost and immediately implement complex network policies that govern security and access control to the network infrastructure.

2.2 OpenFlow:

OpenFlow [2], the SDN protocol that provides the communication interface between SDN controllers and network devices, is considered the most appropriate technology for realizing SDN. OpenFlow allows direct access to the forwarding plane of network devices. OpenFlow 1.0 [25] is the first standard release and is currently the version most widely adopted by OpenFlow switch vendors and SDN controller developers among all the releases [26-29]. OpenFlow uses flows to control network traffic based on pre-defined or dynamic match rules that can be specified by an SDN controller. The SDN controller can be programmed to define how traffic should be handled by OpenFlow switches, using the OpenFlow protocol, based on parameters such as usage patterns, applications, and security policy. An OpenFlow-based SDN architecture provides highly granular control because it allows the network to be controlled on a per-flow basis, which current IP-based routing does not provide.
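As an illustration of this per-flow control, the following minimal sketch installs a static flow rule from a controller application. It assumes the Ryu controller framework and OpenFlow 1.3 purely as one concrete illustration (the proposal itself lists Floodlight [3] and notes that OpenFlow 1.0 is the most widely deployed version; neither choice is prescribed here), and the matched address and output port are hypothetical.

    # A minimal sketch of installing a per-flow rule from an SDN controller,
    # assuming Ryu and OpenFlow 1.3 for brevity (illustrative choices only).
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class StaticFlowPusher(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            # Match IPv4 traffic to a hypothetical host and pin it to port 2.
            match = parser.OFPMatch(eth_type=0x0800, ipv4_dst='10.0.0.2')
            actions = [parser.OFPActionOutput(2)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                          match=match, instructions=inst))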

2.3 Data Center Network Topology and Traffic Engineering

A significant amount of work has been carried out in recent years on data center network topologies and Traffic Engineering. A few of the approaches to DC topologies are Fat Tree [20], HyperX [5], DCell [9], BCube [10] and Jellyfish [11]. The Fat Tree based DCN uses a regular addressing scheme, which simplifies the process of building routing tables, and it provides static ECMP (Equal-Cost Multi-Path) routing [12] to distribute traffic among multiple equal-cost paths by using two-level routing tables. The realization of the two-level routing scheme requires switch modifications, which can be easily implemented using OpenFlow switches.
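Static ECMP selection can be summarized as hashing a flow's header fields onto one of the equal-cost next hops, so that all packets of a flow follow the same path. The sketch below illustrates this idea only; the five-tuple fields, hash function and next-hop names are illustrative assumptions, and hardware implementations differ in detail.

    # A minimal sketch of static ECMP next-hop selection: hash the flow's
    # five-tuple onto one of the equal-cost paths (deterministic per flow).
    import zlib

    def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, next_hops):
        key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
        index = zlib.crc32(key) % len(next_hops)
        return next_hops[index]

    # Example: two equal-cost aggregation switches reachable from an access switch.
    print(ecmp_next_hop("10.0.0.1", "10.0.1.2", "tcp", 4321, 80, ["agg0", "agg1"]))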


2.3.1 Works on Topology for DCN:

The most commonly used DC interconnection topology is the Fat Tree topology [4, 20]. The Fat Tree topology is useful for its multipath ability and non-blocking network structure.
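For experimentation, such a topology can be emulated in Mininet [21]. The minimal sketch below builds a single pod of a k = 4 Fat Tree; the class and node names and the choice of k are illustrative, and a full Fat Tree would add the core layer and the remaining pods.

    # A minimal sketch of one pod of a k=4 Fat Tree in Mininet [21]:
    # k/2 aggregation switches, k/2 access (edge) switches, k/2 hosts per edge.
    from mininet.topo import Topo

    class FatTreePod(Topo):
        def build(self, k=4):
            aggs = [self.addSwitch('agg%d' % i) for i in range(k // 2)]
            for e in range(k // 2):
                edge = self.addSwitch('edge%d' % e)
                for agg in aggs:                  # full bipartite pod wiring
                    self.addLink(edge, agg)
                for h in range(k // 2):           # k/2 hosts per access switch
                    host = self.addHost('h%d_%d' % (e, h))
                    self.addLink(host, edge)

    # Usage (with a running controller, e.g. Floodlight [3]):
    #   from mininet.net import Mininet
    #   net = Mininet(topo=FatTreePod()); net.start()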

VL2 [13] is a network architecture that is built using low-cost switches arranged into a Clos topology, which provides extensive path diversity between any two hosts. Its design goal is to provide scalable DCNs with uniform high capacity between hosts, performance isolation between services, and layer-2 semantics for plug-and-play operation. VL2 adopted Valiant Load Balancing (VLB) [14] to cope with the volatility of a DCN in terms of workload, traffic, and failure patterns. VLB is a method to spread traffic uniformly across network paths by randomly selecting a path, without centralized coordination or Traffic Engineering. Experimental results showed that VLB achieves both the uniform capacity and the performance isolation objectives.

BCube [10], which is designed for shipping-container based modular DCs, takes a server-centric approach using a recursively-defined structure. It supports various bandwidth-intensive applications including one-to-one, one-to-several, one-to-all, or all-to-all traffic patterns. It also supports graceful performance degradation as the host or switch failure rate increases. These two advantages stem from the design feature of BCube that it has multiple parallel short paths between any pair of hosts.

Jellyfish [11] exploits a degree-bounded random graph topology among ToR switches to incrementally expand a DCN. The randomness of the topology allows great flexibility, which leads to easy addition of new components, support of heterogeneity, and construction of arbitrary-size networks. Surprisingly, Jellyfish supports more hosts, has lower mean path length, and is more resilient to failures than a Fat Tree.

DCell [9] takes a server-centric approach, which means that each end host has an important role in interconnecting hosts and routing traffic between two hosts. It uses a recursively-defined structure, and each host connects to different levels of DCells via multiple links. Its recursive structure makes it possible to use only mini-switches to scale up instead of high-end switches. DCell is fault tolerant and supports high network capacity, because it provides rich physical connectivity between servers. However, its design requires higher wiring cost to interconnect the servers and additional functionality on servers to route traffic.

Recently, Google introduced its data center network approach, called the B4 network [7], using SDN at WAN scale. The network trend is shifting towards SDN for better control over the network and to optimize Traffic Engineering in the DCN.

2.3.2 Works on Addressing Schemes for DCN:

Existing DCNs commonly use traditional IP-based as well as L2-based routing for their packet traffic. IP-based addressing generates a large amount of ARP control traffic in a DCN environment. To overcome this, PortLand [8] proposes a scalable and fault-tolerant layer-2 addressing scheme for DCNs working on the Fat Tree topology. The original Fat Tree design uses conventional flat addressing. PortLand updates this design with an addressing scheme that follows a hierarchical fashion and specifies the pod ID, switch ID and host ID in separate fields of an address. PortLand terms this kind of address the Pseudo MAC (PMAC) address, which is mapped to an Actual MAC (AMAC) address via a central fabric manager. PortLand thus provides location-based addressing and routing on the Fat Tree topology. A PMAC contains the location information of an end host in the Fat Tree topology, and all packet forwarding is carried out using this PMAC. PortLand also introduces two-level routing tables which spread outgoing traffic based on the low-order bits of the destination IP addresses; such a mechanism can significantly reduce the number of entries in a routing table. Finally, PortLand introduces the idea of a central fabric manager that can control the overall network.
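The hierarchical PMAC can be illustrated as a simple bit-field encoding. The sketch below assumes the 48-bit pod.position.port.vmid layout (16, 8, 8 and 16 bits respectively) described for PortLand in [8]; the helper names and the example values are illustrative only.

    # A minimal sketch of PortLand-style PMAC encoding [8]: a 48-bit address
    # of the form pod.position.port.vmid (16.8.8.16 bits).
    def encode_pmac(pod, position, port, vmid):
        value = (pod << 32) | (position << 24) | (port << 16) | vmid
        return ':'.join('%02x' % ((value >> s) & 0xff) for s in range(40, -8, -8))

    def decode_pmac(pmac):
        value = int(pmac.replace(':', ''), 16)
        return {'pod': value >> 32, 'position': (value >> 24) & 0xff,
                'port': (value >> 16) & 0xff, 'vmid': value & 0xffff}

    print(encode_pmac(pod=2, position=1, port=3, vmid=1))  # 00:02:01:03:00:01
    print(decode_pmac('00:02:01:03:00:01'))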

2.3.3 Works on Traffic Engineering in DCN:

There have been efforts to come up with suitable Traffic Engineering models for DCNs that can optimize their performance. Some of the significant works are Hedera [15], MicroTE [16], PEFT [17], DLB [18], and ElasticTree [19].

Hedera [15], a dynamic flow routing system for a multi-stage switching fabric, utilizes a centralized approach to route elephant flows exceeding 10 percent of the host Network Interface Card (NIC) bandwidth, while it utilizes the static ECMP (Equal-Cost Multi-Path) algorithm [12] for the remaining short-lived mice flows. The purpose of Hedera is to maximize the bisection bandwidth of a DCN by appropriately placing the elephant flows among multiple alternative paths; the estimated demands of the elephant flows are used for the placement.
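The elephant/mice split can be illustrated with a simple rate threshold, following the 10%-of-NIC-bandwidth criterion from [15]. In the minimal sketch below, the 1 Gbps NIC speed and the flow records are illustrative assumptions, not values from Hedera itself.

    # A minimal sketch of Hedera-style classification [15]: flows whose measured
    # rate exceeds 10% of the host NIC bandwidth are treated as elephants and
    # scheduled centrally; the rest stay on static ECMP.
    NIC_BPS = 1_000_000_000            # assume 1 Gbps host NICs (illustrative)
    THRESHOLD = 0.1 * NIC_BPS

    def classify(flows):
        """flows: dict mapping flow-id -> measured rate in bits/s."""
        elephants = {f: r for f, r in flows.items() if r >= THRESHOLD}
        mice = {f: r for f, r in flows.items() if r < THRESHOLD}
        return elephants, mice

    elephants, mice = classify({'h1->h5': 4.2e8, 'h2->h7': 3.0e6})
    print(sorted(elephants), sorted(mice))   # ['h1->h5'] ['h2->h7']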

Benson et al. proposed MicroTE [16], a centralized system that adapts to traffic variations by leveraging the short-term predictability of DCN traffic to achieve fine-grained TE. To do so, it constantly monitors traffic variations, determines which ToR pairs have predictable traffic, and assigns the predicted traffic to the optimal path, which minimizes the Maximum Link Utilization (MLU). Similar to Hedera, the remaining unpredictable traffic is then routed using weighted ECMP, where the weights reflect the capacity available after the predictable traffic has been assigned.

Penalizing Exponential Flow-splitting (PEFT) was originally proposed by Xu et al. [17] to achieve optimal Traffic Engineering for wide-area ISP networks, where traffic demands are rather static and predictable. Switches running PEFT make forwarding and traffic-splitting decisions locally and independently of each other, i.e. the TE in PEFT is done in a hop-by-hop manner. In addition, packets, even those in the same flow, can be forwarded through a set of unequal-cost paths by exponentially penalizing higher-cost paths.
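The exponential penalty can be illustrated by giving each candidate path a traffic share proportional to e raised to the negative path cost. The sketch below is a simplified local view; the path names and cost values are illustrative, and the actual PEFT link weights come from an offline optimization described in [17].

    # A minimal sketch of PEFT-style splitting: each unequal-cost path receives
    # a share proportional to exp(-cost), penalizing costlier paths exponentially.
    import math

    def peft_split(path_costs):
        weights = {p: math.exp(-c) for p, c in path_costs.items()}
        total = sum(weights.values())
        return {p: w / total for p, w in weights.items()}

    print(peft_split({'via_agg0': 2.0, 'via_agg1': 3.0, 'via_agg2': 4.0}))
    # The cheapest path gets the largest, but not all, of the traffic.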

Dynamic Load Balancing (DLB) [18] is a centralized algorithm that distributes the traffic of incoming network flows so that each alternative path receives an equal amount of traffic. It is designed for an OpenFlow-based DCN that follows a Fat Tree topology. The algorithm utilizes the hierarchical structure of the Fat Tree to recursively search for paths between a given source and destination, and it then makes decisions based on real-time traffic statistics obtained every 5 seconds from the OpenFlow switches. The DLB algorithm is specifically dependent on the Fat Tree topology, so it cannot be applied to other DCN topologies. Moreover, the algorithm selects a path by considering only local conditions, so its path allocation is a locally optimal solution.
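One reasonable reading of this kind of statistics-driven choice is to pick, among the candidate paths, the one whose most loaded (bottleneck) link has the lowest utilization at the latest poll. The sketch below illustrates that simplified reading only, with hypothetical link-utilization data; it is not the exact DLB procedure from [18].

    # A minimal sketch of a load-aware path choice: among candidate paths, pick
    # the one whose bottleneck link is least utilized, using polled statistics.
    def pick_path(candidate_paths, link_utilization):
        """candidate_paths: list of paths, each a list of link names."""
        def bottleneck(path):
            return max(link_utilization[link] for link in path)
        return min(candidate_paths, key=bottleneck)

    stats = {'e0-a0': 0.7, 'a0-c0': 0.2, 'e0-a1': 0.3, 'a1-c1': 0.4}
    print(pick_path([['e0-a0', 'a0-c0'], ['e0-a1', 'a1-c1']], stats))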

ElasticTree [19] is a centralized system for dynamically adapting the energy consumption of a DCN by turning off the links and switches that are not essential to meet the varying traffic demands during a period. The fact that the traffic can be satisfied most of the time by a subset of the network links and switches makes ElasticTree feasible. To find the minimum subset topology, i.e. the essential links and switches, ElasticTree proposes three different algorithms with different optimality and scalability trade-offs: (i) a Linear Programming (LP)-based formal model, (ii) a greedy bin-packing heuristic, and (iii) a topology-aware heuristic.
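The greedy bin-packing idea can be illustrated as assigning each flow to the leftmost path that still has spare capacity, so that traffic is consolidated and unused paths can be powered off. The sketch below is a simplified illustration with hypothetical capacities and demands; the real heuristic in [19] works over the Fat Tree structure itself.

    # A minimal sketch of a greedy bin-packing allocation in the spirit of
    # ElasticTree [19]: pack each flow onto the leftmost path with spare
    # capacity, leaving the remaining paths free to be switched off.
    def greedy_pack(flows, paths, capacity):
        load = {p: 0.0 for p in paths}
        assignment = {}
        for flow, demand in flows.items():
            for p in paths:                    # leftmost-first consolidation
                if load[p] + demand <= capacity:
                    load[p] += demand
                    assignment[flow] = p
                    break
        unused = [p for p in paths if load[p] == 0.0]
        return assignment, unused

    assignment, unused = greedy_pack({'f1': 0.4, 'f2': 0.3, 'f3': 0.5},
                                     ['p0', 'p1', 'p2'], capacity=1.0)
    print(assignment, unused)   # f1, f2 packed on p0; f3 on p1; p2 can sleep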


3. Open Issues

There are several issues related to SDN that researchers are working on. One of these is the search for a more suitable topology for data center networks that lends itself better to the peculiarities of the network in the presence of SDN. Each of the existing and proposed DCN topologies discussed in the previous section has some good aspects, but each of them fails to satisfy some of the DCN needs. The Fat Tree topology [4] has good multipath features, but when the network size expands, re-wiring and re-structuring of the network becomes complex. Jellyfish [11] allows us to build and extend the topology randomly, but network management is very difficult due to the lack of symmetry in the structure.

Nowadays, vendors are choosing modular designs for their data centers due to the flexibility of the design in terms of deployment. Further, low-cost Commercial-Off-The-Shelf (COTS) switches are replacing high-end switches. BCube [10] provides such a good modular DCN design with servers as the relay nodes. But with the servers as relay nodes, it places a heavy load on the servers and suffers from capacity limitations. Further, the servers must have multiple high-capacity network interfaces (NICs) to support high data rates. DCell [9] needs a higher wiring cost to interconnect the servers and additional functionality on the servers to route traffic. Deployment of OpenFlow is difficult in BCube and DCell, as the servers need additional functionality to act as OpenFlow switches.

The next issue concerns selecting a proper addressing and routing scheme for a DCN. A good addressing scheme optimizes routing performance and also helps in avoiding congestion. With SDN, it is possible to implement a new addressing and routing scheme over the whole network from a central point. PortLand [8] provides a location-based layer-2 addressing scheme for the Fat Tree topology. However, this design simply uses a hash function to map traffic to routes and neglects the real-time traffic load on the different routes. Without dynamic load balancing, PortLand is unable to achieve fair link utilization and network scalability. In PortLand, a failure of the fabric manager may lead to failure of the whole network. An SDN controller can provide better fault tolerance and recovery, as it can be deployed centrally as well as in a distributed manner.

Another issue in present DCNs is the excessive growth of operational expenditure (OpEx) as the network size becomes larger. A good Traffic Engineering solution can minimize the OpEx in a DCN. Several works have already been done on Traffic Engineering in DCNs, and a few significant ones were discussed in the previous section. Hedera [15] has the limitation that it leaves the mice flows, which comprise more than 80% of the DCN traffic, to static ECMP routing; in other words, the Traffic Engineering approach in Hedera deals with only about 20% of DCN traffic. The major shortcomings of MicroTE [16] are twofold: a) it lacks scalability due to its extremely short execution cycle, i.e. every 2 s, and b) it requires host modifications to collect fine-grained traffic data. PEFT [17] imposes a heavy load on switches because each switch in PEFT has to measure the traffic volume incoming to and outgoing from its ports, and it has to calculate optimal routing paths using the measured traffic data. Another limitation of PEFT is that the routing decisions of PEFT switches are not globally optimal, because each PEFT switch carries out TE locally and independently. Finally, PEFT delivers packets of the same flow through a set of unequal-cost paths by splitting them, which causes a packet reordering problem. DLB [18] lacks scalability and responsiveness for handling a large number of flows, because it has to be executed in a centralized controller each time a new flow appears in the DCN. The main problem of ElasticTree [19] is that its flow allocation algorithms may cause severe link congestion, because they allocate flows to links while maximizing the utilization of link capacity. Finally, and most importantly, a general TE mechanism may not be suitable for all topologies.


4. Research Plan

The objective of this research is to address the open issues in an SDN-based data center network. The overall research plan can be divided into the following three main research goals:

1. Designing a new DCN topology that addresses the scalability and performance issues,

2. Developing a suitable layer 2 addressing scheme and corresponding set of routing schemes for the new topology,

3. Coming up with suitable Traffic Engineering solutions for the new topology.

The proposed research is subdivided into several objectives as shown in the table below. Each objective requires the completion of a number of tasks, as tabulated. The time allocated to the various tasks is shown in the form of a Gantt chart. Efforts will be made to carry out the work as per this plan.

Research plan Gantt chart (schedule of major activities): objectives O1-O6 and their tasks T1-T18 are scheduled over the Autumn and Spring semesters of the years 2013-2016.

Table: Research objectives and related tasks

O1 Literature Review
T1 Study of concepts behind Software Defined Networking
T2 Exploring OpenFlow controllers and Mininet
T3 Study of data center architectures
T4 Study of existing data center Traffic Engineering mechanisms

O2 Designing a suitable DCN architecture
T5 Finding a new DC topology
T6 Study of the topological properties
T7 Comparison with existing topologies in terms of topological properties

O3 Developing suitable L2 addressing and routing schemes
T8 Developing an L2 addressing scheme
T9 Developing a routing scheme
T10 Implementing the addressing and routing scheme using an SDN controller

O4 Developing a suitable Traffic Engineering solution
T11 Developing a Traffic Engineering solution
T12 Implementing the Traffic Engineering solution using an SDN controller

O5 Testing the proposed design
T13 Simulation study of the performance of the topologies under several traffic scenarios
T14 Performance study of the Traffic Engineering schemes
T15 Reworking and verification of the overall architecture

O6 Finalizing the work and thesis writing
T16 Final findings and conclusions
T17 Determining limitations
T18 Writing the thesis


5. References

1. "Software-Defined Networking (SDN) Definition". https://www.opennetworking.org/sdn-resources/sdn-definition.
2. N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: Enabling innovation in campus networks", ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69-74, 2008.
3. Floodlight OpenFlow Controller. http://www.projectfloodlight.org/floodlight/.
4. M. Al-Fares, A. Loukissas, and A. Vahdat, "A scalable, commodity data center network architecture", in Proc. ACM SIGCOMM '08, Seattle, USA, pp. 63-74, Aug. 17-22, 2008.
5. S. Azizi, F. Safaei, and N. Hashemi, "On the topological properties of HyperX", The Journal of Supercomputing, vol. 66, pp. 572-593, 2013.
6. L. N. Bhuyan and D. P. Agrawal, "Generalized hypercube and hyperbus structures for a computer network", IEEE Transactions on Computers, vol. 33, no. 4, pp. 323-333, 1984.
7. S. Jain, A. Kumar, S. Mandal, J. Ong, L. Poutievski, A. Singh, S. Venkata, J. Wanderer, J. Zhou, M. Zhu, and J. Zolla, "B4: Experience with a globally deployed Software Defined WAN", in Proc. ACM SIGCOMM '13, Hong Kong, China, pp. 3-14, Aug. 12-16, 2013.
8. R. N. Mysore, A. Pamboris, N. Farrington, N. Huang, P. Miri, S. Radhakrishnan, V. Subramanya, and A. Vahdat, "PortLand: A scalable fault-tolerant layer 2 data center network fabric", in Proc. ACM SIGCOMM '09, Barcelona, Spain, pp. 39-50, Aug. 17-21, 2009.
9. C. Guo, H. Wu, K. Tan, L. Shi, Y. Zhang, and S. Lu, "DCell: A scalable and fault-tolerant network structure for data centers", in Proc. ACM SIGCOMM '08, Seattle, USA, pp. 75-86, Aug. 17-22, 2008.
10. C. Guo, G. Lu, D. Li, H. Wu, X. Zhang, Y. Shi, C. Tian, Y. Zhang, and S. Lu, "BCube: A high performance, server-centric network architecture for modular data centers", in Proc. ACM SIGCOMM '09, Barcelona, Spain, pp. 63-74, Aug. 17-21, 2009.
11. A. Singla, C.-Y. Hong, L. Popa, and P. B. Godfrey, "Jellyfish: Networking data centers randomly", in Proc. 9th USENIX Symposium on Networked Systems Design and Implementation (NSDI '12), San Jose, USA, pp. 1-14, Apr. 25-27, 2012.
12. C. Hopps, "Analysis of an Equal-Cost Multi-Path algorithm", RFC 2992, Nov. 2000.
13. A. Greenberg, J. R. Hamilton, N. Jain, S. Kandula, C. Kim, P. Lahiri, D. A. Maltz, P. Patel, and S. Sengupta, "VL2: A scalable and flexible data center network", in Proc. ACM SIGCOMM '09, Barcelona, Spain, pp. 51-62, Aug. 17-21, 2009.
14. R. Zhang-Shen and N. McKeown, "Designing a predictable Internet backbone network with Valiant load-balancing", in Proc. 13th International Conference on Quality of Service (IWQoS '05), vol. 3552 of LNCS, Passau, Germany, pp. 178-192, June 21-23, 2005.
15. M. Al-Fares, S. Radhakrishnan, B. Raghavan, N. Huang, and A. Vahdat, "Hedera: Dynamic flow scheduling for data center networks", in Proc. 7th USENIX Symposium on Networked Systems Design and Implementation (NSDI '10), San Jose, USA, pp. 1-15, Apr. 28-30, 2010.
16. T. Benson, A. Anand, A. Akella, and M. Zhang, "MicroTE: Fine grained Traffic Engineering for data centers", in Proc. 7th International Conference on emerging Networking EXperiments and Technologies (CoNEXT '11), Tokyo, Japan, pp. 1-12, Dec. 6-9, 2011.
17. F. P. Tso and D. P. Pezaros, "Improving data center network utilization using near-optimal Traffic Engineering", IEEE Transactions on Parallel and Distributed Systems, vol. 24, pp. 1139-1148, June 2013.
18. Y. Li and D. Pan, "OpenFlow based load balancing for Fat-Tree networks with multipath support", in Proc. 12th IEEE International Conference on Communications (ICC '13), Budapest, Hungary, pp. 1-5, June 9-13, 2013.
19. B. Heller, S. Seetharaman, P. Mahadevan, Y. Yiakoumis, P. Sharma, S. Banerjee, and N. McKeown, "ElasticTree: Saving energy in data center networks", in Proc. 7th USENIX Symposium on Networked Systems Design and Implementation (NSDI '10), San Jose, USA, pp. 1-16, Apr. 28-30, 2010.
20. C. E. Leiserson, "Fat-Trees: Universal networks for hardware-efficient supercomputing", IEEE Transactions on Computers, vol. 34, no. 10, pp. 892-901, 1985.
21. "Mininet: An instant virtual network on your laptop (or other PC)". http://mininet.org/.
22. "Open vSwitch: An open virtual switch". http://openvswitch.org/.
23. "VMware: VMware virtualizes computing, from the data center to the cloud to mobile". http://www.vmware.com/in/.
24. "VirtualBox: A general-purpose full virtualizer for x86 hardware targeted at server, desktop and embedded use". https://www.virtualbox.org/.
25. "OpenFlow Switch Specification Version 1.0.0", Open Networking Foundation, Dec. 31, 2009.
26. "OpenFlow Switch Specification Version 1.1.0", Open Networking Foundation, Feb. 28, 2011.
27. "OpenFlow Switch Specification Version 1.2.0", Open Networking Foundation, Dec. 5, 2011.
28. "OpenFlow Switch Specification Version 1.3.0", Open Networking Foundation, June 25, 2012.
29. "OpenFlow Switch Specification Version 1.4.0", Open Networking Foundation, Oct. 14, 2013.
30. K. Elmeleegy, A. L. Cox, and T. S. E. Ng, "Understanding and mitigating the effects of count to infinity in Ethernet networks", IEEE/ACM Transactions on Networking, vol. 17, no. 1, pp. 186-199, 2009.
31. "MiniEdit: A visual virtual network building program based on Mininet". https://github.com/mininet/mininet/blob/master/examples/miniedit.py.
32. "Wireshark: A network protocol analyzer". http://www.wireshark.org/.
33. "Iperf: The TCP/UDP bandwidth measurement tool". https://iperf.fr/.
34. "Eclipse: Java IDE". http://www.eclipse.org/.
35. "Building Virtualization-Optimized Data Center Networks", technical white paper, HP, 2011. http://h17007.www1.hp.com/docs/mark/4aa3-3346enw.pdf.
36. J. Kim, W. J. Dally, B. Towles, and A. K. Gupta, "Microarchitecture of a high-radix router", in Proc. International Symposium on Computer Architecture (ISCA), Madison, WI, pp. 420-431.
37. A. Vahdat, M. Al-Fares, N. Farrington, and R. N. Mysore, "Scale-out networking in the data center", IEEE Micro, vol. 30, no. 4, pp. 29-41, 2010.
38. C. C. Tu, "Cloud-Scale Data Center Network Architecture", Monografia, 2011.
39. N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, and N. McKeown, "Reproducible network experiments using container-based emulation", in Proc. CoNEXT '12, Nice, France, 2012.
40. T. Benson, A. Akella, and D. A. Maltz, "Network traffic characteristics of data centers in the wild", in Proc. ACM Internet Measurement Conference (IMC '10), Melbourne, Australia, pp. 267-280, Nov. 2010.
41. S. Radhakrishnan, M. Tewari, R. Kapoor, G. Porter, and A. Vahdat, "Dahu: Commodity switches for direct connect data center networks", in Proc. 9th ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS '13), October 2013.
42. "Data Center Top-of-Rack Architecture Design", technical white paper, Cisco Systems, 2009. http://www.cisco.com/c/en/us/products/collateral/switches/nexus-5000-series-switches/white_paper_c11-522337.pdf.
43. R. Carlson, Communications Cable and Connectivity Association, "Considerations for choosing top-of-rack in today's fat-tree switch fabric configurations", Cabling Installation & Maintenance Magazine, April 2014. http://www.cccassoc.org/files/4214/0539/1343/ToR_in_Fat_Tree-CIM4-2014.pdf.
