
Bandwidth Sharing Scheme in DiffServ-aware MPLS Networks

Norashidah Md Din*, Hazlinda Hakimie* and Norsheila Fisal^η

*Department of Electrical Engineering, College of Engineering, Universiti Tenaga Nasional, KM 7, Jalan Kajang-Puchong, 43009 Kajang, Selangor, Malaysia
{norashidah, hazlinda}@uniten.edu.my

^η Telekom's Laboratory, Faculty of Electrical Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor Bahru, Malaysia
sheila@fke.utm.my

Abstract—This paper proposes a bandwidth sharing scheme for DiffServ-aware MPLS networks based on the Russian Dolls bandwidth allocation model. Three DiffServ traffic class types were used in the DiffServ-aware MPLS network under study: real time constant bit rate premium, real time variable bit rate assured and non-real time best effort traffic. We propose a preemption scheme based on rerouting or resizing the longest existing flow first. A simulation study of the scheme was performed using ns-2, and an analysis was made to show the significance of borrowing and preemption in this environment. The results show significant improvement in blocking probability when both borrowing and preemption are used.

Keywords—DiffServ, Triple Play, Russian Dolls, Bandwidth Sharing, Preemption

I. INTRODUCTION

Traffic engineering is essential for optimum use of transmission capacity and for making networks resilient so that they can withstand link or node failures. With Multiprotocol Label Switching (MPLS) [1, 2, 3], traffic engineering ideally routes traffic flows across a network based on the resources each flow requires and the resources available in the network. Path preemption is also possible in MPLS, where an existing path can be discontinued so that a higher priority path may be established. The path taken can be reserved through a signalling protocol like RSVP-TE or by using constraint-based routing. RSVP-TE establishes Label Switched Paths (LSPs) in MPLS through path reservation. Constraint-based routing, on the other hand, ascertains a path that satisfies constraints of interest such as delay, jitter, throughput or loss, rather than finding only the shortest path.

An important feature of MPLS is the ability to set up LSPs for different services. Differentiated Services (DiffServ) [4] technology can complement it in providing the required service differentiation. By mapping DiffServ traffic to MPLS LSPs, DiffServ-aware MPLS networks can treat traffic according to its class.

Bandwidth constraint models have been proposed for the DiffServ-aware MPLS environment [5, 6, 7]. This work proposes an implementation using the Russian Dolls bandwidth allocation model. Section II covers related work in bandwidth allocation models, Section III describes the bandwidth sharing algorithm with the Russian Dolls bandwidth allocation model, Section IV discusses the simulation work and Section V concludes.

II. BANDWIDTH ALLOCATION MODELS

Currently, three bandwidth constraint models for DiffServ-enabled MPLS traffic engineering have been specified by the IETF. The first is the maximum allocation bandwidth constraints model described in RFC 4125 [5], which consists of a bandwidth constraint for each traffic class and a maximum reservable bandwidth value, normally set to the link capacity. The summation of the bandwidth constraints can exceed the maximum reservable bandwidth. Higher priority traffic can preempt lower priority traffic to obtain its full allocated bandwidth. For example, in a scenario where the voice traffic class is allocated a bandwidth constraint of 1.5 Gbps, the data traffic class is allocated a bandwidth constraint of 2.0 Gbps and the maximum reservable bandwidth is 3.0 Gbps, the aggregate of voice and data traffic is limited to 3.0 Gbps. The voice LSPs always have higher preemption priority so that they can use the 1.5 Gbps capacity, and will preempt the data LSPs to achieve this. The data LSPs can use, up to the link capacity, the bandwidth left by the voice LSPs.

Proceedings of the 2007 IEEE International Conference on Telecommunications and Malaysia International Conference on Communications, 14-17 May 2007, Penang, Malaysia
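The maximum allocation rule just described can be sketched as a small admission check. This is an illustrative sketch, not RFC 4125's normative algorithm; the function name, the dictionary layout and the use of the 1.5/2.0/3.0 Gbps values from the example above are assumptions.

```python
GBPS = 1e9

# Per-class bandwidth constraints and the link-wide maximum reservable
# bandwidth, using the example figures from the text (assumed values).
BC = {"voice": 1.5 * GBPS, "data": 2.0 * GBPS}
MAX_RESERVABLE = 3.0 * GBPS


def mam_admit(cls: str, new_bw: float, reserved: dict) -> bool:
    """Under MAM each class is capped independently by its own bandwidth
    constraint, and the aggregate across all classes by the maximum
    reservable bandwidth. `reserved` maps class -> bandwidth in use."""
    class_ok = reserved.get(cls, 0.0) + new_bw <= BC[cls]
    total_ok = sum(reserved.values()) + new_bw <= MAX_RESERVABLE
    return class_ok and total_ok
```

Note that both checks are needed: a 2.0 Gbps data request fits its own constraint, but is refused once 1.5 Gbps of voice is reserved, because the aggregate would exceed 3.0 Gbps.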

The second model is the maximum allocation with reservation model, with bandwidth reservation and protection mechanisms, as defined in RFC 4126 [6]. It is similar to the maximum allocation bandwidth constraints model above in that a maximum bandwidth allocation is given to each traffic class type. However, through the use of bandwidth reservation and protection mechanisms, each traffic class type is allowed to exceed its bandwidth allocation only under conditions of no congestion, and must revert to its allocated bandwidth when overload and congestion occur.

The third model introduced was the Russian Dolls bandwidth constraints model outlined in RFC 4127 [7], which consists of a bandwidth constraint for each traffic class, with a maximum of eight class types. The maximum allowable bandwidth usage is specified cumulatively, by grouping successive class types according to priority. A lower priority class can use higher priority class bandwidth, up to the summation of their bandwidth constraint values, and higher priority traffic can preempt lower priority traffic to obtain its full allocated bandwidth. For example, in a scenario where the voice traffic class is allocated a bandwidth constraint of 1.5 Gbps and the voice-plus-data aggregate a cumulative bandwidth constraint of 3.0 Gbps, the aggregate of voice and data traffic is limited to 3.0 Gbps. The voice LSPs are confined to the 1.5 Gbps capacity and will preempt the data LSPs when necessary to achieve this. The data LSPs can use, up to the link capacity, the bandwidth left by the voice LSPs.

In RFC 4128 [8], a performance analysis of the Russian Dolls and maximum allocation models is described. The general theme of the investigation is the trade-off between bandwidth sharing, to achieve greater efficiency under normal conditions, and robust class protection/isolation under overload. The Russian Dolls model was found to allow greater sharing of bandwidth among different classes and performs somewhat better under normal conditions. The maximum allocation model, on the other hand, does not depend on the use of preemption, but provides more robust class isolation under overload. The study concluded that the use of preemption gives higher priority traffic some degree of immunity to the overloading of other classes. This results in higher blocking/preemption for the overloaded class than in a pure blocking environment.

In this work we propose a bandwidth sharing scheme based on preemption, by rerouting best effort traffic or resizing the adaptive rate assured forwarding traffic class. The longest existing lower priority flow is preempted first.

III. ADMISSION AND BANDWIDTH SHARING

The DiffServ-aware MPLS network model used in this work, shown in Fig. 1, is assumed to be able to cater for three classes of multimedia traffic: real time constant bit rate traffic or premium traffic, real time variable bit rate traffic or assured rate traffic, and non-real time variable bit rate traffic or best effort traffic. The premium traffic is associated with DiffServ's Expedited Forwarding Per Hop Behaviour (EF PHB) [9] and the assured rate traffic with the Assured Forwarding Per Hop Behaviour (AF PHB) [10]. The best effort traffic, known as BE PHB, corresponds to the traffic of the traditional Internet.

Fig. 1: DiffServ-aware MPLS Network Model

For admission into the network, each traffic flow is allocated a bandwidth value at the ingress node. The EF and AF real time traffic flows are allocated bandwidth according to their peak rates, whereas the BE traffic flows are allocated their mean rates.

Bandwidth sharing is based on the Russian Dolls bandwidth allocation model. The RDM used in this work follows these constraints:

• All LSPs with EF PHB use no more than 40% of access bandwidth

• All LSPs from EF and AF PHBs use no more than 80% of access bandwidth

• All LSPs from EF, AF and BE PHBs use no more than

100% of access bandwidth

The access bandwidth per class in the Russian Dolls Model (RDM) is illustrated in Fig. 2, and the bandwidth range for each traffic class is shown in Table 1. Lower priority traffic can borrow from higher priority traffic, and higher priority traffic is able to preempt lower priority traffic by rerouting or resizing the lower priority traffic first. Resizing applies to AF traffic, since it is adaptive rate traffic such as MPEG-4 type traffic, while rerouting applies to BE traffic.
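The 40/80/100% cumulative constraints above can be sketched as a single admission check. This is a hypothetical illustration: the function and variable names are assumptions, and only the constraint arithmetic follows the RDM split given in this section (using the 3 Mbps link of the simulation study).

```python
LINK_BW = 3.0e6  # 3 Mbps access link, as in the simulation study

# Cumulative RDM constraints: EF alone <= 40%, EF+AF <= 80%,
# EF+AF+BE <= 100% of the access bandwidth.
BC = {"EF": 0.40 * LINK_BW, "AF": 0.80 * LINK_BW, "BE": 1.00 * LINK_BW}
ORDER = ["EF", "AF", "BE"]  # highest to lowest priority


def rdm_admit(new_cls: str, new_bw: float, reserved: dict) -> bool:
    """Return True if a new flow fits under all cumulative constraints.

    `reserved` maps class -> bandwidth already reserved on the link.
    Every cumulative sum (EF; EF+AF; EF+AF+BE) is re-checked, since a
    new EF flow tightens all three.
    """
    r = dict(reserved)
    r[new_cls] = r.get(new_cls, 0.0) + new_bw
    cumulative = 0.0
    for cls in ORDER:
        cumulative += r.get(cls, 0.0)
        if cumulative > BC[cls]:
            return False
    return True
```

The sketch captures borrowing but not preemption: a BE flow of 2.5 Mbps is admitted on an empty link (borrowing from the EF and AF shares), and would later be the preemptor's target if EF or AF traffic arrives.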

The RDM pseudocode with preemption is as follows:

1  Set up simulation time = 3000 s
2  Set traffic load
3  Set random arrival rate with required mean
4  Set random traffic lifetime with required mean
5  Configure network topology and traffic parameters
6  Continuously start EF, AF and BE sources according to their arrival rates
7  Continuously terminate sources according to their lifetimes
8  Invoke the following admission control process when a source start-time is invoked:
9  For new EF connection:
10   Proc new_traffic_EF
11     Calculate EF available bandwidth
12     Check if EF_new_connection_bw <= available_EF_bw
13     If yes accept_EF_connection
14     else
15       if EF_new_connection_bw > available_EF_bw
16         call RDM preemptor procedure
17 For new AF connection:
18   Proc new_traffic_AF
19     Calculate AF available bandwidth
20     Check if AF_new_connection_bw <= available_AF_bw
21     If yes accept_AF_connection
22     else
23       if AF_new_connection_bw > available_∑EF+AF_bw
24         call RDM preemptor procedure
25 For new BE connection:
26   Proc new_traffic_BE
27     Calculate BE available bandwidth
28     Check if BE_new_connection_bw <= available_∑EF+AF+BE_bw
29     If yes accept_BE_connection
30     else
31       if BE_new_connection_bw > available_∑EF+AF+BE_bw
32         reject_BE_connection
33 Proc RDM preemptor
34   Preempt or reroute flows based on longest existing flow first
35   If no flows to preempt or reroute, then reject connection

Lines 1-7 describe the traffic and network set-up. The arrival and termination of traffic flows are generated randomly using a Poisson distribution. Upon a connection admission request, admission control is invoked and the appropriate procedure is triggered according to the class type. The EF traffic class cannot borrow, but preempts the AF and BE traffic classes by taking the longest existing flow first, as shown in lines 10-16. The AF traffic class can borrow from the EF traffic class and preempt longer-existing AF flows or BE flows, as in lines 17-24. Lines 25-32 show the admission control flow for a BE admission request. The BE traffic class can borrow from the EF and AF traffic classes. However, it cannot preempt any other flows and is rejected if it has no available bandwidth and no bandwidth to borrow.
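The preemptor of pseudocode lines 33-35 (take the longest existing lower priority flow first, rerouting BE or resizing AF) might be sketched as follows. The `Flow` record, its field names and the numeric priority encoding are assumptions for illustration, not part of the original simulation code.

```python
from dataclasses import dataclass


@dataclass
class Flow:
    cls: str      # "EF", "AF" or "BE"
    bw: float     # reserved bandwidth (bps)
    start: float  # admission time (s); smaller = longer-existing

PRIORITY = {"EF": 0, "AF": 1, "BE": 2}  # 0 = highest priority


def rdm_preempt(new_cls: str, needed_bw: float, flows: list):
    """Choose victims to free `needed_bw` for a `new_cls` flow.

    Only strictly lower priority flows are candidates; they are taken
    longest existing flow first. Returns the list of flows to reroute
    (BE) or resize (AF), or None if not enough can be reclaimed
    (line 35: reject the connection).
    """
    candidates = [f for f in flows if PRIORITY[f.cls] > PRIORITY[new_cls]]
    candidates.sort(key=lambda f: f.start)  # oldest = longest existing
    reclaimed, victims = 0.0, []
    for f in candidates:
        if reclaimed >= needed_bw:
            break
        victims.append(f)
        reclaimed += f.bw
    return victims if reclaimed >= needed_bw else None
```

A BE request never finds candidates (nothing has lower priority), so the sketch returns None for it, matching the rule that BE cannot preempt.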

Fig. 2: Access Bandwidth Allocation

Table 1: Bandwidth Range

% EF PHB traffic: 40
% AF PHB traffic: Min 40, Max 80
% BE PHB traffic: Min 20, Max 100

IV. SIMULATION STUDY

A simulation study was carried out for the proposed bandwidth sharing scheme, based on the network model of Fig. 1, using the ns-2 network simulator [11]. There are 11 nodes in the network, i.e. 3 sources, 3 sinks and 5 MPLS nodes. The bandwidth of each link is 3 Mbps. Each admitted traffic flow is assigned an LSP based on its class. The LSP is sized on the peak rate value for EF traffic, the adaptive rate for AF traffic and the mean bandwidth for BE traffic.

The performance metric used in evaluating the bandwidth sharing scheme is the blocking probability at the ingress node. The blocking probability is obtained for various offered traffic loads by dividing the number of rejected flows by the total number of admission requests. The offered traffic load is obtained by dividing the mean traffic arrival rate, λ, by the mean service rate, µ, i.e. λ/µ Erlangs. The mean service rate is the inverse of the mean holding or service time. The basic traffic descriptions are given in Table 2. All simulation runs take 3000 s to completely eliminate any transient effects, as suggested by [12]. The network model for the simulation work was validated using the Erlang loss formula [13].
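The Erlang loss (Erlang B) formula used for the validation can be evaluated with the standard numerically stable recurrence B(0) = 1, B(n) = A·B(n-1) / (n + A·B(n-1)), where A is the offered load in Erlangs and n the number of servers; the function name and the example figures are illustrative only.

```python
def erlang_b(offered_load: float, servers: int) -> float:
    """Erlang B blocking probability via the stable recurrence.

    offered_load: A = lambda / mu in Erlangs
    servers: number of circuits, e.g. link capacity divided by the
             per-flow bandwidth allocation
    """
    b = 1.0  # B(0)
    for n in range(1, servers + 1):
        b = (offered_load * b) / (n + offered_load * b)
    return b
```

For a single server this reduces to the familiar A/(1+A), and the recurrence avoids the factorials of the closed-form expression, so it stays accurate even for large server counts.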

Table 2 Traffic Description

To study the performance of the bandwidth sharing scheme, the EF premium traffic is first gradually admitted to the DiffServ-aware MPLS network from a smaller to a bigger offered traffic load, i.e. 1 to 9 Erlangs, while the assured and best effort offered traffic loads are each fixed at 50 Erlangs. The simulation is then repeated with the AF traffic varied from 1 to 9 Erlangs and the EF and BE traffic held constant at 50 Erlangs, and repeated again with the BE traffic class varied from 1 to 9 Erlangs and the EF and AF classes held constant at 50 Erlangs. The mean arrival rates and mean holding times used are given in Table 3. The values are arbitrarily chosen, but they provide heavy load conditions under which borrowing occurs, so that preemption is relevant.

Once a connection is accepted, it is assigned an LSP using the shortest path route. The simulation work comprises an investigation of RDM admission control with and without preemption. Without preemption, only borrowing is allowed, whereas with preemption the higher priority traffic can claim back its allocated bandwidth by rerouting or resizing the lower priority traffic, longest existing flow first.

The RDM without preemption pseudocode is as follows:

1  Set up simulation time = 3000 s
2  Set traffic load
3  Set random arrival rate with required mean
4  Set random traffic lifetime with required mean
5  Configure network topology and traffic parameters
6  Continuously start EF, AF and BE sources according to their arrival rates
7  Continuously terminate sources according to their lifetimes
8  Invoke the following admission control process when a source start-time is invoked:
9  For new EF connection:
10   Proc new_traffic_EF
11     Calculate EF available bandwidth
12     Check if EF_new_connection_bw <= available_EF_bw
13     If yes accept_EF_connection
14     else
15       if EF_new_connection_bw > available_EF_bw
16         reject_EF_connection
17 For new AF connection:
18   Proc new_traffic_AF
19     Calculate EF and AF available bandwidth
20     Check if AF_new_connection_bw <= available_AF_bw
21     If yes accept_AF_connection
22     else
23       if AF_new_connection_bw > available_∑EF+AF_bw
24         reject_AF_connection
25 For new BE connection:
26   Proc new_traffic_BE
27     Calculate EF, AF and BE available bandwidth
28     Check if BE_new_connection_bw <= available_∑EF+AF+BE_bw
29     If yes accept_BE_connection
30     else
31       if BE_new_connection_bw > available_∑EF+AF+BE_bw
32         reject_BE_connection

The traffic and network set-up are similar to the ones with preemption, i.e. lines 1-7, and the arrival and termination of traffic flows are likewise generated randomly using a Poisson distribution. Similarly, on a connection arrival, admission control takes place and the appropriate procedure is triggered based on the connection class type. The EF traffic class can neither borrow nor preempt, as in lines 10-16. The AF traffic class can borrow from the EF traffic class but cannot preempt any flows, as in lines 17-24. Again, the BE traffic class can borrow from the EF and AF traffic classes but cannot preempt any flows, as illustrated in lines 25-32.

Fig. 3 shows the blocking performance when the EF traffic is gradually increased from 1 to 9 Erlangs while the AF and BE traffic are held constant at 50 Erlangs. EF with RDM preemption (EF-P) has 20-30% better blocking compared to EF without RDM preemption (EF-NP). AF with RDM preemption (AF-P) also shows better blocking performance, by about 5%. For the BE traffic class no blocking was experienced, with or without preemption, since BE traffic can borrow bandwidth and is rerouted when preempted.

Fig. 4 provides the blocking performance when the AF traffic is gradually increased from 1 to 9 Erlangs while the EF and BE traffic are held constant at 50 Erlangs. EF-P, with a constant 50 Erlangs of traffic, has about 5% better blocking compared to EF-NP. As the AF traffic is increased, blocking increases. AF with RDM preemption (AF-P) shows 20% better blocking performance. Again, no blocking was experienced for BE traffic, with or without preemption, since BE traffic can borrow bandwidth and is rerouted when preempted.

Fig. 5 provides the blocking performance when the BE traffic is gradually increased from 1 to 9 Erlangs while the EF and AF traffic are held constant at 50 Erlangs. No difference is detected between the EF-P and EF-NP blockings as BE traffic is increased, because the increase in the BE traffic class does not affect the constant-load EF flows. The AF traffic shows a significant lowering of the blocking probability, about 80% at 9 Erlangs, when BE traffic is increased. BE without preemption (BE-NP) blocking increases as the BE offered load increases, whereas BE with preemption (BE-P) experienced almost no blocking. This is again attributed to BE-P being able to borrow and to being rerouted when preempted.

Fig. 3 Blocking Probability when EF Offered Traffic is increased (x-axis: EF offered traffic, 1-9 Erlangs; y-axis: blocking probability, 0.0-1.0; series: EF-NP, EF-P, AF-NP, AF-P, BE-NP, BE-P)

Fig. 4 Blocking Probability when AF Offered Traffic is increased (x-axis: AF offered traffic, 1-9 Erlangs; y-axis: blocking probability, 0.0-1.0; series: EF-NP, EF-P, AF-NP, AF-P, BE-NP, BE-P)

Fig. 5 Blocking Probability when BE Offered Traffic is increased (x-axis: BE offered traffic, 1-9 Erlangs; y-axis: blocking probability, 0.0-1.0; series: EF-NP, EF-P, AF-NP, AF-P, BE-NP, BE-P)

V. CONCLUSION

This paper looks at bandwidth sharing in a DiffServ-aware MPLS network based on the RDM model. Lower priority traffic is allowed to borrow bandwidth, limited by the respective cumulative bandwidth constraints, whereas bandwidth preemption runs from higher to lower priority traffic. We propose that bandwidth preemption be based on resizing adaptive variable bit rate traffic and rerouting best effort elastic Internet traffic. The results show that borrowing and preemption together can form the basis of bandwidth sharing in a DiffServ-aware MPLS network, as opposed to borrowing only. We see equal or better performance, i.e. up to 80% improvement in blocking probability, when the preemption scheme is in place.

REFERENCES

[1] F. L. Faucheur and W. Lai, "Requirements for Support of DiffServ-aware MPLS Traffic Engineering," RFC 3564, July 2003.

[2] F. L. Faucheur, L. Wu, B. Davie, S. Davari, P. Vaananen, R. Krishnan, P.Cheval, J. Heinanen, “MPLS Support of Differentiated Services”, RFC 3270, IETF, May 2002.

[3] E. Rosen, A. Viswanathan, and R. Callon, “Multiprotocol Label Switching Architecture”, RFC 3031, IETF, January 2001.

[4] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services", RFC 2475, IETF, Dec. 1998.

[5] F. L. Faucheur and W. Lai, "Maximum Allocation Bandwidth Constraints Model for Diffserv-aware MPLS Traffic Engineering," RFC 4125, IETF, June 2005.

[6] J. Ash, "Max Allocation with Reservation Bandwidth Constraints Model for Diffserv-aware MPLS Traffic Engineering & Performance Comparison," RFC 4126, IETF, June 2005.

[7] F. L. Faucheur, "Russian Dolls Bandwidth Constraints Model for DiffServ-aware MPLS Traffic Engineering," RFC 4127, IETF, June 2005.

[8] W. Lai, "Bandwidth Constraints Models for Differentiated Services (Diffserv)-aware MPLS Traffic Engineering: Performance Evaluation," RFC 4128, IETF, June 2005.

[9] V. Jacobson, K. Nichols and K. Poduri, “An Expedited Forwarding PHB” RFC 2598, June 1999.

[10] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, “Assured Forwarding PHB Group”, RFC 2597, June 1999.

[11] ns-2 Network Simulator [Online]. Available http://www.isi.edu/nsnam/ns/.

[12] S. Jamin, P.B. Danzig, S. J. Shenker and L. Zhang, “A measurement-based admission control algorithm for integrated service packet networks,” IEEE/ACM Trans. Netw., vol. 5, no. 1, pp. 56 – 70, Feb. 1997.

[13] L. Kleinrock, Queueing Systems, Volume 1: Theory, New York: John Wiley, 1975.
