Network Management
Quality of Service I
Patrick J. Stockreisser
Lecture Outline
Basic Network Management (Recap)
Introduction to QoS
Packet Switched Networks (Recap)
Common QoS approaches
Best Effort Service
(Integrated Service)
(Differentiated Service)
Network Management
Network management means different things
to different people.
Network management is the execution of a
set of functions required for controlling,
planning, allocating, deploying, coordinating,
and monitoring the resources of a network.
Generally it is a service that employs a
variety of tools, applications, and devices to
assist human network managers in monitoring
and maintaining networks.
History
Early 1980s
Great expansion in the area of network deployment.
Cost benefits
Productivity Gains
By mid-1980s,
Growing difficulties in management of the networks
A result of deploying many different (and sometimes incompatible) network technologies
By late-1990s
Wireless technologies emerge (standards agreed)
Network Management
Automation
The problems associated with network expansion
affect both day-to-day network operation
management and strategic network growth planning.
Each new network technology requires its own set of
experts.
In the early 1980s, the staffing requirements alone for
managing large, heterogeneous networks created a
crisis for many organizations.
An urgent need arose for automated network management.
Some Network Management
Functions
Security:
Ensuring that the network is protected from
unauthorized users.
Performance:
Eliminating bottlenecks in the network.
Reliability:
Making sure the network is reliable to users and
responding to hardware and software malfunctions.
Quality of Service
In computer networking, the traffic engineering
term Quality of Service (QoS) refers to
control mechanisms that can provide different
priority to different users or data flows, or
guarantee a certain level of performance to a
data flow in accordance with requests from
the application program.
An Analogy for
Quality of Service
Consider two competing cargo airlines (A and B)
operating out of New York JFK airport with a service
to London Heathrow.
Both airlines have the same aircraft, both airlines charge the
same rate per package shipped
Both airlines offer seven flights a week.
There is little to differentiate the service offered by
these two companies.
These competing airlines offer the same throughput
Same Service? Same QoS?
Imagine this:
Airline A offers one flight per day for each day of the week
Airline B offers all of its seven flights on a single day of the week.
So while both airlines provide the same throughput capacity over the weekly period, they differ greatly in the actual service provided.
Depending on your delivery needs these different airline service models
will succeed or fail in quite dramatic fashion.
E.g. an online real-time mail order business
E.g. stock supply for a large warehouse
Obviously, the business requirements for delivery define which service
model works best.
Real-time (daily) demand needs a very regular and consistent service
Some other QoS definitions
“QoS is described in terms of a set of user perceived
characteristics of the performance of a service. It is
expressed in user-understandable language and
manifests itself as a number of parameters, all of
which have either subjective or objective values”
“QoS refers to the capability of a network to provide
better service to selected network traffic over various
technologies”
“QoS is the collective effect of service performance
which determines the degree of satisfaction of a user
of the service”
The simplest of networks
Packets are sent between nodes.
Network Basics
Computers on the Internet communicate with
each other using either the
Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP)
When you write programs that
communicate over a network, you are programming at the application layer. Typically, you don't need to concern yourself with the TCP and UDP layers.
Packet Review
A network breaks a message into parts of a certain size in bytes; these are the packets
Each packet carries the information that will help it get to its
destination:
the sender's IP address,
the intended receiver's IP address,
something that tells the network how many packets this e-mail
message has been broken into and
the number of this particular packet.
The packets carry the data in the protocols that the Internet
uses:
Transmission Control Protocol/Internet Protocol (TCP/IP).
Each packet contains part of the body of your message. A typical
packet contains perhaps 1,000 or 1,500 bytes.
Depending on the network technology, these units may be called packets, frames, blocks, or cells
Packet Sending
Each packet is then sent off to its destination
by the best available route
A route that might be taken by all the other
packets in the message or by none of the
other packets in the message.
This makes the network more efficient.
Load balancing
Packet Structure
Most packets are split into three parts:
1. Header - The header contains instructions about the data carried by the packet. These
instructions may include:
Length of packet (some networks have fixed-length packets, while others rely on the header to
contain this information)
Synchronization (a few bits that help the packet match up to the network)
Packet number (which packet this is in a sequence of packets)
Protocol (on networks that carry multiple types of information, the protocol defines what type of
packet is being transmitted: e-mail, Web page, streaming video)
Destination address (where the packet is going)
Originating address (where the packet came from)
2. Payload - Also called the body or data of a packet. This is the actual data that the packet is
delivering to the destination. If a packet is fixed-length, then the payload may be padded with blank information to make it the right size.
3. Trailer - The trailer, sometimes called the footer, typically contains a couple of bits that tell the
receiving device that it has reached the end of the packet. It may also carry some form of error
checking; the most common error check used in packets is the Cyclic Redundancy Check (CRC).
In simplified form, as used in some computer networks: the sender counts the 1 bits in the payload
and stores the result in the trailer. The receiving device repeats the count and compares its result to
the value stored in the trailer. If the values match, the packet is good; if they do not match, the
receiving device sends a request to the originating device to resend the packet. (A true CRC is
computed by polynomial division over the packet bits rather than by counting 1s, but the
store-and-compare idea is the same.)
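As a rough sketch of the three-part packet structure above (all names, addresses and sizes here are hypothetical), the following Python builds packets from a message and applies the simplified 1s-count trailer check just described; a real CRC would use polynomial division instead:

```python
from dataclasses import dataclass

def ones_count(payload: bytes) -> int:
    """Simplified trailer check from the text: count the 1 bits in the payload."""
    return sum(bin(b).count("1") for b in payload)

@dataclass
class Packet:
    src: str          # originating address
    dst: str          # destination address
    seq: int          # packet number within the message
    total: int        # how many packets the message was broken into
    payload: bytes
    trailer: int = 0  # 1s-count stored at send time

    def seal(self) -> "Packet":
        self.trailer = ones_count(self.payload)
        return self

    def is_intact(self) -> bool:
        # Receiver recomputes the count and compares it with the trailer
        return ones_count(self.payload) == self.trailer

def packetize(message: bytes, size: int = 1000) -> list:
    """Break a message into packets of at most `size` bytes, as described above."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [Packet("10.0.0.1", "10.0.0.2", n, len(chunks), c).seal()
            for n, c in enumerate(chunks)]
```

If a payload byte is altered in transit, `is_intact` fails and the receiver would request a resend.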
Network Problems
Dropped packets
The routers might fail to deliver (drop) some packets if they arrive when their buffers are already full.
Some, none, or all of the packets might be dropped, depending on the state of the network, and it is
impossible to determine in advance what will happen. The receiving application must ask for this
information to be retransmitted, possibly causing severe delays in the overall transmission.
Delay
It might take a long time for a packet to reach its destination, because it gets held up in long queues, or takes a less direct route to avoid congestion. Alternatively, it might follow a fast, direct route. Thus delay is very unpredictable.
Jitter
Packets from source will reach the destination with different delays. This variation in delay is known as jitter and can seriously affect the quality of streaming audio and/or video.
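A minimal illustration of this point (delay values in milliseconds are hypothetical): jitter can be estimated as the average variation between successive one-way packet delays, so a constant delay, however large, gives zero jitter:

```python
def mean_jitter(delays_ms):
    """Average absolute difference between successive packet delays (ms)."""
    if len(delays_ms) < 2:
        return 0.0
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)
```

Streaming receivers typically absorb this variation with a playout buffer sized from such an estimate.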
Out-of-order delivery
When a collection of related packets is routed through the Internet, different packets may take
different routes, each resulting in a different delay. The result is that the packets arrive in a different
order to the one in which they were sent. This problem necessitates special additional protocols for
rearranging out-of-order packets into an isochronous state once they reach their destination. This is
especially important for video and VoIP streams, where quality is dramatically impacted by both
latency and lack of isochronicity.
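The rearranging step described above can be sketched with a small reorder buffer keyed on a per-packet sequence number (a hypothetical sketch, not any specific protocol):

```python
import heapq

class ReorderBuffer:
    """Hold packets that arrive out of order; release them in sequence order."""

    def __init__(self):
        self._heap = []   # min-heap ordered by sequence number
        self._next = 0    # next sequence number to deliver

    def receive(self, seq, data):
        """Accept one packet; return the list of packets now deliverable in order."""
        heapq.heappush(self._heap, (seq, data))
        out = []
        while self._heap and self._heap[0][0] == self._next:
            out.append(heapq.heappop(self._heap)[1])
            self._next += 1
        return out
```

Packets that arrive early are held back until the gap before them is filled, which is exactly the buffering delay that hurts real-time streams.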
Error
Sometimes packets are misdirected, or combined together, or corrupted, while en route. The receiver has to detect this and, just as if the packet was dropped, ask the sender to repeat itself.
Internet Problems
The Internet today does not make any promises
about the QoS an application will receive.
An application will receive whatever level of performance
(e.g. end-to-end packet delay and loss) that the network is able to provide at that moment.
Delay-sensitive multimedia applications cannot
request any special treatment.
All packets are treated equally at the routers, including
delay-sensitive audio and video packets.
Network congestion (or interfering traffic) can
severely limit the performance of an application
(especially audio-video streaming, multimedia and VoIP applications).
QoS Question
Hence, what new architectural components
can be added to the Internet architecture to
shield an application from such congestion
and thus make high-quality networked
multimedia applications a reality?
Guarantees for:
high bandwidth
low latency
Common QoS Approaches
Best Effort Services
(no guarantees)
The network does not provide any guarantees that
data is delivered or that a user is given a guaranteed
quality of service level or a certain priority.
Integrated Services
(resource reservation)
The network resources are assigned according to the
application QoS request and subject to the bandwidth
management policy
Differentiated Services
(prioritisation)
Network traffic is classified and network elements
give preferential treatment to classifications identified
as having more demanding requirements.
Best Effort
In a best-effort network all users obtain best-effort service; each service obtains a variable bit rate and delay.
By removing features such as recovery of lost or corrupted data and
pre-allocation of resources, the network operates more efficiently, and the network nodes are inexpensive.
An application sends data whenever it likes, and as much as it likes,
without requiring any permission.
Network elements try their best to deliver the packets to the
destination without any bounds on delay, latency, jitter, etc.
Network elements can give up on delivery without informing either the
sender or the receiver.
Conventional IP routers only provide best-effort service. The
simplicity of routers is a key factor why IP has been much more successful than more complex protocols such as X.25 and ATM.
Best-Effort
Post Office Analogy
The post office service delivers letters using a best effort delivery
approach.
The delivery of a certain letter is not scheduled in advance
no resources are pre allocated by the post office
The postman will make his "best effort" to try to deliver a
message but may be delayed if:
All of a sudden, too many letters arrive at the post office
The postal address is incomplete
The postman’s van breaks down
The sender is not informed if a letter has been delivered
successfully.
However, the sender can pay extra for a delivery confirmation
receipt,
This requires that the carrier get a signature from the recipient to
confirm delivery.
Scenario Example:
Consider two hosts H1, H2
sending packets via router R1
to R2, and subsequently to
hosts H3 and H4
Let us assume the LAN speeds
are significantly higher than 1.5
Mbps, and focus on the output
queue of router R1;
Packet Delay
• Packet delay and packet loss will occur if the
aggregate sending rate of H1 and H2 exceeds 1.5
Mbps.
QoS: Four Principles
To tackle the problems which may arise in such
scenarios we will look at 4 QoS Principles:
Packet Classification
Isolation: Scheduling and Policing
High Resource Utilisation
Call Admission
Another Problem
A 1 Mbps audio application (e.g. a
CD-quality audio call) shares the 1.5 Mbps link between R1 and R2 with an FTP application that is
transferring a file from H2 to H4
In the best-effort Internet, the audio
and FTP packets are mixed in the output queue of R1 and (typically) transmitted in first-in-first-out (FIFO) order
Burst of packets from FTP source
could potentially fill up the queue, causing IP audio packets to be
excessively delayed or lost to buffer overflow at R1
Packet Marking
A solution is to give priority to audio packets,
as FTP does not have timing constraints –
hence the notion of distinguishing the types
of packets via the Traffic Class field in IPv6 (or the
Type of Service field in IPv4).
Principle 1:
Packet marking allows a router to distinguish among
packets belonging to different classes of traffic.
Another Scenario
Imagine the FTP user has
purchased “platinum service”
(i.e. high priced) Internet
access from its ISP, while the
audio user has purchased a
cheap, low-budget Internet
service.
Should the cheap user’s audio
packets be given priority over
FTP packets in this case?
Packet Classification
A more reasonable solution is to distinguish packets on the basis of
the sender’s IP address.
More generally, we see that it is necessary for a router to classify
packets according to some criteria.
A router must be able to distinguish between packets according to a
“policy” decision
One way to achieve this is through marking the packets; however,
marking alone does not mandate that a certain QoS will be given.
Principle 1 (new):
Packet classification allows a router to distinguish among
packets belonging to different classes of traffic.
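A toy illustration of this principle (the addresses, field values and class names are all hypothetical): a router-side classifier that assigns a traffic class from header fields such as the sender's IP address or a mark carried in the packet, rather than trusting a sender-set mark alone:

```python
# Hypothetical policy: hosts that bought "platinum service" from the ISP
PREMIUM_SENDERS = {"192.0.2.10"}

def classify(header: dict) -> str:
    """Assign a traffic class from header fields, per a policy decision.

    The sender's own mark (here a hypothetical traffic_class value) is only
    honoured after policy rules such as the source address are checked.
    """
    if header.get("src_ip") in PREMIUM_SENDERS:
        return "premium"
    if header.get("traffic_class") == 0x2E:  # hypothetical mark for audio
        return "audio"
    return "best-effort"
```

In a real router this decision would feed the scheduler, so that packets in different classes can be queued and served differently.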
Another Scenario
Suppose the router knows it should give priority to
packets from the 1 Mbps audio application
Since the outgoing link is 1.5 Mbps, even though FTP packets receive lower
priority they will still, on average, receive 0.5 Mbps of transmission service
Suppose the audio application starts transmitting at
greater than 1.5 Mbps (the link capacity)
This may lead to starvation of FTP packets
Similarly for multiple audio applications sharing a link
Flow Isolation
There is a need for a degree of isolation
among flows, in order to protect one flow
from another misbehaving flow
Principle 2:
It is desirable to provide a degree of isolation among
traffic flows, so that one flow is not adversely affected by
another misbehaving flow
Policing
Policing Mechanism: a monitoring
(policing) mechanism is put in place to ensure that traffic flows meet some pre-defined criteria
If a policed application misbehaves, the
policing mechanism will take some action (e.g., drop or delay packets that are in
violation of the criteria) so that the traffic actually entering the network conforms to the criteria
The packet classification and marking
mechanism (Principle 1) and the policing mechanism (Principle 2) are co-located at the “edge” of the network, either in the end system or at an edge router.
Bandwidth Enforcement
Traffic isolation can also be
achieved by the link level protocol
providing fixed bandwidth to each
application flow
Audio - 1 Mbps
FTP - 0.5 Mbps
Here, audio and FTP flows see a
logical link with capacity 1.0 and
0.5 Mbps, respectively
Enforcement Issues
When bandwidth is enforced, a given flow cannot use bandwidth
not being used by another application (it can only use a maximum of its own limit)
For example, if the audio flow goes silent (e.g., if the speaker
pauses and generates no audio packets), the FTP flow would still not be able to transmit more than 0.5 Mbps over the R1-to-R2 link – this is clearly wasteful
Principle 3:
While providing isolation among flows, it is desirable to
use resources (e.g., link bandwidth and buffers) as
efficiently as possible.
Another Example
Consider two competing applications
transmitting at 1 Mbps, with a link capacity (R1-to-R2) of 1.5 Mbps
In this case the combined rate for the two
applications is 2 Mbps (higher than link capacity)
No marking, isolation or classification will
help solve this problem
Each app gets 0.75 Mbps of link (half of
link capacity)
Each app gets 25% packet loss
This quality is unacceptable
So it is better not to transmit any packets at all than to
transmit with unacceptable quality.
QoS Guarantees
The network should provide the minimum
quality of service to enable an application to
run, or block the application – example
call-blocking on a telephone network (where
end-to-end quality of service is necessary)
Hence, in the previous case either the
minimum QoS is guaranteed, or the
application is stopped, as it would not be
usable.
QoS Requirements
Implicit in the need to provide a guaranteed QoS to a flow is
the need for the flow to declare its QoS requirements
This process of having a flow declare its QoS requirement, and
then having the network either accept the flow (at the required QoS) or block the flow (because the resources needed to meet the declared QoS requirements cannot be provided) is referred to as the call admission process
Principle 4:
A call admission process is needed in which flows declare
their QoS requirements and are then either admitted to
the network (at the required QoS) or blocked from the
network (if the required QoS cannot be provided by the network).
Scheduling and Policing
Mechanisms
Packets from various sources are multiplexed together and
queue for transmission at the output buffer of a link
A Link Scheduling Policy determines how these packets are then
selected for transmission
This policy plays an important role in providing QoS guarantees
First-In-First-Out: packets arriving at the link output queue are
buffered if the link is busy transmitting. If there is not sufficient buffering space, a Packet Discarding Policy is invoked.
In FIFO policy, packet departure from buffer is based on time of
arrival – first packet to arrive is the first to leave
Packet Discarding Policy: determines whether packets will be
dropped (lost) when queue is full – can be based on removing already buffered packets
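A minimal sketch of such a FIFO link queue, using a tail-drop discarding policy (dropping the arriving packet when the buffer is full; as noted above, a discarding policy could instead remove an already-buffered packet):

```python
from collections import deque

class FifoQueue:
    """FIFO link queue with a tail-drop packet discarding policy."""

    def __init__(self, capacity: int):
        self.capacity = capacity   # buffering space, in packets
        self.buffer = deque()
        self.dropped = 0

    def enqueue(self, packet) -> bool:
        if len(self.buffer) >= self.capacity:
            self.dropped += 1      # queue full: the arriving packet is lost
            return False
        self.buffer.append(packet)
        return True

    def dequeue(self):
        """First packet to arrive is the first to leave."""
        return self.buffer.popleft() if self.buffer else None
```

With this policy a burst that overfills the buffer simply loses its tail, which is exactly how best-effort routers drop packets under congestion.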
Scheduling Priority
and Round Robin
Priority: packets arriving at the output link are classified into one of two or
more priority classes; the priority value is based on information carried in the packet header (such as the Traffic Class field in IPv6)
A different queue is maintained for each priority class – with the
highest priority queue given preference when transmitting packets.
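A compact sketch of such priority scheduling: one logical queue per class, with the highest-priority non-empty queue always served first and FIFO order kept within a class (the numbering convention here, lower number means higher priority, is an assumption of this sketch):

```python
import heapq
import itertools

class PriorityScheduler:
    """Serve the highest-priority class first; FIFO within each class."""

    def __init__(self):
        self._heap = []
        self._count = itertools.count()  # arrival order, breaks ties in a class

    def enqueue(self, priority: int, packet):
        # Lower priority number = higher priority class
        heapq.heappush(self._heap, (priority, next(self._count), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None
```

With audio marked as class 0 and FTP as class 1, queued audio packets always depart before any waiting FTP packets.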
Round Robin: assumes the existence of multiple class queues; a round robin
scheduler alternates service between the classes. A “work-conserving” scheduler based on the round-robin strategy will keep the link busy, always checking the low-priority (class) queue for packets when the high-priority (class) queue is empty
Weighted Fair Queuing (WFQ): similar to round robin, except that
each class may receive a differentiated amount of service in any interval of time. Each class i is assigned a weight w_i
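The WFQ guarantee can be sketched as a small calculation (class names and weights here are hypothetical): each backlogged class i is guaranteed the fraction w_i / Σ_j w_j of the link rate R, where the sum runs over the classes that currently have packets queued:

```python
def wfq_share(weights: dict, backlogged: set, rate_mbps: float) -> dict:
    """Guaranteed service rate per backlogged class under WFQ.

    weights    : class name -> weight w_i
    backlogged : classes that currently have packets queued
    rate_mbps  : link transmission rate R
    """
    total = sum(w for c, w in weights.items() if c in backlogged)
    return {c: rate_mbps * weights[c] / total
            for c in weights if c in backlogged}
```

Note that when a class goes idle, its share is redistributed among the remaining backlogged classes, so the link stays fully used (the work-conserving property).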
Weighted Fair Queuing
In WFQ, during any interval of time, if there are class i
packets to send, class i will be guaranteed to receive a
fraction of service equal to:
w_i / Σ_j w_j

where the sum in the denominator is taken over all
classes that also have packets queued for transmission
Alternatively, we can say that with WFQ, on a link
with transmission rate R, class i will always achieve a
throughput of at least

R × w_i / Σ_j w_j

Policing
Policing is used to regulate the rate at which packets can be
inserted into a network
An important part of a QoS architecture
Three important policing criteria:
Average rate: limit the long-term average rate (packets per time
period) at which a flow’s packets can be sent into the network. The time interval over which the average is calculated must be determined; for instance, an average rate of 100 packets/sec is more constrained than 6,000 packets/minute
Peak rate: constrains the maximum number of packets that can
be sent over a relatively short period of time (compared to the average rate)
Burst size: limits the maximum number of packets that can be sent into the network over an extremely short interval of time
Leaky Bucket
A Leaky Bucket algorithm (in this form often called a
token bucket) can be used to characterise these policing limits
It consists of a “bucket” that can hold up to b
tokens – which determines the burst size
New tokens added to bucket at r tokens/sec
if bucket is full, a newly generated token is
ignored
The maximum number of packets that can enter the
network within any time interval t is rt + b
Can use multiple Leaky buckets in series
Leaky Bucket Analogy
The task: to maintain a high level of channel utilization in the router, while restricting the dropping probability to 1% or below
The task: to maintain a high water level in the bucket, while restricting the overflow rate to 1% or below
Dropped packets (terminated in the middle of a packet flow; very annoying for the service): water drops overflowing
Hand-over packets (switching from a neighbouring wireless port): water from an extra pipe
Blocked packets (terminated before the service starts): water drops prevented from entering the bucket by the tap
Packet sending completed: water drops leaking from the bottom
New packet to send: water drops from the tap
Bandwidth capacity: bucket size
Wireless network: leaky bucket
Lecture Review
In this lecture we have:
Looked at the primary principles behind QoS
Looked at the existing best effort approach
A brief look at scheduling and policing
techniques