EFFECTIVELY MANAGING WAN, INTERNET ACCESS LINK AND APPLICATION TRAFFIC

Security Empowers Business

© Blue Coat Systems, Inc.

In the battle for bandwidth on congested WAN and Internet access links, demanding traffic – video, mobile devices, social media – can flood capacity, undermine the performance of critical applications, and cause a poor user experience. Abundant data, devices, and protocols that swell to consume all available bandwidth, network bottlenecks, and new, popular applications – they all seem to conspire against critical application performance.

Identifying performance problems is a good first step, but it's not enough. PacketShaper solves performance problems by controlling bandwidth allocation with flexible policies to protect critical applications, limit greedy and recreational traffic, and block malicious activities.

Bandwidth minimums and/or maximums can be applied to each application, session, user, and/or location. Each type of traffic maps to a specific bandwidth allocation policy, ensuring that each receives an appropriate slice of bandwidth.

This paper describes common application performance problems, proposes a few alternative solutions, and then delves into detail about Blue Coat's control features.

The Performance Problem

Changes in devices, content, applications, and network environments have wreaked havoc on performance. Increasing traffic, diverse performance requirements, and the capacity mismatch between local and wide-area networks have driven the decline. Traffic growth stems from trends in applications, networks, and user behaviors:

• More application traffic: An explosion of application size, user demand, and richness of media

• More mobile devices: As businesses embrace new technology and user trends, supporting Bring Your Own Device (BYOD) on the enterprise network has become standard practice

• Recreational traffic: Abundant traffic resulting from trends in web and application usage: video streaming (e.g., YouTube, Netflix), social media (e.g., Facebook, Twitter), web browsing, interactive gaming, and more

• Web-based applications: Applications with a web-based user interface can consume 5 to 10 times more bandwidth than thick clients

• Cloud and SaaS applications: Enterprise applications that run over the WAN or Internet instead of being confined to a single machine

• Datacenter consolidation: A trend to combine datacenters and reduce the number of application servers, forcing previously local traffic (high bandwidth, low latency, and low cost) to traverse the WAN or Internet (low bandwidth, high latency, and expensive)

• Voice/video/data network convergence: One network that supports voice, video, and data, with their variety of bandwidth demands and performance requirements

• SNA/IP convergence: An IP network that supports SNA applications using TN3270 or TN5250; without SNA networks' controls, legacy applications usually suffer a drop in performance

• Disaster readiness: Redundant datacenters mirroring large amounts of data

• Security: Viruses, phishing, Advanced Persistent Threats (APTs), and denial-of-service (DoS) attacks through encrypted and unencrypted traffic

• New habits: Users doing more types of tasks online – shopping, research, news, collaboration, finances, socializing, medical diagnostics, and more


The Nature of Network Traffic

The de facto network standard is the TCP/IP protocol suite, and over 80 percent of TCP/IP traffic is TCP. Although TCP offers many advantages and strengths, management and enforcement of QoS (quality of service) are not among them.

Many of TCP's own control and reliability features contribute to performance problems:

• TCP retransmits when the network cloud drops packets or delays acknowledgments
When packets drop or acknowledgments are delayed due to congested conditions and overflowing router queues, retransmissions contribute to more traffic and exacerbate the original problem.

• TCP increases bandwidth demands exponentially
With TCP's slow-start algorithm, senders can iteratively double the transmission size until packets drop and problems occur. The algorithm introduces an exponential growth rate and can rapidly dominate capacity. Without regard for traffic urgency, concurrent users, or competing applications, TCP simply expands each flow's usage until it causes problems. This turns each sizeable traffic flow into a bandwidth-hungry, potentially destructive consumer that can undermine equitable or appropriate allocation of network resources.

• TCP imposes network overload
TCP expands its allocation until packets are dropped or responses are delayed. It floods routers by design! As large amounts of data are forwarded to routers, more congestion forms, bigger queues form, more delay is introduced, more packets are discarded, more timeouts occur, more retransmissions are sent, more congestion forms… and the cyclical spiral continues.
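The slow-start doubling described above can be sketched numerically. The toy model below deliberately omits ssthresh, congestion avoidance, and loss recovery, and the 1460-byte MSS and T1 figures are illustrative assumptions, not measurements:

```python
def slow_start_rounds(link_capacity_bytes, mss=1460, initial_cwnd=1):
    """Count round trips until slow start's doubling congestion window
    (in segments) exceeds a link's per-RTT capacity.

    A toy model: real TCP also has ssthresh, congestion avoidance,
    and loss recovery, none of which are modeled here."""
    cwnd = initial_cwnd
    rounds = 0
    while cwnd * mss <= link_capacity_bytes:
        cwnd *= 2          # slow start doubles cwnd every RTT
        rounds += 1
    return rounds, cwnd

# A 1.5 Mbps T1 with a 100 ms RTT carries roughly 18,750 bytes per RTT;
# a single sender's window outgrows that in just a handful of round trips.
rounds, cwnd = slow_start_rounds(18750)
```

The point of the sketch is the growth rate: capacity that looks generous is overtaken after only a few doublings, which is why every sizeable flow becomes a contender for the whole link.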

When demand rises and large packet bursts set this domino effect in motion, all traffic experiences delays – large or small, interactive or batch, urgent or frivolous. But critical or urgent applications (SAP or web conferencing, for example) suffer the most. User experience degrades. Productivity deteriorates. Business declines.

Solution Alternatives

When faced with bandwidth constraints and unpredictable application performance, a number of solutions come to mind. This section addresses the following potential solutions, focusing on their advantages and limitations:

• Management decrees
• Additional bandwidth and compression
• Packet marking and MPLS
• Queuing-only schemes on routers or other networking equipment
• Blue Coat's application traffic management

Management Decrees

A university says, “Don’t use P2P applications.” Or a corporation says, “Don’t watch YouTube videos in your office.” Managerial edicts are only as effective as an organization’s ability to enforce them. In addition, this approach only impacts the network load that is due to unsanctioned traffic. It does nothing to manage concurrent file transfers, cloud applications, large email attachments, Citrix-based applications, print traffic, and all the other traffic that is both necessary and important. Real-world traffic has an incredible variety of requirements that complicates the task of enforcing appropriate performance for all.

Additional Bandwidth and Compression

When performance problems occur, a common response to network congestion is buying more bandwidth. But an upgrade is not an effective solution. Too often, network managers spend large portions of their limited budgets on bandwidth upgrades in an attempt to resolve performance problems, only to find that the additional bandwidth is quickly consumed by recreational traffic while the performance problems of their critical business applications persist. Quite often, the critical, poorly performing applications aren't the ones that gain access to the extra capacity. Usually, it's less urgent, bandwidth-intensive applications that monopolize the added resources.

In this illustration, more bandwidth is added, but the beneficiaries are the top bandwidth consumers (web browsing, email, music downloads) instead of the most critical applications (Oracle, Citrix, TN3270). If usage patterns persist after a bandwidth upgrade (as they usually do), critical applications will continue to lose out to more aggressive, less important traffic.

Bandwidth upgrades impose setup costs and increased ongoing operating costs. In some places, especially remote locations, larger pipes are not available or are extremely expensive. Even if bandwidth costs drop, they remain a recurring monthly expense. According to the Gartner Group, "The WAN represents the single largest recurring cost, other than people, in IS organizations."

The same challenge exists when organizations turn to compression-only solutions that lack application-awareness and control features. Without proper identification and management, compression's bandwidth gains will most likely benefit the wrong applications.

Queuing-Only Schemes on Routers or other Networking Equipment

Routers provide queuing technology that buffers waiting packets on a congested network. A variety of queuing schemes, including weighted fair queuing, priority output queuing, and custom queuing, attempt to prioritize and distribute bandwidth to individual data flows so that low-volume applications don’t get overtaken by large transfers.

Router-based, queuing-only solutions have improved. For example, they can now enforce per traffic type aggregate bandwidth rates for any traffic type they can differentiate. But a variety of router and queuing limitations remain:

• Routers manage bandwidth passively, discarding packets and providing no direct feedback to end systems. Routers use queuing (buffering and waiting) or packet tossing to control traffic sources and their rates.

• Queues, by definition, oblige traffic to wait in lines and add delay to transaction time. Dropping packets is even worse for TCP applications, since it forces the application to wait for a timeout and then retransmit.

• Queues do not proactively control the rate at which traffic enters the wide-area network at the other edge of a connection.

• Queuing-based solutions are not bi-directional and do not control the rate at which traffic travels from a WAN to a LAN, where there is no queue.

• Routers can't enforce per-flow minimum or maximum bandwidth rates.

• Routers don't allow traffic to expand beyond bandwidth limits when congestion and competing traffic are not issues.

• Routers don't enable distinct strategies for high-speed and low-speed connections.

• Routers don't allow a maximum number of flows to be specified for a given traffic type or a given sender.

• Queuing addresses a problem only after congestion has occurred. It's an after-the-fact approach to a real-time problem.

• Queuing schemes can be difficult to configure.

• Routers can't assess the performance their queuing delivers.

• Traffic classification is too coarse and overly dependent on port matching and IP addresses. Routers can't automatically detect and identify many applications as they pass. They can't identify non-IP traffic, much VoIP traffic, peer-to-peer traffic, games, HTTP on non-standard ports, non-HTTP traffic on port 80, and other types of traffic. Their inability to distinguish traffic severely limits their ability to control it appropriately.

Queuing is a good tactic, and one that should be incorporated into any reasonable performance solution. But standing alone, it is not an effective solution. Although routers don't identify large numbers of traffic types or enforce a variety of flexible allocation strategies, a strong case could be made that they shouldn't. The first and primary function of a router is to route. Similarly, although a router has some traffic-blocking features, it doesn't function as a complete firewall. And it shouldn't. It needs to focus its processing power on prompt, efficient routing responsibilities.

Packet Marking

Packet marking is another popular method that ensures speedy treatment across the WAN and across heterogeneous network devices. A variety of standards have evolved over time. First, CoS/ToS (class and type of service bits) were incorporated into IP. Then, Diffserv became the newer marking protocol for uniform quality of service (QoS), essentially the same as ToS bits, just more of them. MPLS is another standard that integrates the ability to specify a network path with class of service for consistent QoS.

The advantages of packet marking are clear. It is proactive and does not wait until a problem occurs before taking action. It is an industry-standard system that equipment from different vendors all incorporate, ensuring consistent treatment. But, as with queuing, it doesn't stand alone as an effective solution, as it:

• Needs assistance to differentiate types of traffic and applications so that the proper distinguishing markers can be applied
• Lacks control over the rate at which packets enter the WAN
• Cannot apply explicit bandwidth minimums and maximums
• Doesn't control the number of allowed flows for a given type of traffic or a given sender
• Needs another solution to detect low- and high-speed connections, although it can then implement appropriate treatment for each

With assistance, packet marking can contribute to excellent performance control.

Blue Coat’s Application Traffic Management

PacketShaper offers a broad spectrum of tools and technologies to control performance. They include explicit bits-per-second minimum and maximum bandwidth rates, relative priorities, the ability to precisely target the right traffic, both inbound and outbound control, and features to address the deficits listed in the sections on queuing and packet marking, making them into complete performance solutions. Together, these and other capabilities form the PacketShaper’s application traffic management system.

With PacketShaper, you can:

• Protect the performance of important applications, such as SAP and Oracle

• Prioritize and protect important cloud/SaaS applications, such as Office 365 and SalesForce.com

• Contain unsanctioned and recreational traffic, such as YouTube and Facebook

• Provision steady streams for real-time applications, such as voice or video traffic, to ensure an optimized user experience

• Stop undesirable applications or users from monopolizing the link

• Reserve or cap bandwidth using an explicit rate, percentage of capacity, or priority

• Detect attacks and limit their impact

• Balance applications, such as Microsoft® Exchange, that are both bandwidth-hungry and critically important, to deliver prompt performance with minimal impact

• Allow immediate passage for small, delay-sensitive traffic such as Telnet

• Provision bandwidth equitably between multiple locations, groups, or users

• Monitor conditions of interest, then, when thresholds are crossed, automatically take action to correct, document, and/or notify someone of the problem


Controlled Passage

PacketShaper includes features to apply bandwidth maximums and/or minimums to one or a group of applications, sessions, users, locations, streams, and other traffic subsets. The PacketShaper divides your network traffic into classes. By default, it categorizes passing traffic into a separate class for each application, service, or protocol, but you can specify many other criteria to separate traffic by whatever scheme you deem appropriate.

Traffic classes are extremely important, because the PacketShaper applies control on a class-by-class basis. The PacketShaper can also apply its control features to other subsets of traffic besides classes, such as each user's traffic or each session's traffic. The traffic class is your most powerful tool to target your control strategies at the precise traffic you want without influencing the traffic you don't. For example, you can control the subset of traffic that matches: Oracle running on Citrix MetaFrame with an MPLS path label of 5, destined for the London office.
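Multi-criteria class matching of this kind can be sketched as a first-match rule table. This is a toy illustration only; the class names, flow attributes, and matching semantics below are hypothetical and not PacketShaper's actual classification engine:

```python
def classify(flow, classes):
    """Return the name of the first class whose criteria all match the
    flow's attributes; unmatched flows fall into a default class."""
    for name, criteria in classes:
        if all(flow.get(key) == value for key, value in criteria.items()):
            return name
    return "default"

# Illustrative rule table mirroring the Oracle-over-Citrix example above.
classes = [
    ("oracle-citrix-london", {"app": "oracle", "carrier": "citrix",
                              "mpls_label": 5, "dest": "london"}),
    ("streaming", {"app": "youtube"}),
]

flow = {"app": "oracle", "carrier": "citrix", "mpls_label": 5, "dest": "london"}
matched = classify(flow, classes)   # -> "oracle-citrix-london"
```

The design point: because every control feature hangs off a class, the precision of the match criteria determines the precision of the control.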

Per-Class Limits and/or Reservations

Your network probably supports several applications that might be important, but are not urgently time sensitive. As explained earlier in “The Nature of Network Traffic,” when these applications exhibit bursty, bandwidth-greedy behavior, trouble starts. Bandwidth-starved, critical applications suffer sluggish performance and become the losers in the fight for bandwidth in bottlenecks at WAN or Internet links.

[Figure: PacketShaper classifies and manages traffic by type, with traffic-type updates delivered by the WebPulse classification service. Example uses shown on the WAN: protect education cloud, contain recreational traffic, fairly allocate guest users, assure voice and video classroom delivery.]

A PacketShaper partition creates a virtual separate pipe for a traffic class. A partition is appropriate when you want to limit a greedy application or when you want to protect a vulnerable but critical application. It contains or protects (or both) all traffic in one class as a whole.

[Figure: Graphs comparing usage and efficiency before and after using PacketShaper features.]


PARTITION USAGE EXAMPLES

Problem: SAP performance at a T1 branch office is terrible.
Solution: Partition size = 250 Kbps; burstable; limit = none.
Behavior: A partition on SAP traffic reserves about a sixth of the link for SAP and allows SAP to use the whole link when it is available. If SAP is active, it gets bandwidth, period. No matter how much other traffic is also active, SAP gets all the bandwidth it needs – up to 250 Kbps. If SAP needs more than 250 Kbps, it gets a pro-rated share of other available bandwidth. If SAP needs less than 250 Kbps, it loans the unused portion to other applications.

Problem: BYOD usage generates huge amounts of traffic on WAN and Internet connections. In addition to content consumption, OS and application downloads and updates (e.g., a 2 GB iOS 8 upgrade) create huge traffic spikes that can easily overwhelm enterprise networks.
Solution: Partition size = 50 Kbps; priority = 0; burstable; limit = 5%.
Behavior: A partition on mobile OS (iOS, Android) traffic reserves an acceptable amount of bandwidth and allows bursting of up to 5% of capacity when extra bandwidth is available. When more important traffic needs bandwidth, mobile OS updating is limited to 50 Kbps. When no other higher-priority traffic is present, iOS or Android updates can burst, but take no more than 5 percent of total network capacity.

Problem: Recreational video and audio streaming can sometimes swamp a company's network. Although wanting to avoid an outright ban, management doesn't want employees depending on the company network for abundant, speedy streams.
Solution: Partition size = 0; burstable; limit = 5%.
Behavior: A partition on streaming media traffic reserves no bandwidth but allows streaming to take up to 5 percent of capacity. When more important traffic needs bandwidth, recreational streaming media gets none. Even when there are no other takers, streaming can access only 5 percent of capacity.

Problem: Microsoft Exchange is vitally important to an organization and needs definite bandwidth to work effectively. However, the organization's other applications are suffering, as Exchange can tend toward bandwidth-greedy habits.
Solution: Partition size = 25%; burstable; limit = 65%.
Behavior: A partition on Exchange traffic both contains and protects. Exchange always performs adequately because it always has access to 25 percent of capacity, no matter what other traffic is present. If Exchange needs more, it gets a pro-rated share of remaining bandwidth – up to 65 percent of capacity. Exchange never takes over the network. If Exchange needs less than 25 percent, it loans the unused portion to other applications.

You specify the size of a partition’s private link, designate whether it can expand or burst, and optionally cap its growth. You can define partitions using explicit bandwidth rates or percentages of capacity. Partitions do not waste bandwidth, as they always share their unused excess bandwidth with other traffic.

As traffic flows, the PacketShaper allocates bandwidth for partitions’ minimum sizes and other bandwidth guarantees first. After that, remaining bandwidth is divided up. If allowed to burst, a partition gets a pro-rated share of this remaining bandwidth, subject to the partition limit and the traffic’s priority (indicated in policies, coming later).
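The minimums-first, then pro-rated-burst order of operations can be sketched as a two-pass allocator. This is a simplification under stated assumptions: a real shaper works packet by packet and interacts with priorities, whereas the function below just splits a nominal link rate once:

```python
def allocate(link_kbps, partitions):
    """Two-pass allocation sketch: guarantee each partition its minimum,
    then divide leftover capacity pro rata by unmet demand among
    burstable partitions, honoring per-partition limits."""
    # Pass 1: satisfy guaranteed minimums (never more than actual demand).
    alloc = {name: min(p["min"], p["demand"]) for name, p in partitions.items()}
    remaining = link_kbps - sum(alloc.values())
    # Pass 2: burstable partitions compete for the leftover, pro rata.
    extra = {name: min(p.get("limit", link_kbps), p["demand"]) - alloc[name]
             for name, p in partitions.items() if p.get("burstable")}
    extra = {name: d for name, d in extra.items() if d > 0}
    total = sum(extra.values())
    if total and remaining > 0:
        share = min(1.0, remaining / total)
        for name, d in extra.items():
            alloc[name] += d * share
    return alloc

# Mirrors the SAP example: 250 Kbps reserved on a T1 (~1544 Kbps),
# burstable with no limit, competing with bulk traffic.
demands = {"sap": {"min": 250, "demand": 400, "burstable": True},
           "bulk": {"min": 0, "demand": 2000, "burstable": True}}
shares = allocate(1544, demands)
```

Note how the guarantee and the loan-out behavior fall out of the same arithmetic: SAP always gets its 250 Kbps, and anything it doesn't claim flows to other traffic.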

Variations on the Partition Theme

Two variations on the partition theme are of particular interest: hierarchical partitions and dynamic partitions. Hierarchical partitions are embedded in larger, parent partitions. They enable you to carve a large bandwidth allotment into managed subsets. For example, you could reserve 40 percent of your link capacity for applications running over Citrix, and then reserve portions of that 40 percent for each application running over Citrix – perhaps half for Oracle and a quarter each for Great Plains and SalesLogix.
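The nesting arithmetic is just fractions of fractions. The sketch below applies the Citrix example's percentages to a hypothetical 10 Mbps link; it is an illustration of the concept, not PacketShaper configuration syntax:

```python
def carve(capacity_kbps, tree):
    """Recursively size hierarchical partitions.

    tree maps a partition name to (fraction_of_parent, child_tree);
    each child's fraction applies to its parent's size, not the link."""
    sizes = {}
    for name, (fraction, children) in tree.items():
        size = capacity_kbps * fraction
        sizes[name] = size
        if children:
            sizes.update(carve(size, children))
    return sizes

# 40% of a 10 Mbps link for Citrix; within that, half for Oracle and a
# quarter each for Great Plains and SalesLogix.
tree = {"citrix": (0.40, {"oracle": (0.50, {}),
                          "great-plains": (0.25, {}),
                          "saleslogix": (0.25, {})})}
sizes = carve(10000, tree)   # citrix=4000, oracle=2000, each sibling 1000
```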


Dynamic partitions are per-user partitions that manage each user’s bandwidth allocation across one or more applications. In addition, dynamic partitions can be created for a group of users within an IP address range.

Dynamic partitions are useful for situations when you care more about equitable bandwidth allocation than about how it’s put to use (such as in a guest Wi-Fi network). Dynamic partitions are created as users initiate traffic of a given class. When the maximum number of dynamic partitions is reached, an inactive slot (if there is one) is released for each new active user. Otherwise, you choose whether latecomers are refused or squeezed into an overflow area. Dynamic partitions greatly simplify administrative overhead and allow over-subscription.

For example, a university can give each dormitory student a minimum of 20 Kbps and a maximum of 60 Kbps to use in any way the student wishes. Or a business can protect and/or cap bandwidth for distinct departments (accounting, human resources, marketing, and so on). As always, the PacketShaper lends any unused bandwidth to others in need.
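The slot-recycling behavior described above (reclaim an inactive slot for a newcomer, otherwise overflow) can be sketched as a small state machine. The class and method names are hypothetical, and real dynamic partitions also carry per-user rate settings omitted here:

```python
class DynamicPartitions:
    """Toy model of per-user dynamic partition slots.

    Tracks which users hold a slot and whether each slot is active;
    when the table is full, an inactive slot is reclaimed for a new
    user, and failing that the newcomer goes to an overflow pool."""

    def __init__(self, max_slots):
        self.max_slots = max_slots
        self.slots = {}                      # user -> active flag

    def admit(self, user):
        if user in self.slots or len(self.slots) < self.max_slots:
            self.slots[user] = True
            return "slot"
        # Reclaim the first inactive slot, if any.
        idle = next((u for u, active in self.slots.items() if not active), None)
        if idle is not None:
            del self.slots[idle]
            self.slots[user] = True
            return "slot"
        return "overflow"                    # or refuse, per configuration

    def mark_idle(self, user):
        if user in self.slots:
            self.slots[user] = False

# Two slots, three users: the third user overflows until someone idles.
dp = DynamicPartitions(max_slots=2)
```

This is also where the over-subscription benefit comes from: the slot table can be far smaller than the user population, because only concurrently active users occupy slots.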

Per-Session Rate Policies

Many applications need to be managed on a flow-by-flow basis rather than as a combined whole. Per-session control enables many benefits. Control policies:

• Time connections' exchanges to minimize time-outs and retransmissions and maximize throughput
• Prevent a single session or user from monopolizing bandwidth
• Allocate precisely the rate that streaming traffic needs to avoid jitter and ensure good reception

Rate policies can deliver a minimum rate for each individual session of a traffic class, allow that session prioritized access to excess bandwidth, and set a limit on the total bandwidth the session can use. A policy can keep greedy traffic in line or can protect latency-sensitive sessions by providing the minimum bandwidth or priorities they need. As with partitions, any unused bandwidth is automatically lent to other applications.

For example, VoIP (Voice over IP) can be a convenient and cost-saving option, but only if it consistently delivers good service and user experience. When delay-sensitive voice traffic traverses congested WAN links on a shared network, it can encounter delay, jitter, or packet loss, resulting in poor voice quality. Each flow requires a guaranteed minimum rate or the service is unusable. After all, a voice stream that randomly speeds up and slows down as packets arrive in clumps is not likely to attain wide commercial acceptance. Voice traffic needs a per-session guarantee to ensure good reception.

All types of streaming media (such as distance learning, NetMeeting, Flash, QuickTime, StreamWorks, SHOUTcast, Windows Media, and WebEx) can benefit from rate policies with per-session minimums to secure good performance. Many thin-client or server-based applications also benefit from per-session minimums to ensure smooth performance. Print traffic, emails with large attachments, and file transfers are all examples of bandwidth-greedy traffic that would also benefit from rate policies – in their case with no guaranteed minimum, a lower priority than that of critical traffic, and, optionally, a bandwidth limit.

To see how a per-session bandwidth limit might be useful, consider an organization with abundant file transfers. Although necessary and important, the file transfers aren’t urgent and do tend to overtake all capacity.

Now suppose someone who is equipped with a T3 initiates a file transfer. Assume a partition is in place and keeps the aggregate total of all transfer traffic in line. The one high-capacity user could dominate the entire FTP partition, leaving other potential FTP users without resources. Because a partition applies only to the aggregate total of a traffic class, individual users would still be in a free-for-all situation. A rate policy that caps each FTP session at 100 Kbps, or any appropriate amount, would keep downloads equitable.
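A per-session cap like "100 Kbps per FTP flow" is commonly modeled as a token bucket per session: tokens accrue at the capped rate and each packet spends its size in tokens. This is a generic illustration of the concept, not PacketShaper's internal mechanism:

```python
class TokenBucket:
    """Minimal token-bucket rate limiter for one session.

    rate_bps: long-term cap; burst_bits: how much the session may send
    at once after idling (the bucket's capacity)."""

    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps
        self.capacity = burst_bits
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, now, packet_bits):
        # Refill tokens for the elapsed interval, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True          # packet conforms to the 100 Kbps cap
        return False             # non-conforming: queue or delay it

# One FTP session capped at ~100 Kbps with a 12 kbit burst allowance.
bucket = TokenBucket(rate_bps=100_000, burst_bits=12_000)
```

With one bucket per session, the T3 user's transfer is held to the same cap as everyone else's, restoring equity inside the aggregate FTP partition.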

Admission Control

What happens when so many users swamp a service that it can’t accommodate the number and maintain good performance? Without a PacketShaper, performance would degrade for everyone. What options are there? You could:

• Deny access to the service once existing users consume all available resources
• Keep latecomers waiting for the next available slot, with just enough bandwidth to string them along
• For web services, redirect latecomers to an alternate web page

Another handy feature of rate policies – admission control – offers precisely these three options for services that need a guaranteed rate for good performance. You can decide how to handle additional sessions during bandwidth shortages: deny access, squeeze in another user, or, for web requests, redirect the request.
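The three admission-control outcomes reduce to a small decision function. The mode names and session attributes below are illustrative, not PacketShaper configuration keywords:

```python
def admit(session, free_kbps, guaranteed_kbps, mode):
    """Decide the fate of a new session needing a guaranteed rate.

    Returns (decision, granted_kbps). If the guarantee fits, admit;
    otherwise apply the configured shortage policy: refuse, squeeze
    (string the latecomer along with what's left), or, for web
    sessions, redirect to an alternate page."""
    if free_kbps >= guaranteed_kbps:
        return ("admit", guaranteed_kbps)
    if mode == "squeeze":
        return ("squeeze", free_kbps)
    if mode == "redirect" and session.get("web"):
        return ("redirect", 0)
    return ("refuse", 0)

# A web session arriving during a shortage, under the redirect policy:
decision = admit({"web": True}, free_kbps=0, guaranteed_kbps=80,
                 mode="redirect")
```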

Per-Session Priority Policies

Priority policies allocate bandwidth based on a priority from 0 to 7. Small, non-bursty, latency-sensitive applications such as Telnet are good candidates for priority policies with a high priority. In contrast, you might give social media such as YouTube or Facebook a priority of 0 on a business network, so that people can access them only when the network is not busy.

The following table of priorities offers guidelines only. Of course, different applications are of varying urgencies in different environments, so tailor these suggestions to match your own requirements.

[Figure: Before-and-after effects on recreational traffic's bandwidth usage after applying PacketShaper's rate policies and partitions to select applications.]


PRIORITY  DESCRIPTION

7, 6  Mission-critical, urgent, important, time-sensitive, interactive, transaction-based. Examples might include SAP, Oracle, and a sales website.

5, 4  Important, needed, less time-sensitive. Examples might include collaboration and messaging systems, such as Microsoft Exchange.

3  Standard service, default; neither notably important nor unimportant. Examples might include web browsing.

2  Needed, but low-urgency or large file size. Examples might include FTP downloads and large email attachments.

1, 0  Marginal traffic with little or no business importance. Examples might include YouTube, mobile OS updates, music streaming, Internet radio, and games.
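The "only when the network is not busy" behavior of priority 0 falls out naturally from strict-priority scheduling, sketched below. This is a generic strict-priority drain for illustration; real priority policies interact with rate guarantees and partitions in ways this toy omits:

```python
from collections import deque

def drain(queues, budget_bytes):
    """Serve queues in strict priority order (7 high, 0 low) until the
    byte budget for this scheduling interval is exhausted.

    queues: dict mapping priority -> deque of packet sizes in bytes.
    Returns the (priority, size) packets sent, in order."""
    sent = []
    for prio in sorted(queues, reverse=True):
        q = queues[prio]
        while q and q[0] <= budget_bytes:
            pkt = q.popleft()
            budget_bytes -= pkt
            sent.append((prio, pkt))
    return sent

# Priority-0 traffic moves only with whatever budget survives the
# higher classes - i.e., only when the network is not busy.
queues = {7: deque([500]), 0: deque([500, 500])}
```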

Other Per-Session Policies

PacketShaper offers several other policies in addition to rate and priority policies. They include:

Discard policies: Intentionally block traffic. The packets are simply tossed, and no feedback is sent back to the sender. Usage examples: discard traffic from websites with questionable content; block attempts to Telnet into your site; block external FTP requests to your internal FTP server.

Never-admit policies: Similar to discard policies, except that the policy informs the sender of the block. Usage example: redirect music enthusiasts to a web page explaining that streaming audio is allowed only between 10:00 p.m. and 6:00 a.m.

Ignore policies: Simply pass traffic on, not applying any bandwidth management at all. Usage example: let traffic pass unmanaged when its destination is not on the other side of the managed WAN access link.

TCP Rate Control

PacketShaper’s patented TCP Rate Control works behind the scenes for all traffic with rate policies, optimizing a limited-capacity link. It overcomes TCP’s shortcomings, proactively preventing congestion on both inbound and outbound traffic. TCP Rate Control paces traffic, telling the end stations to slow down or speed up. It’s no use sending packets any faster if they will be accepted only at a particular rate once they arrive. Rather than discarding packets from a congested queue, TCP Rate Control paces the incoming packets to prevent congestion. It forces a smooth, even flow rate that maximizes throughput.

TCP Rate Control detects real-time flow speed, forecasts packet-arrival times, meters acknowledgments going back to the sender, and modifies the advertised window sizes sent to the sender. Just as a router manipulates a packet's header information to influence the packet's direction, PacketShaper manipulates a packet's header information to influence the packet's rate.
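The window arithmetic at the heart of this technique is the bandwidth-delay product: to pace a sender to a target rate, advertise a receive window of roughly rate × round-trip time. The function below is a simplified model of that calculation, not Blue Coat's patented implementation:

```python
def paced_window(target_bps, rtt_s, mss=1460):
    """Advertised-window size (bytes) that paces a TCP sender to
    roughly target_bps over a path with round-trip time rtt_s.

    Rounds down to whole segments, but never advertises less than one
    MSS - a zero window would stall the sender outright."""
    bdp_bytes = target_bps / 8 * rtt_s          # bandwidth-delay product
    return max(mss, int(bdp_bytes // mss) * mss)

# Pacing a flow to 400 Kbps over an 80 ms RTT: a 4000-byte BDP rounds
# down to two 1460-byte segments.
window = paced_window(400_000, 0.080)
```

Because the sender can never have more than one window of data in flight per RTT, shrinking the advertised window throttles the flow at the source, before packets ever queue at the congested link.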

Imagine putting fine sand through a straw or small pipe. Sand passes through the straw evenly and quickly. Now imagine putting chunky gravel through the same straw. The gravel gets stuck and arrives in clumps. PacketShaper conditions traffic so that it becomes more like sand than gravel. These smoothly controlled connections are much less likely to incur packet loss and, more importantly, they deliver a smooth and consistent user experience.


UDP Rate Control and Queuing

Unlike TCP, UDP sends data to a recipient without establishing a connection and does not attempt to verify that the data arrived intact. Therefore, UDP is referred to as a best-effort, connectionless protocol. The services that UDP provides are minimal – port number multiplexing and an optional checksum error-checking process – so UDP uses less time, bandwidth, and processing overhead than TCP.

While UDP doesn’t offer a high level of error recovery, it still has appeal for certain types of operations. UDP is used mostly by applications that require fast delivery and are not concerned with reliability – DNS, for example. Some UDP applications, such as RealAudio and VoIP, generate persistent, session-oriented traffic. Whenever an application uses UDP for transport, the application must take responsibility for managing the end-to-end connection, handling packet retransmission and other flow-control services native to TCP.

Because UDP doesn't manage the end-to-end connection, it doesn't get feedback regarding real-time conditions, and it can't prevent or adapt to congestion. Therefore, UDP can end up contributing significantly to an overabundance of traffic, impacting all protocols – UDP, TCP, and non-IP included. In addition, latency-sensitive flows, such as VoIP, can be delayed so much that they become useless.

UDP Control Mechanisms

PacketShaper combines techniques in rate control and queuing to deliver control over performance to UDP traffic.

PacketShaper is very effective in controlling outbound UDP traffic. When a client requests data from a server, the PacketShaper intervenes and paces the flow of outbound data, regulating the flow of UDP packets before they traverse the congested access link. It can speed urgent UDP traffic or give streams steady access.

Management of inbound traffic presents a bigger challenge. By the time inbound UDP traffic reaches a PacketShaper, it already has crossed the expensive, congested access link, and PacketShaper cannot directly control the link rate. However, PacketShaper can control the inbound UDP traffic rate to the destination host.

The PacketShaper queues incoming UDP packets on a flow-by-flow basis when they are not scheduled for immediate transfer, based on priority and competing traffic. PacketShaper’s UDP queues implement an important and helpful addition: a UDP delay bound. The delay bound defines how long packets can remain buffered before they become too old to be useful. For example, a delay bound of 200 ms is appropriate for a streaming audio flow. The delay bound keeps traffic from being held so long that the application retransmits it or can no longer use it.
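The delay-bound idea can be sketched in a few lines; this is a hypothetical illustration of the concept, not PacketShaper's implementation:

```python
import time
from collections import deque

class DelayBoundQueue:
    """Per-flow UDP queue that discards packets older than a delay bound."""
    def __init__(self, delay_bound_s=0.200):    # e.g. 200 ms for streaming audio
        self.delay_bound = delay_bound_s
        self.queue = deque()                    # entries: (enqueue_time, packet)

    def enqueue(self, packet, now=None):
        self.queue.append((time.monotonic() if now is None else now, packet))

    def dequeue(self, now=None):
        now = time.monotonic() if now is None else now
        while self.queue:
            t, packet = self.queue.popleft()
            if now - t <= self.delay_bound:
                return packet                   # still fresh enough to be useful
            # otherwise: too old to play -- drop rather than deliver stale audio
        return None

q = DelayBoundQueue(0.200)
q.enqueue("pkt-1", now=0.000)
q.enqueue("pkt-2", now=0.150)
# The scheduler services this flow at t = 0.250 s: pkt-1 (age 250 ms) is
# dropped, pkt-2 (age 100 ms) is delivered.
print(q.dequeue(now=0.250))   # -> pkt-2
```

Dropping stale packets this way costs nothing for real-time media, since a 250 ms-old audio packet would be discarded by the player anyway.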

Either priority or rate policies are appropriate for UDP traffic classes, depending on the traffic and its goals:

• A priority policy is best for UDP traffic that is transaction oriented.
• A rate policy is best for persistent UDP traffic (such as streaming media) because its guaranteed bits-per-second option can ensure a minimum rate for each UDP flow.

Many of the PacketShaper’s other control mechanisms are also appropriate for UDP traffic. UDP traffic management is part of a comprehensive strategy to manage the bandwidth and performance of many types of traffic and applications using PacketShaper’s different control features.

Packet Marking for MPLS and ToS

As discussed earlier, packet marking is a growing trend to ensure speedy treatment across the WAN and across heterogeneous network devices. CoS, ToS, and Diffserv technologies evolved to boost QoS. Multi-Protocol Label Switching (MPLS) is a popular standard for integrating the ability to specify a network path with class of service for consistent QoS. Network convergence of voice, video, and data has spurred interest in MPLS, with the goal of having one network that can support appropriate paths for each service. MPLS is a standards-based technology to improve network performance for select traffic. Traffic normally takes a variety of paths from point A to point B, depending upon each router’s decisions on the appropriate next hop. With MPLS, you define specific paths for specific traffic, identified by a label put in each packet.

The PacketShaper can classify, mark, and remark traffic based on IP CoS/ToS bits, Diffserv settings, and MPLS labels, allowing traffic types to have uniform end-to-end treatment by multivendor devices. By attending to marking and remarking, the PacketShaper can act as a type of universal translator, detecting intentions in one protocol and perpetuating those intentions with a different protocol as it forwards the packets.
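For reference, end hosts can set the same ToS/DSCP byte themselves through the standard sockets API. The sketch below marks a UDP socket with DSCP EF (Expedited Forwarding), the class commonly used for voice; it works as shown on Linux:

```python
import socket

# DSCP occupies the top six bits of the IP ToS byte, so DSCP EF
# (Expedited Forwarding, decimal 46) becomes a ToS byte of 46 << 2 = 0xB8.
DSCP_EF = 46
tos_byte = DSCP_EF << 2                       # 184

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)

# Every packet this socket sends now carries the EF marking, which
# Diffserv-aware routers (or a PacketShaper) can honor or remark.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # -> 184 on Linux
```

A middlebox like PacketShaper exists precisely because most applications never set this byte; it can apply or translate markings on their behalf.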


Enhance MPLS Performance

MPLS has become a leading vehicle for connecting an organization’s distributed locations. Most organizations adopt MPLS to take advantage of different classes of service and ensure appropriate application performance.

However, once MPLS is implemented, business organizations frequently discover that placing key applications into premium service classes does not reap the expected benefits. Why? An MPLS solution degrades as it faces three major challenges:

• The right traffic does not get placed in the right MPLS service class. Premium classes deliver sub-premium performance as they drown in copious non-urgent traffic; important applications are improperly assigned to only best-effort classes.

• Traffic gets hung up in a congested bottleneck just before each entry point to the provider’s MPLS network. In addition, unmanaged traffic heading into a LAN (inbound) grows unruly, using an inappropriately high flow rate.

• Organizations need information on the performance of each application and each service class transported over their MPLS network. Concrete, quantified service-level assessments are rare.

The PacketShaper complements MPLS installations and overcomes each of the challenges listed above as it:

• Detects, identifies, and classifies diverse applications, assigning distinct QoS tags. The PacketShaper can mark traffic with MPLS labels directly or can mark traffic with Diffserv tags that relay service-class intentions to the first router within the MPLS cloud.

• Ensures that the traffic within a particular MPLS service class is the right traffic, meant for that class. PacketShaper’s powerful and granular application classification ensures accurate and appropriate MPLS service-class assignments.

• Eases the bottlenecks that form at the entry points to MPLS networks with control features and rate control.

• Extends MPLS performance benefits to the network edge and users’ premises.

• Measures and graphs per-application and per-MPLS-class performance, enabling assessment of service-level agreement (SLA) compliance.

Incidentally, the PacketShaper offers similar features for VLANs that it does for MPLS – classifying traffic by VLAN; pushing, popping, and swapping VLAN identifiers and priorities; and putting each VLAN’s traffic on the right path to its destination.

Scheduling

Sometimes organizations need different control strategies at different times or for different days. For example:

• A middle school prohibits instant messaging during class hours but allows it during lunch or after school.

• A company’s network administrator blocks games and YouTube video on weekdays, but allows them on weekends.

• A sales-ordering application gets twice its usual bandwidth in the last two days of the month because the sales personnel typically deliver the most orders right before each monthly deadline.

With PacketShaper’s scheduling features, you can control performance differently at different times. You can vary your configuration details based on the day or the time of day. The choice of day can be daily, weekends, weekdays, specific dates, specific days of the week, and/or specific dates of the month.
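The middle-school example above boils down to a simple time-based policy lookup. The following hypothetical sketch shows the idea; the class and lunch hours are assumptions, not values from the product:

```python
from datetime import datetime

def im_policy(now: datetime) -> str:
    """Block instant messaging during weekday class hours (assumed 8:00-15:00,
    with a 12:00-13:00 lunch break); allow it at all other times."""
    weekday = now.weekday() < 5                          # Mon=0 .. Fri=4
    class_hours = (8 <= now.hour < 15) and not (12 <= now.hour < 13)
    if weekday and class_hours:
        return "block"
    return "allow"

print(im_policy(datetime(2015, 3, 2, 10, 0)))   # Monday 10:00  -> block
print(im_policy(datetime(2015, 3, 2, 12, 30)))  # Monday lunch  -> allow
print(im_policy(datetime(2015, 3, 7, 10, 0)))   # Saturday     -> allow
```

PacketShaper expresses the same logic declaratively in its schedule configuration rather than in code, but the decision table is equivalent.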

Adaptive and Automated Control Strategies

Most people don’t want to be caught unaware by significant network or application events. Automatic problem detection and notification help; however, problems remain problems even after someone is notified. It is the addition of automatic problem resolution that really makes a compelling difference. With automated correction, you don’t even have to know that a problem occurred in order to fix it, or at least address it temporarily.

Figure: Effects on bandwidth usage by recreational traffic after using rate policies and partitions on selected applications.

PacketShaper’s Adaptive Response feature automatically monitors for conditions of interest. Once a condition is found, it can perform any corrective actions you requested ahead of time for that problem. Adaptive Response uses PacketShaper features to detect the conditions, take corrective actions, and send notifications.

For example, suppose you support SAP on your network, and it’s one of your most critical applications. You have an MPLS WAN core providing four MPLS classes of service. You have already deployed PacketShapers to collaborate with MPLS for a more complete QoS solution. You decided to put SAP in the third service class, and the PacketShaper is dutifully marking SAP’s traffic appropriately. You defined a partition for SAP traffic with a minimum size of 15 percent of capacity. In addition, you defined a service-level goal that at least 92 percent of SAP transactions should complete within one and a half seconds.

Everything sounds great. Now, what happens when performance takes a nosedive? Even worse, how about at 4:00 a.m.?

Even without PacketShaper’s adaptive response feature, you are still in good shape — assuming you are available at 4:00 a.m. PacketShaper’s report on response times and service-level compliance highlights the problem, while other reports help diagnose the cause (perhaps FTP bursts, for example). You adjust your partitions and policies’ definitions to solve the problem. You might, for example, create a partition with a maximum for FTP, change FTP’s MPLS service class, and bump SAP’s minimum partition size to 18 percent of capacity. If you were not available, then SAP users would continue to suffer until you arrived to correct the problem.

With adaptive response, your day is different. You receive an email at 4:00 a.m. saying SAP experienced slow performance, and the PacketShaper has taken steps to mitigate the problem until you can investigate the root cause. Until then, SAP users are happy. To get this type of assistance, you configure the adaptive response feature ahead of time, when you initially configure SAP’s partition and assign its MPLS service class. You define an adaptive response agent to protect SAP’s performance more stringently when SAP’s service-level compliance dips. More specifically, you define an adaptive response agent with the following values:

• Condition or metric to monitor
SAP scenario example: Monitor the service-level compliance metric (service-level %) for the SAP traffic class

• Threshold that indicates a problem
SAP scenario example: Specify 92 percent as the percentage of SAP transactions that must complete promptly in order for SAP performance to be considered in good shape

• Frequency to check the metric or condition
SAP scenario example: Check performance every two minutes

• Whether automatic corrective actions are needed and, if so, which actions (any PacketShaper CLI (command-line interface) command with any parameters)
SAP scenario example: Yes, corrective actions are needed. Create a red action file (executed when the problematic threshold is crossed) that contains CLI commands to:
› Change the SAP partition’s minimum size to 25 percent
› Bump SAP’s MPLS class of service to the highest of the four options

• Whether notification is needed and, if so, which method (email, a Syslog server message, or an SNMP trap)
SAP scenario example: Yes, notification is needed. Send an email to yourself stating the percentage of slow SAP transactions and what measures were taken automatically as a stopgap.

With this adaptive response configuration in place, you could even get to work late following the 4:00 a.m. mishap and still not be greeted by cranky, frustrated users and urgent service requests.
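The agent's monitor-threshold-act cycle can be sketched as follows; the metric source, CLI-style commands, and notifier are illustrative stand-ins, not actual PacketShaper syntax:

```python
def adaptive_agent(read_compliance, run_cli, notify, threshold=92.0):
    """One polling cycle: check the metric, act and notify if it dips."""
    compliance = read_compliance()     # e.g. % of SAP transactions under 1.5 s
    if compliance < threshold:
        # "Red" actions, expressed as illustrative CLI-style command strings:
        run_cli("partition /Inbound/SAP min 25%")
        run_cli("class /Inbound/SAP mpls-class highest")
        notify(f"SAP compliance at {compliance:.1f}%; stopgap measures applied")
        return "red"
    return "green"

actions, notes = [], []
state = adaptive_agent(lambda: 88.5, actions.append, notes.append)
print(state, len(actions), len(notes))   # -> red 2 1
```

The real feature runs such a cycle on the schedule you set (every two minutes in the SAP scenario) and executes the red action file only on the green-to-red transition.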


Putting Control Features to Use

You’ve seen a variety of mechanisms to control traffic and its performance. But discussions of tools, no matter how powerful, aren’t really interesting until you put them to use and see their value. That’s what we’ll do in this section.

Characterizing Traffic

Managing bandwidth allocation for today’s traffic diversity is a definite challenge. Network traffic and applications do not share the same characteristics or requirements. We don’t have the same performance expectations for all traffic. Therefore, before choosing a control strategy, you must first characterize your goals and traffic.

First, consider whether your primary concern is application performance or traffic load. Typically, if you are concerned with keeping customers or employees productive, then you are concerned about application performance. But if you supply bandwidth to users or organizations, and you are not involved with the applications that run over that bandwidth, then you are concerned about capacity and traffic volume.

EXAMPLES WHERE PERFORMANCE IS FOREMOST:
• An enterprise providing applications to staff
• A service provider offering managed application services to subscribers
• A business using B2B or B2C applications to conduct commerce

EXAMPLES WHERE LOAD IS FOREMOST:
• A service that offers contracted amounts of bandwidth to businesses or individuals
• A university that supplies each dormitory room with an equitable portion of bandwidth

If your primary concern is load rather than performance, then skip ahead to Suggestions and Examples.

A good initial approach to managing performance is to manage two traffic categories proactively: traffic that needs to have its performance protected, and traffic that tends to swell to take an unwarranted amount of capacity.

For each type of traffic you want to manage, consider its behavior with respect to four characteristics: importance, time sensitivity, size, and jitter. Each characteristic below has an associated explanation and question to ask yourself, as well as several examples.

Importance

Sometimes the same application can be crucial to one organization’s function and just irritating noise on another’s network.

Ask yourself: Is the traffic critical to organizational success?

IMPORTANT:
• SAP to a manufacturing business
• Quake to a provider of gaming services
• PeopleSoft to a support organization
• Email to a business

NOT IMPORTANT:
• RealAudio to a non-related business
• Games in a business context
• Instant messaging in a classroom

Time Sensitivity

Some traffic, although important, is not particularly time sensitive. For example, for most organizations, print traffic is an important part of business. But employees and productivity will probably not be impacted if a print job takes a few extra seconds to reach the printer. In contrast, any critical application that leaves a user poised on top of the Enter key waiting for a response is definitely time sensitive.

Ask yourself: Is the traffic interactive or particularly latency sensitive?

TIME SENSITIVE:
• Telnet
• Citrix-based, interactive applications
• Oracle
• VoIP, web conferencing, online training

NOT TIME SENSITIVE:
• Print
• Email
• File transfers

For important and time-sensitive traffic, consider using a high priority in a priority policy (for small flows) or in a rate policy (for other flows). Consider a partition with a minimum size.


Size

A traffic session that tends to swell to use increasing amounts of bandwidth and produces large surges of packets is said to be “bursty.” TCP’s slow start algorithm creates or exacerbates traffic’s tendency to burst. As TCP attempts to address the sudden demand of a bursting connection, congestion and retransmissions occur.

Applications such as FTP, multimedia components of HTTP traffic, print, and video streaming are considered bursty since they generate large amounts of download data.

Users’ expectations for this traffic depend on the context. For example, if a large multimedia file is being downloaded for later use, the user may not require high speed as much as steady progress and the assurance that the download won’t have to be restarted.

Ask yourself: Are flows large and bandwidth hungry, expanding to consume all available bandwidth?

LARGE AND BURSTY:
• Video streaming
• Email with large attachments
• Print

SMALL AND NOT BURSTY:
• Telnet
• ICMP
• TN3270

For large and bursty traffic, consider a partition with a limit. If the bursty traffic is important, consider a partition with both a minimum and a maximum size. Consider a rate policy with a per-session limit if you are concerned that one high-capacity user might impact others using the same application. Consider a policy with a low or medium priority, depending on importance.

For small, non-bursty flows, consider a priority policy. Use a high priority if the small flow is important.

Jitter

An application that is played (video or audio) as it arrives at its network destination is said to stream. A streaming application needs a minimum bits-per-second rate to deliver smooth, satisfactory performance. Streaming media that arrives with stutter and static is not likely to gain many fans. On the other hand, too many fans can undermine performance for everyone, including users of other types of applications.

Ask yourself: Does the traffic require smooth, consistent delivery, or does it lose value without it?

PRONE TO JITTER:
• VoIP
• Windows Media
• RealAudio
• Distance-learning applications

NOT PRONE TO JITTER:
• Email
• Print
• MS SQL
• AppleTalk

For jitter-prone traffic, especially if it is also important, consider a rate policy with a per-session guarantee. If too many users might swamp a service, use a partition with a limit and admission control features.


Suggestions and Examples

The following table lists a few common traffic types, their characteristics, typical behavior, desired behavior, and control configuration suggestions.

FTP (important; sizeable/bursty)
• Usual undesired behavior: Stuck progress indicators; peaks clog WAN access, slowing more time-sensitive applications
• Desired behavior: No stalled sessions; sustained download progress; paced bursts
• Configuration suggestions: Rate policy with 0 guaranteed, burstable, medium priority; partition to contain the aggregate of all FTP traffic

WEB BROWSING (importance varies; time sensitive; sizeable/bursty)
• Usual undesired behavior: Unpredictable display and delay times
• Desired behavior: Prompt, consistent display
• Configuration suggestions: Rate policy with 0 guaranteed, burstable, medium priority, optional per-session cap

WEB-BASED APPLICATIONS (important; time sensitive)
• Usual undesired behavior: Insufficient and/or inconsistent response times
• Desired behavior: Prompt, consistent response
• Configuration suggestions: Rate policy with optional per-session guarantee, burstable, high priority, optional per-session cap; consider a partition with a minimum size to protect the application

TELNET (important; time sensitive)
• Usual undesired behavior: Slow, inconsistent performance
• Desired behavior: Immediate transfer for prompt response times; small size won’t impact others
• Configuration suggestions: Priority policy with a high priority

MUSIC DOWNLOADS IN A BUSINESS ENVIRONMENT (sizeable/bursty)
• Usual undesired behavior: Bursts and abundant downloads clog WAN access and undermine time-sensitive applications
• Desired behavior: Contained downloads using a small portion (or none) of network resources
• Configuration suggestions: Rate policy with 0 guaranteed, burstable, priority 0; partition to contain the aggregate of all users to less than 5 percent of capacity (or as desired)

SAP (important; time sensitive)
• Usual undesired behavior: Slow and unpredictable response times
• Desired behavior: Swift, consistent performance
• Configuration suggestions: Rate policy with 0 guaranteed, burstable, high priority; partition with a minimum size to protect all SAP traffic

A CONTRACTED AMOUNT OF BANDWIDTH (characteristics don’t apply)
• Usual undesired behavior: Some users claim more than their fair share and others are shorted
• Desired behavior: They get what they pay for
• Configuration suggestions: Dynamic partition to allocate each user’s or each subnet’s bandwidth equitably

VOICE OVER IP
• Configuration suggestions: Dynamic partition to allocate each user’s or each subnet’s bandwidth

For More Information

The Blue Coat PacketShaper helps enterprises control bandwidth costs, deliver a superior user experience, and align network resources with business priorities. In summary, PacketShaper offers application-level visibility and policy-based bandwidth allocation to boost or curb application performance over the WAN and Internet. Learn more about PacketShaper on our website at bluecoat.com/products/packetshaper.


APPENDIX A

How TCP Rate Control Works

TCP Review

The Transmission Control Protocol (TCP) provides connection-oriented services for the protocol suite’s application layer – that is, a client and a server must establish a connection to exchange data. TCP transmits data in segments encased in IP datagrams, along with checksums to detect data corruption and sequence numbers to ensure an ordered byte stream. TCP is considered a reliable transport mechanism because it requires the receiving computer to acknowledge not only the receipt of data but also its completeness and sequence. If the sending computer doesn’t receive an acknowledgment from the receiving computer within an expected time frame, the sender times out and retransmits the segment.

TCP uses a sliding-window flow-control mechanism to control throughput over wide-area networks. As the receiver acknowledges initial receipt of data, it advertises how much data it can handle, called its window size. The sender can transmit multiple packets, up to the recipient’s window size, before it stops and waits for an acknowledgment. The sender fills the pipe, waits for an acknowledgment, and fills the pipe again.
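The window mechanism implies a hard throughput ceiling: a sender can have at most one window of unacknowledged data in flight per round trip, so throughput is bounded by window size divided by round-trip time. A quick back-of-the-envelope calculation:

```python
def max_tcp_throughput(window_bytes: int, rtt_s: float) -> float:
    """Upper bound, in bytes per second, set by window size and RTT."""
    return window_bytes / rtt_s

# A classic 64 KB window over an 80 ms WAN round trip:
bps = max_tcp_throughput(65535, 0.080) * 8
print(f"{bps / 1e6:.2f} Mbit/s")   # ~6.55 Mbit/s, no matter how fast the link
```

This is why a single flow with a modest window can never fill a fast, high-latency link, and why window manipulation (as in TCP Rate Control below) is such an effective throttle.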

While the receiver typically handles TCP flow control, TCP’s slow-start algorithm is a flow-control mechanism managed by the sender and designed to take full advantage of network capacity. When a connection opens, only one packet is sent until an ACK is received. Each received ACK then lets the sender increase the amount of outstanding data, effectively doubling the transmission size each round trip, within the bounds of the recipient’s window.
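The doubling behavior per round trip can be simulated in a few lines; this is a simplified model that ignores congestion events:

```python
def slow_start_windows(receiver_window: int, rtts: int) -> list:
    """Congestion window per round trip under idealized slow start."""
    cwnd, history = 1, []
    for _ in range(rtts):
        history.append(cwnd)
        cwnd = min(cwnd * 2, receiver_window)   # double, capped by the
    return history                              # receiver's advertised window

print(slow_start_windows(receiver_window=32, rtts=7))
# -> [1, 2, 4, 8, 16, 32, 32]: exponential growth until the window cap
```

The exponential ramp is exactly what produces the bursts described elsewhere in this paper: a flow that was trivial a few round trips ago can suddenly be demanding a full window per RTT.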

TCP’s congestion-avoidance mechanism attempts to alleviate the problem of abundant packets filling up router queues. TCP increases a connection’s transmission rate using the slow-start algorithm until it senses a problem and then it backs off. It interprets dropped packets and/or timeouts as signs of congestion. The goal of TCP is for individual connections to burst on demand to use all available bandwidth, while at the same time reacting conservatively to inferred problems in order to alleviate congestion.

TCP Rate Control

Traffic consists of chunks of data that accumulate at access links where speed conversion occurs. To eliminate the chunks, TCP Rate Control paces or smoothes the flow by detecting a remote user’s access speed, factoring in network latency, and correlating this data with other traffic flow information. Rather than queuing data that passes through the box and metering it out at the appropriate rate, PacketShaper induces the sender to send just-in-time data. By changing the traffic chunks, or bursts, to optimally sized and timed packets, PacketShaper improves network efficiency, increases throughput, and delivers more consistent, predictable, and prompt response times.

TCP Rate Control uses three methods to control the rate of transmissions:

• Detects real-time flow speed
• Meters acknowledgments going back to the sender
• Modifies the advertised window sizes sent to the sender

Just as a router manipulates a packet’s header information to influence the packet’s direction, PacketShaper manipulates a packet’s header information to influence the packet’s rate. TCP autobaud is Blue Coat’s technology that allows appliances to automatically detect the connection speed of the client or server at the other end of the connection. This speed-detection mechanism allows PacketShaper to adapt bandwidth-management strategies even as conditions vary. PacketShaper incorporates a predictive scheduler that anticipates bandwidth needs and meters the ACKs and window sizes accordingly. It uses autobaud, known TCP behaviors, and bandwidth-allocation policies as predictive criteria.

PacketShaper changes the end-to-end TCP semantics from its position in the middle of a connection. First, using autobaud, it determines a connection’s transfer rate to use as a basis on which to time transmissions. The PacketShaper intercepts a transaction’s acknowledgment and holds onto it for the amount of time that is required to smooth the traffic flow and increase throughput without incurring retransmission timeout. It also supplies a window size that helps the sender determine when to send the next packet and how much to send in order to optimize the real-time connection rate.
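The core arithmetic behind the window manipulation is the bandwidth-delay product: to induce a sender to emit at a target rate, advertise a window of roughly rate × RTT bytes. The sketch below shows only this calculation; PacketShaper's actual predictive scheduler is more elaborate:

```python
def paced_window(target_rate_bps: float, rtt_s: float) -> int:
    """Window (bytes) that limits a sender to target_rate_bps over one RTT."""
    return int(target_rate_bps / 8 * rtt_s)

# Pace a bulk flow to 2 Mbit/s across a 60 ms round trip: the sender may
# emit at most this many bytes before the next (metered) ACK arrives.
print(paced_window(2_000_000, 0.060))   # -> 15000
```

Because the sender's own TCP stack obeys the advertised window, the flow is throttled at its source, with no queuing or packet drops needed at the middlebox.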


Evenly spaced packet transmissions yield significant multiplexing gains in the network. As packet bursts are eliminated, network utilization can increase up to 80 percent. Packet spacing also avoids the queuing bias imposed by weighted fair queuing schemes, which force packet bursts to the end of a queue, giving preference to low-volume traffic streams. Thus, sand-like packet transmissions yield increased network utilization and proceed cleanly through weighted fair queues.

In this packet diagram, PacketShaper intervenes and paces the data transmission to deliver predictable service.

The sequence described by the packet diagram includes:

• A data segment is sent to the receiver.
• The receiver acknowledges receipt and advertises an 8000-byte window size.
• PacketShaper intercepts the ACK and determines that the data must be transmitted more evenly. Otherwise, subsequent data segments will queue up and packets will be tossed because insufficient bandwidth is available. In addition, more urgent, smaller packets from interactive applications would be held behind the flood of this bulkier data.
• PacketShaper revises the ACK to the sender; the sender immediately emits data according to the revised window size.

The following illustration provides an example of the traffic patterns when TCP’s natural algorithms are used. Note that the second packet must be transmitted twice because the acknowledgment of its receipt did not reach the sender in time: an unnecessary waste. Near the bottom, observe the packet burst that occurs. This burst is typical of TCP’s slow-start growth, which begins small but balloons later, and is what causes congestion and buffer overflow. The figure on the right provides an example of the evenly spaced data transmissions that occur when TCP Rate Control is active. This even spacing not only reduces router queues but also increases average throughput, since it uses more of the bandwidth more of the time.

Without PacketShaper: Chunky traffic flow, less throughput, bursty, sporadic transfer, more retransmissions.

With PacketShaper: Smooth traffic flow, more throughput, consistent transfer.


Blue Coat Systems Inc.

www.bluecoat.com

Corporate Headquarters

Sunnyvale, CA

+1.408.220.2200

EMEA Headquarters

Hampshire, UK

+44.1252.554600

APAC Headquarters

Singapore

+65.6826.7000

© 2015 Blue Coat Systems, Inc. All rights reserved. Blue Coat, the Blue Coat logos, ProxySG, PacketShaper, CacheFlow, IntelligenceCenter, CacheOS, CachePulse, Crossbeam, K9, the K9 logo, DRTR, MACH5, PacketWise, Policycenter, ProxyAV, ProxyClient, SGOS, WebPulse, Solera Networks, the Solera Networks logos, DeepSee, “See Everything. Know Everything.”, “Security Empowers Business”, and BlueTouch are registered trademarks or trademarks of Blue Coat Systems, Inc. or its affiliates in the U.S. and certain other countries. This list may not be complete, and the absence of a trademark from this list does not mean it is not a trademark of Blue Coat or that Blue Coat has stopped using the trademark. All other trademarks mentioned in this document owned by third parties are the property of their respective owners. This document is for informational purposes only. Blue Coat makes no warranties, express, implied, or statutory, as to the information in this document. Blue Coat products, technical services, and any other technical data referenced in this document are subject to U.S. export control and sanctions laws, regulations and requirements, and may be subject to export or import regulations in other countries. You agree to comply strictly with these laws, regulations and requirements, and acknowledge that you have the responsibility to obtain any licenses, permits or other approvals that may be required in order to export, re-export, transfer in country or import after delivery to you.
