
Enterprise Network Control and Management:

Traffic Flow Models

William Maruyama, Mark George, Eileen Hernandez, Keith LoPresto and Yea Uang

Interactive Technology Center, Lockheed Martin Mission Systems

1260 Crossman Avenue Sunnyvale, CA 94089 (408) 734-6732, Fax (408) 734-6034

E-mail: bill.maruyama@lmco.com

Abstract

The exponential growth in demand for network bandwidth is expanding the market for broadband satellite networks. It is critical to rapidly deliver ubiquitous satellite communication networks that are differentiated by lower cost and increased Quality of Service (QoS). New network architectures and control and management systems must be developed to meet future commercial and military traffic requirements, services and applications. The next-generation communication networks must support legacy and emerging network traffic while providing user-negotiated levels of QoS. Network resource control algorithms must be designed to provide guaranteed performance levels for voice, video and data traffic with differing service requirements. To evaluate network architectures and performance, it is essential to understand the network traffic characteristics.

I. INTRODUCTION

This paper provides a baseline enterprise network traffic model characterized in terms of application, protocol, packet and byte distributions. The metrics are defined by protocol, percentage composition of traffic and arrival distributions. A Hybrid Network Testbed (HyNeT) and a network management tool suite, the Integrated Network Monitoring System (INMS), were developed by the Interactive Technology Center (ITC), Lockheed Martin Mission Systems, to automate the complex process of performing network evaluations. This unique hardware-based network simulation capability is utilized to generate, monitor and record network performance metrics. These characteristics are important for evaluating design decisions such as the placement of network services, the choice of component algorithms and the resource allocations needed to operate an efficient communication network.

The traffic models and methodologies described in this paper are based on the data captured by the INMS from a Lockheed Martin network segment. The traffic sampling is representative of a network segment within a multinational corporation requiring high-bandwidth network connectivity across geographically dispersed locations. The data set includes Simple Network Management Protocol (SNMP), Remote Monitoring (RMON) and packet trace metrics. The INMS data archive is continually expanded to include additional time and topological sampling points for further analysis and trend studies. Samples were selected to develop traffic flow characteristics for network protocols, to be used to specify flows for performance, accounting and bundling. The HyNeT provides "hardware in the loop" simulation for high-fidelity analysis of communication network architectures utilizing the traffic models. Currently, the traffic data is being used to characterize the future traffic demands for an advanced military network system.

II. NETWORK TOPOLOGY

The data samples were collected at the uplink interface for the local segment, consisting of approximately 50 subnets supporting 7400 workstations. This point of interest, depicted in Fig. 1, reference point (A), is a candidate for satellite replacement, augmentation or secondary backup. The Local Area Network (LAN) segment is currently connected into the corporate Wide Area Network (WAN) via a 10-megabit/sec Permanent Virtual Circuit (PVC) allocated from a Digital Service level-3 (DS-3) Asynchronous Transfer Mode (ATM) service.

[Fig. 1. Network topology: the local campus segment (reference point A) and regional segments connect to the corporate ATM backbone and the Internet over DS-3 links carrying 10-megabit PVCs.]


Distributed network monitoring equipment was deployed at key reference points. The corporate backbone uplink and interfaces into the local campus backbone were instrumented. The majority of the local network utilizes Ethernet technology. The campus backbone employs 100-megabit switched Ethernet technology before connecting into the corporate ATM cloud. The data utilized in this report was collected from a 100-megabit Ethernet SPAN (Switched Port Analyzer) port, enabling monitoring of the traffic on the ATM uplink, as depicted in Fig. 2.

[Figure: monitoring points on the local campus segment: backbone switches and routers with VLANs and Fast Ethernet links uplink over ATM DS-3 to the corporate backbone; a network sniffer and RMON2 probe attach via SPAN ports, with building segments hanging off the backbone.]

Fig. 2. Network Monitoring Points

III. MEASUREMENT METHODOLOGY

The selected network statistics consist of traffic volume categorized by bytes, packets and flows. Detailed information includes distributions of traffic composition categorized by network protocols and flows. The data collection mechanism consists of several monitoring techniques: Simple Network Management Protocol (SNMP), Remote Monitoring version 2 (RMON-2) and packet analyzer polls. These techniques were used to provide a complete picture of the traffic, consisting of statistical data for a 24-hour time period with high-resolution traces for smaller sample periods.

Interface packet and byte counts were extracted by INMS SNMP agent queries of the routers and switches. The INMS also queried packet and protocol statistics captured by passive RMON2 monitoring probes, providing higher-level application information categorized by TCP and UDP port assignments. The SNMPv2 and RMON-2 statistics provide network utilization data with 24-hour by 7-day coverage.
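As a concrete illustration of this kind of counter poll, the sketch below issues an SNMPv2c GET for the ifInOctets and ifOutOctets counters of an uplink interface. It assumes the open-source pysnmp library; the router address, community string and interface index are hypothetical placeholders, not values from the measured network.

```python
# Hedged sketch of an SNMP interface-counter poll (pysnmp assumed).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

ROUTER = "192.0.2.1"   # hypothetical uplink router address
IF_INDEX = 2           # hypothetical ifIndex of the uplink interface

error_indication, error_status, _, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),        # SNMPv2c read community
    UdpTransportTarget((ROUTER, 161)),
    ContextData(),
    ObjectType(ObjectIdentity("IF-MIB", "ifInOctets", IF_INDEX)),
    ObjectType(ObjectIdentity("IF-MIB", "ifOutOctets", IF_INDEX)),
))

if error_indication or error_status:
    raise RuntimeError(f"SNMP poll failed: {error_indication or error_status}")
for name, value in var_binds:
    print(name.prettyPrint(), "=", int(value))
```

Polling such counters every 5 minutes, as the INMS did, yields the byte and packet time series plotted in Section IV.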

An Ethernet packet analyzer tool was used to capture the first 64 bytes of every packet. Saving this detailed protocol information created extremely large data sets for even short periods of time. Detailed packet categorization and flow analysis was performed by custom ITC software. The packet analysis software extracts the packet arrival time, length, source and destination IP addresses, transport type, source and destination ports and data length. The analysis module tracks flows defined by a unique <source IP address, source port, destination IP address, destination port, transport protocol> quintuple. Flow-related statistics are reported every 1-second (tunable) interval. Flows that have not seen a packet for a 1-second (tunable) period are considered expired and removed from the list of active connections.
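The tracking behavior just described can be sketched in a few lines: packets are keyed by the quintuple, per-flow counters accumulate, and a periodic sweep expires idle flows. This is a minimal Python reconstruction of the stated logic, not the ITC software itself; the packet-record fields are assumed from the text.

```python
from dataclasses import dataclass

@dataclass
class FlowStats:
    packets: int = 0
    bytes: int = 0
    last_seen: float = 0.0

class FlowTracker:
    """Tracks uni-directional flows keyed by the
    <src IP, src port, dst IP, dst port, transport> quintuple."""

    def __init__(self, timeout=1.0):
        self.timeout = timeout     # idle expiration in seconds (tunable)
        self.active = {}           # quintuple -> FlowStats
        self.expired = []

    def add_packet(self, ts, src_ip, src_port, dst_ip, dst_port, proto, length):
        key = (src_ip, src_port, dst_ip, dst_port, proto)
        flow = self.active.setdefault(key, FlowStats())
        flow.packets += 1
        flow.bytes += length
        flow.last_seen = ts

    def sweep(self, now):
        """Expire flows that have not seen a packet within the timeout."""
        for key in [k for k, f in self.active.items()
                    if now - f.last_seen >= self.timeout]:
            self.expired.append((key, self.active.pop(key)))

# Two packets of one flow, then a sweep one second after the last packet.
tracker = FlowTracker(timeout=1.0)
tracker.add_packet(0.00, "10.0.0.1", 4321, "10.0.0.2", 80, "tcp", 512)
tracker.add_packet(0.25, "10.0.0.1", 4321, "10.0.0.2", 80, "tcp", 1460)
tracker.sweep(1.30)
print(len(tracker.active), len(tracker.expired))   # -> 0 1
```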

IV. NETWORK TRAFFIC UTILIZATION

The INMS SNMP agents collected data on March 2, 1999, providing a 24-hour view of traffic sampled at 5-minute intervals, shown in Fig. 3. As expected, the network activity is higher during working hours. The network traffic is highest during normal working hours between 7am and 8pm, with network utilization running at 5 megabits/second (8% utilization) and peaking around 11am-1pm at 12 megabits/second (27% utilization). The percentage utilization is reflective of the directly connected 100-megabit Ethernet link. However, the uplink capacity is limited to the 45-megabit DS-3 interface connecting the local campus to the corporate backbone.
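The utilization figures follow directly from successive octet-counter samples: the byte delta over the 5-minute poll interval gives a bit rate, which is then expressed as a percentage of the link speed. A small worked sketch with illustrative counter values (not taken from the measured data):

```python
# Utilization from two successive ifInOctets samples (values are made up).
POLL_INTERVAL = 300        # seconds (5-minute sampling)
LINK_SPEED = 100e6         # directly connected 100-Mbit Ethernet

octets_t0 = 1_200_000_000
octets_t1 = 1_387_500_000  # 187.5 Mbytes transferred during the interval

bits_per_sec = (octets_t1 - octets_t0) * 8 / POLL_INTERVAL
utilization = bits_per_sec / LINK_SPEED * 100

print(f"{bits_per_sec / 1e6:.1f} Mbit/s = {utilization:.1f}% of the Ethernet link")
# -> 5.0 Mbit/s = 5.0% of the Ethernet link; against the 45-Mbit DS-3
#    uplink the same rate would be roughly 11% utilization.
# A production collector must also handle 32-bit counter wrap-around.
```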

[Figure: input/output octets, unicast and multicast packet counts, and error counts per 5-minute interval over March 2, 1999, plotted on a logarithmic scale; error counts were zero throughout.]

Fig. 3. SNMP Link Utilization Data

The INMS RMON2 agents collected data for January 29, 1999, providing a 24-hour view of traffic sampled at 5-minute intervals. The pre-configured protocols and applications were captured on the local campus uplink. As expected, IP is the dominant network protocol, comprising 95% of the total network traffic. The non-IP traffic maintained a relatively stable utilization while the IP traffic utilization fluctuated with the working hours. A utilization spike of roughly 5 megabits/sec, roughly double the normal traffic level, was recorded for January 29, 1999, as depicted in Fig. 4.


[Figure: utilization percentage of IP, IPX, AppleTalk and DECnet traffic per 5-minute interval over the January 28-30, 1999 sample period; IP dominates, peaking above 8%.]

Fig. 4. Network Protocol Utilization

The INMS RMON agents also collected IP protocol counters, providing a 24-hour view of TCP/IP and UDP/IP utilization for January 29, 1999. The traffic is depicted in Fig. 5. IP traffic comprised 95% of the network protocol bytes flowing on the uplink. TCP was the dominant IP protocol, comprising 90% of the IP traffic; similar protocol distributions were observed in 1997 [1]. The ratio of TCP to UDP bytes was roughly 10:1. The TCP traffic was user generated and fluctuated widely, rising and falling with the working hours, while the autonomously generated UDP traffic was relatively constant. The increased utilization of the network links was due to TCP traffic.

[Figure: utilization percentage of ICMP (IP-1), TCP (IP-6), UDP (IP-17) and other IP traffic per 5-minute interval over January 29, 1999; TCP peaks above 11% around midday.]

Fig. 5. IP Protocol Utilization

The INMS RMON2 agents captured common IP applications. Fig. 6 depicts seven application utilization percentages. HTTP traffic contributed 78% of the total traffic for the 24-hour period, FTP data comprised 7% and SMTP comprised 11%. The fluctuation in HTTP traffic drove the utilization of the link. FTP utilization occasionally peaked at up to 5 megabits/sec of the network bandwidth, comprising close to 50% of the network traffic. The combined HTTP and FTP data closely reflect the aggregate traffic curves.

[Figure: utilization percentage of FTP-data, Telnet, SMTP, HTTP, port 1525, X-Server and HTTPS traffic per 5-minute interval over January 29, 1999; HTTP dominates, peaking above 5%.]

Fig. 6. TCP Port Utilization

Early Internet studies [2] summarizing traffic growth trends on the NSFNET backbone showed exponential growth in 1994, as Web traffic began to overtake the dominant file transfer and mail applications. In April 1995, the traffic distribution by packet count was: Other (27%), HTTP (21%), FTP-data (14%), NNTP (8%), Telnet (8%), SMTP (6%) and Domain (5%). A 1998 MCI/vBNS Internet study [3] reported that the predominant traffic was HTTP, comprising 70% of the packets. Other applications previously contributing significant percentages of traffic had reduced contributions: FTP-data (3%), NNTP (1%), Telnet (1%), SMTP (5%) and Domain (3%). HTTP is the application driving bandwidth utilization on the Internet and on corporate networks.

V. NETWORK TRAFFIC FLOWS

A 7-minute packet trace was taken from the local campus uplink on February 22, 1999. The packet trace data verified bandwidth utilization similar to the previous measurement techniques. This section of the report utilizes the flow concept to provide additional traffic information. The general definition of a flow [2] is a sequence of packets traveling from a source to a destination, without assuming any particular type of service. The particular definition used in this section defines a flow as uni-directional, distinguished by the source, destination and application. The flow reporting and timeout parameters selected for the graphs and analysis in this report were set to 1-second timing intervals.

The ITC Packet Analysis software was used to produce bandwidth utilization graphs for the aggregate traffic transmitted on the campus uplink, as depicted in Fig. 7. The TCP and UDP bytes reflected the same utilization as reported by the RMON2 agents. Within the 7-minute sample period the aggregate traffic peaked at 11 Mbit/sec and averaged 6 Mbit/sec. The TCP/UDP byte ratio was approximately 5:1. The average link utilization was 750 Kbytes/sec with a standard deviation of 220 Kbytes/sec. The link averaged 1500 packets/sec with a standard deviation of 315 packets/sec.


Approximately 80,000 flows were detected, averaging 190 new flows per second. For the 1-second reporting period, the software maintained state for roughly 340 active flows.

[Figure: TCP, UDP and aggregate flows, packets and bytes per 1-second interval over the 7-minute capture period, plotted on a logarithmic scale.]

Fig. 7. Traffic Flows - Bytes, Packets and Flows

The definition of the flow timeout parameter and reporting period had significant effects on the number of active flows. The larger the timeout period, the larger the flow tables and the greater the processing overhead to maintain the active flow states. The flow definition is designed specifically not to utilize protocol state information; instead, an expiration timeout determines the end of a flow. This report utilized a timeout expiration of 1 second in an attempt to accurately map flows to application-level connections while minimizing the number of active flows maintained for processing.

The following table lists the total number of flows, the average number of active flows and the new flows per second observed at each flow expiration timeout value.

Flow Timeout Value (secs)      0.250    0.5      1       5       10      60
Number of Flows               136000   95000   80000   60000   55000   49000
Avg. Number of Active Flows     170*    230*     340    1000    1600    7000
New Flows per Second             320     225     190     140     130     115

* The average number of active flows is reported over a 1-second interval; if the expiration timer is less than 1 second, the interval is equal to the flow timeout value.

This report utilized a 1-second timeout and a 1-second reporting period to map flows to application connections. Utilizing smaller timeouts and reporting periods may provide more accurate flow mappings for most applications, although some application connections would be split into multiple flows. Small timeouts and reporting periods enable real-time flow information that could be utilized to manage router queues to provide some level of QoS.
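To see how the timeout value drives the counts in the table above, the same trace can be replayed under several timeout settings: whenever the gap between packets sharing a quintuple exceeds the timeout, a new flow is counted. A self-contained sketch of that sweep, using a toy trace rather than the measured one:

```python
def count_flows(trace, timeout):
    """Count flows in a trace of (timestamp, quintuple) records; a packet
    arriving more than `timeout` seconds after its predecessor on the
    same quintuple starts a new flow."""
    last_seen = {}
    flows = 0
    for ts, key in sorted(trace):
        if key not in last_seen or ts - last_seen[key] > timeout:
            flows += 1
        last_seen[key] = ts
    return flows

# Toy trace: one quintuple with a 2-second silence in the middle.
K = ("10.0.0.1", 4321, "10.0.0.2", 80, "tcp")
trace = [(0.0, K), (0.4, K), (0.9, K), (2.9, K), (3.1, K)]

for timeout in (0.25, 0.5, 1, 5):
    print(f"timeout={timeout}s -> flows={count_flows(trace, timeout)}")
# Shorter timeouts split the same connection into more flows, mirroring
# the measured trend (136,000 flows at 0.25 s versus 49,000 at 60 s).
```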

Fig. 8 depicts the flow size distribution: 21% of the flows are smaller than 99 bytes, and 99% of the flows are smaller than 10 Kbytes.

[Figure: number of flows per flow-size bucket, from under 99 bytes up to 1 Gbyte, for the aggregate traffic and for DNS, FTP, HTTP, SMTP, SNMP and Telnet.]

Fig. 8. Flow Byte Distributions

Fig. 10 depicts the flow duration distribution: 31% of the transfers complete within a millisecond, and the majority of these are single-packet flows. The packet analyzer code uses a 1/2-millisecond constant to pad the last packet transfer time for each flow, so single-packet flow durations default to this pad. 40% of the flows are between 1/2 and 5 seconds in duration.
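The duration and rate calculations follow the padding rule just described: the last packet of each flow is credited a 1/2-millisecond transfer time, so a single-packet flow has a 0.5-ms duration by default. A sketch of that arithmetic:

```python
LAST_PACKET_PAD = 0.0005   # 1/2-millisecond pad for the final packet

def flow_duration(first_ts, last_ts):
    """Padded flow duration; a single-packet flow (first_ts == last_ts)
    defaults to the pad itself."""
    return (last_ts - first_ts) + LAST_PACKET_PAD

def flow_rate_kbytes(total_bytes, first_ts, last_ts):
    """Average data rate in Kbytes/sec over the padded duration."""
    return total_bytes / flow_duration(first_ts, last_ts) / 1000

# Single-packet flow: 512 bytes over the 0.5-ms pad -> 1024 KB/s.
print(flow_rate_kbytes(512, 10.0, 10.0))
# Multi-packet flow: 150 Kbytes over ~2 s -> 75 KB/s.
print(flow_rate_kbytes(150_000, 10.0, 11.9995))
```

Note that the pad lets even very small flows register high computed rates, which is one plausible contributor to the large share of fast transfers reported below.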

[Figure: number of flows per data-rate bucket, from 1 KB/s to over 2,500 KB/s, for the aggregate traffic and for DNS, FTP, HTTP, SMTP, SNMP and Telnet.]

Fig. 9. Flow Rate Distributions

The calculated data rates, depicted in Fig. 9, show that a surprising 18% of the transfers exceed a 1-Mbit/second data rate.


[Figure: number of flows per duration bucket, from 1 ms to over 1,000 seconds, for the aggregate traffic and for DNS, FTP, HTTP, SMTP, SNMP and Telnet.]

Fig. 10. Flow Duration Distributions

Fig. 11 depicts the number of packets per flow: 30% of the flows contain a single packet, and 90% of the flows consist of 10 or fewer packets.

[Figure: number of flows per packets-per-flow bucket, from 1 to over 5,000 packets, for the aggregate traffic and for DNS, FTP, HTTP, SMTP, SNMP and Telnet.]

Fig. 11. Packet Count Distributions

Fig. 12 depicts the number of data bytes per packet. 45% of the packets contain no user data; these packets can be accounted for by TCP acknowledgments and other network control messages. Some TCP optimizations cluster these packets to conserve bandwidth.

[Figure: number of packets per user-data-size bucket, from 0 to 1,024+ bytes per packet, for the aggregate traffic and for DNS, FTP, HTTP, SMTP, SNMP and Telnet.]

Fig. 12. Bytes per Packet Distributions

VI. SUMMARY

The corporate network traffic at the WAN uplink displays characteristics similar to Internet traffic models. The dominant network traffic is HTTP running over TCP/IP. These models can provide detailed characteristics of the data-type packets for multimedia networks. The flow data can also be directly imported into simulation tools to provide a higher degree of accuracy for network design and planning. These data models are used on the HyNeT to generate traffic for analyzing gateway designs, flow multiplexing and bandwidth allocation prototypes.

The ITC utilized this data to analyze the efficiency of a TCP gateway for high-latency networks. The benefit and sizing of the TCP gateway design depend on the TCP connection characteristics, which were extracted from the traffic flow models. The report [4] measured noticeable benefits once a data set exceeds 10 kilobytes. The data from the traffic models support the need for TCP gateways: the 7-minute sample contained 196 flows larger than 10 kilobytes, 3 of which were at least a megabyte. These large flows were generally associated with FTP, HTTP and SMTP data transfers, suggesting that classifying network flows and routing only HTTP, FTP and SMTP flows to the gateway would improve the gateway efficiency.
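A hedged sketch of that classification policy appears below: divert only TCP flows on the HTTP, FTP and SMTP ports to the gateway, since those applications carried the transfers past the 10-kilobyte benefit threshold. The port set and the routing decision are illustrative assumptions, not the ITC gateway design.

```python
# Hypothetical flow classifier for a TCP gateway: divert only the
# applications observed to carry the large (>10 KB) transfers.
GATEWAY_PORTS = {20, 21, 25, 80}   # FTP-data, FTP control, SMTP, HTTP

def route_via_gateway(src_port, dst_port, transport):
    """Return True if the flow should traverse the TCP gateway."""
    if transport != "tcp":
        return False
    return src_port in GATEWAY_PORTS or dst_port in GATEWAY_PORTS

print(route_via_gateway(4321, 80, "tcp"))   # HTTP -> gateway (True)
print(route_via_gateway(4321, 53, "udp"))   # DNS  -> direct  (False)
```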

VII. REFERENCES

[1] B. Hine, P. Hontalas, T. Fong, "Lockheed Martin Corporate Traffic Estimate", Fourth Planet, Inc. Los Altos, CA, Sept 1997, unpublished.

[2] K.D. Frazer, "NSFNET: A Partnership for High-Speed Networking, Final Report 1987-1995", Merit Network Inc., 1995.

[3] G. Miller and K. Thompson, "The Nature of the Beast: Recent Traffic Measurements from an Internet Backbone", paper 473, INET'98 Conference, Geneva, Switzerland, July 1998.

[4] V. Bharadwaj, "Optimizing TCP/IP for Satellite Environments, Phase 1 Implementation Report", Center for Satellite and Hybrid Communication Networks, University of Maryland, College Park, 1998.
