Load Balancing in IP/MPLS Networks: A Survey

There are many ways to balance traffic in a network. In [27], traffic sharing among multiple service paths is adopted in a hierarchical routing network. Connections with similar attributes are assigned to several different service paths so that network resources are utilized more efficiently, and variable weights are used to adjust the traffic distribution. Traffic sharing and variable weights are complementary methods of meeting the load-balancing requirement: they not only address the unreasonable resource use caused by topology aggregation and the SPF algorithm, but also reduce the blocking probability and enhance the survivability of networks. Based on this principle, a novel route-selection algorithm, VWTB, is proposed in [27], which is shown to yield good routing performance.
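The idea of variable-weight traffic sharing can be sketched as follows. This is an illustrative toy, not the VWTB algorithm from [27]: each candidate path carries a weight equal to its remaining capacity, the weight shrinks as the path fills, and a connection is blocked only when the chosen path cannot carry it. All names (`pick_path`, `admit`, the `paths` layout) are assumptions for illustration.

```python
def pick_path(paths):
    """paths: dict name -> {'capacity': c, 'load': l}.
    Variable weight = remaining capacity; pick the largest weight."""
    return max(paths, key=lambda name: paths[name]['capacity'] - paths[name]['load'])

def admit(paths, demand):
    """Assign a connection of size `demand` to the currently best path,
    or return None (the connection is blocked)."""
    best = pick_path(paths)
    if paths[best]['capacity'] - paths[best]['load'] < demand:
        return None  # blocked: even the best path lacks headroom
    paths[best]['load'] += demand
    return best
```

As connections accumulate, the weights shift and later connections spill onto alternate paths, which is the load-spreading effect the survey describes.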

A Survey of Various Load Balancing Techniques and Enhanced Load Balancing Approach in Cloud Computing

Cloud computing has become widespread in the last few years. The cloud provides flexible Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) [3]. Cloud computing, an Internet-based development in which dynamically scalable and frequently virtualized resources are provided as a service over the Internet, has become a significant trend [3]. Using Internet technologies, massively scalable IT-related capabilities are provided to multiple external consumers "as a service" in cloud computing [13]. Private and public are the two types of cloud. Depending on application demand, cloud computing infrastructure allows forming a variable number of virtual machine instances. There are supplementary services offered by the cloud apart from the basic cloud services [14]. Enterprise communication solutions are outsourced to customers through Communication as a Service (CaaS) [15]. CaaS vendors are responsible for the management of hardware and software: Voice over IP, instant messaging, video conferencing, softphones, and multimedia conferencing, i.e. all activity related to communication.

Implementation of Traffic Engineering in MPLS Networks by Creating TE Tunnels Using Resource Reservation Protocol and Load balancing the Traffic

to suboptimal use of available bandwidth between a pair of routers in the service provider network. Predominantly, the suboptimal paths are under-utilized in IP networks. To avoid packet drops due to inefficient use of available bandwidth and to provide better performance, TE is employed to steer some of the traffic destined to follow the optimal path to a suboptimal path to enable better bandwidth management and utilization between a pair of routers. TE, hence, relieves temporary congestion in the core of the network on the primary or optimal cost links. TE maps flows between two routers appropriately to enable efficient use of already available bandwidth in the core of the network. The key to implementing a scalable and efficient TE methodology in the core of the network is to gather information on the traffic patterns as they traverse the core of the network so that bandwidth guarantees can be established.
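The flow-mapping step described above can be sketched with a hash-based split: packets of one flow always take the same tunnel (preserving ordering), while aggregate traffic divides between the optimal and suboptimal tunnels roughly in proportion to configured weights. This is a generic illustration of unequal-cost flow mapping, not the specific TE implementation in the paper; the function and tunnel names are assumptions.

```python
import hashlib

def map_flow(src, dst, tunnels):
    """tunnels: list of (name, weight). Deterministically map the flow
    identified by (src, dst) to a tunnel, weighted by the given ratios."""
    total = sum(w for _, w in tunnels)
    h = int(hashlib.md5(f"{src}->{dst}".encode()).hexdigest(), 16) % total
    for name, w in tunnels:
        if h < w:
            return name
        h -= w
    return tunnels[-1][0]
```

With weights 3:1 the primary (optimal) tunnel carries roughly three quarters of the flows and the suboptimal tunnel absorbs the rest, relieving the primary links without reordering any single flow.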

A Survey on Load Balancing Gateway Selection Techniques in Wireless Mesh Networks

Wireless mesh networks (WMNs) have emerged as a key technology for next-generation wireless networks, showing rapid progress and inspiring numerous applications. WMNs are significantly attractive to network operators for providing new applications that cannot easily be supported by other wireless technologies. The major incentives for deploying wireless mesh networks come from their envisioned advantages: extended coverage, robustness, self-configuration, easy maintenance, and low cost. The defining characteristics of a WMN are its self-organizing and self-configuring properties [3], which enable the nodes in the mesh network to automatically establish and maintain connectivity among themselves.

Survey on Load Balancing Protocols in MANETs (Mobile Ad-hoc Networks)

This protocol takes the load on intermediate nodes as the metric for route selection, and it monitors the status of active routes so that new paths can be constructed when nodes on a route have overloaded interface queues. A route request (RRQ) packet is flooded from the source to discover routes. DLAR builds a route dynamically when there is no information about the destination node. It uses backward learning: it records the <source, destination> pair and the previous hop. Load information is attached to the RRQ packet. The destination then waits for some time to learn about all possible routes. DLAR does not send a route reply along a congested route, since doing so would aggravate the congestion. During active sessions, nodes piggyback their load information on data packets; from the information in these packets the destination learns whether the path is congested. If it is, a new, lightly loaded path is constructed so that data packets can be sent safely over that route to the destination without congestion.
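The destination's decision step can be sketched in a few lines. This is an assumed simplification of DLAR's route selection, not the protocol itself: each collected route request carries the summed queue load of its intermediate nodes, and after the waiting window the destination replies along the least-loaded path.

```python
def select_route(route_requests):
    """route_requests: list of (path, total_queue_load) tuples gathered
    while the destination waits. Returns the least-loaded path, or None
    if no request arrived."""
    if not route_requests:
        return None
    path, _ = min(route_requests, key=lambda r: r[1])
    return path
```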

Survey of Load Balancing in Cloud Computing

Cloud computing refers to operating, organizing, and retrieving services over the Internet; it offers online services, providing computing facilities through the Internet. Cloud facilities permit individuals and industries to use software and hardware that are managed by third parties at remote locations. Instances of cloud services include online file storage, social networking sites, webmail, and online business applications [3]. The cloud computing model permits access to information and computer resources from anywhere a network connection is available. Cloud computing delivers a shared pool of resources, including data storage space, networks, computer processing power, and specialized business and consumer applications.

Load Balancing the Network Traffic in the Nth Mode of IP Tables

Hayian et al. (2009) proposed a multi-ISP load-balancing optimization model based on BP neural networks, in order to avoid manual strategy selection in a campus network served by multiple Internet service providers (ISPs). The paper discusses the establishment of the network topology and gives a detailed description of the data and its processing. The effectiveness of the model was verified by experimental results.


Teleprotection signalling over an IP/MPLS network

As previously stated, in a survey conducted by RAD Communications, close to 50% of respondents had either already started the migration to packetised communications or were going to within the next 24 months. Companies in this position must be fully aware of the functional differences between PSNs and legacy networks and their relevant performance limitations. The key difference between conventional TDM-based communications networks and packet-based networks is the use of the OSI (Open Systems Interconnection) model, which appears as the 7-layer stack discussed earlier in Section 3.1. The stack allows a logical transfer of data from the application (i.e. the protection device functional logic) to the physical medium, being Ethernet (over copper or fibre in the packetised environment). As data packets (formatted units of data) are generated and moved down the stack, a number of headers may be added at each layer to facilitate various functions, ranging from connection between remote and local functions and dependability enhancements (through transport protocols) to routing between devices (seen in the network layer, e.g. IP or Internet Protocol). These headers aren't present in every instance of communications within a PSN, depending on the format and technologies used to implement them. The varying approaches taken in implementing packetised networks present vastly different results regarding performance and capability to support time-critical applications. CIGRE (the International Council on Large Electric Systems) presents various implementations of packetised networks, including (CIGRE – Line and System Protection using digital Circuit and Packet Communications, 2012):
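The header-per-layer behaviour described above can be illustrated with a toy encapsulation function (layer names here are generic examples, not the exact protocol stack of any teleprotection product): each layer prepends its own header before handing the unit to the layer below.

```python
def encapsulate(payload, layers):
    """layers: list of header names ordered from the top of the stack
    (application side) down to the link layer. Returns the on-wire frame."""
    frame = payload
    for hdr in layers:
        frame = f"[{hdr}]{frame}"  # each layer wraps what it received
    return frame
```

The outermost header on the wire belongs to the lowest layer, which is why link-layer framing surrounds the IP header, which in turn surrounds the transport header.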

A Survey of Load Balancing Congestion Control in MANET

Abstract— Mobile Ad hoc Networks (MANETs) form temporary connections and deliver data while nodes are in motion. A MANET is capable of forming a temporary network without the central administration or standard support devices available in a conventional network, thus forming an infrastructure-less network. Congestion, the most common cause of link failure, is brought on by excessive load on the network and further results in node failure and topology changes. Excessive load on nodes causes buffer overflow, which in turn results in packets being dropped. This increases packet delay and degrades the packet delivery ratio of MANET protocols. Load balancing is a way to prevent congestion in the network: if the load is balanced, the network is used effectively, packet delay decreases, and the packet delivery ratio improves. Transferring load from congested routes to less congested routes improves total network performance. Ad hoc On-Demand Multipath Distance Vector (AOMDV) selects a route with a lower hop count and discards routes with higher hop counts. In this paper we present a survey of congestion-control routing approaches that detect and prevent congestion in MANETs, together with a study of the AOMDV routing procedure, congestion, and previous research work.

Performance Evaluation of Traffic Engineering Signal Protocols in IPV6 MPLS Networks

tion Internet Protocol (IP) based backbone networks [4]. MPLS networks can offer the Quality of Service (QoS) guarantees that data transport services like Frame Relay (FR) or Asynchronous Transfer Mode (ATM) switching give, without requiring dedicated lines. MPLS was devised to convert the Internet and IP backbones from best-effort data networks to business-class transport mediums capable of handling traditional real-time services [5]. The initial thrust was to deliver much-needed traffic engineering capabilities and QoS enhancements to the generic IP cloud. The availability of traffic engineering has helped MPLS reach critical mass in terms of service-provider mind share and resulting MPLS deployments. Advantages accrue primarily to the carriers; user benefits include lower cost in most cases, greater control over networks, and more detailed QoS. The constraint-based routing label distribution protocol (CR-LDP) and the resource reservation protocol (RSVP) are the signaling protocols used for traffic engineering. In this paper, a comparative study of the performance of MPLS TE signaling protocols is presented. The paper also shows the performance enhancement of MPLS networks over conventional IP networks: MPLS improves network performance for multimedia-type applications in heavy-load traffic environments. The rest of the paper is organized as follows. In Section 2, a brief review of related work is presented. Section 3 describes traditional IP network and MPLS network operation along with the important terms associated with MPLS. In Section 4, traffic engineering signal protocols of MPLS

SURVEY ON LOAD BALANCING IN MOBILE CLOUD COMPUTING

Mobile devices move from one place to another, so a dynamic cloud-selection algorithm must be used to choose between a cloud and a cloudlet; the mobile device offloads its data to whichever is selected. Cloud load balancing is the process of distributing computing resources and workloads in a cloud computing environment. Load balancing permits enterprises to manage application or workload demands by allocating resources among servers, multiple computers, and networks. Cloud computing supports data sharing and delivers many resources to clients, and a client pays only for the resources actually used. Cloud computing stores data and distributes resources in an open environment, where the amount of stored data rises quickly; load balancing is therefore a main issue in the cloud environment. Load balancing is used to distribute the dynamic workload over multiple systems to ensure that no single system is overloaded. It helps in appropriate utilization of resources and also increases the performance of the whole system.

Load Balancing in Mobile Ad Hoc Networks: A Survey

Pham and Perreau conducted a performance analysis [15], providing some insight into choosing the right trade-off between increased overhead and better performance. A novel end-to-end approach for achieving the dual goals of enhanced reliability under path failures and multi-path load balancing in mobile ad hoc networks (MANETs) is proposed by Argyriou and Madisetti in [16]. The authors of [16] achieved their objective by fully exploiting the presence of multiple paths in MANETs to jointly attack the problems of frequent route failures and load balancing. In [17], Chakrabarti and Kulkarni modified the way alternate routes are constructed, maintained, and used in DSR; in the routing protocol proposed in [17], load is balanced among the alternate routes. The approach in [17] also provides QoS guarantees by ensuring that appropriate bandwidth is available for a flow even when nodes are mobile. Souinli et al. [18] proposed load-balancing mechanisms that push traffic away from the center of the network, providing novel routing metrics that take into account a node's degree of centrality, for both proactive and reactive routing protocols.

Distributed Offline Load Balancing in MapReduce Networks

In this paper we present a distributed algorithm to address the problem of balancing the data workload assigned to mappers in massively large-scale clusters that consist of several tens of thousands of nodes [9]. Our goal is to address the load-balancing problem in heterogeneous networks of clusters before a job commences to process data. In this way, we avoid the increased network communication overhead resulting from balancing the load at run-time, as measured by [6], and allow for advanced data transfers to nodes prior to execution. Our algorithm works in the following way. When a new MapReduce job arrives, the mappers commence to exchange coordinating information locally (with neighboring mappers with which they have a communication link established) about their total job workload demand and capacity, as shown in Figure 1. More specifically, a ratio consensus algorithm is deployed which enables (synchronous and asynchronous) asymptotic convergence to proportional balance among mappers of the data to be processed, in a completely distributed fashion. With proportional balancing, each mapper is assigned workload proportional to its resource availability (which in general could be time-varying). In this way, we expect all mappers to finish processing simultaneously with minor variations, thus preventing increased processing time due to imbalances.
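The ratio consensus idea can be sketched with a small synchronous simulation. This is a minimal illustration under assumed conditions (a ring topology where each mapper splits its values between itself and the next node), not the paper's exact formulation: every node iterates two values, y (initialized to its workload demand) and z (initialized to its capacity), and the ratio y/z converges at every node to total workload divided by total capacity, so each mapper takes on work proportional to its capacity.

```python
def ratio_consensus(workloads, capacities, rounds=200):
    """Simulate synchronous ratio consensus on a directed ring.
    Returns the workload share each node assigns itself."""
    n = len(workloads)
    y, z = list(workloads), list(capacities)
    for _ in range(rounds):
        # each node keeps half its value and receives half from its predecessor
        y = [y[i] / 2 + y[(i - 1) % n] / 2 for i in range(n)]
        z = [z[i] / 2 + z[(i - 1) % n] / 2 for i in range(n)]
    # node i processes capacity_i * (y_i / z_i) units of work
    return [capacities[i] * y[i] / z[i] for i in range(n)]
```

Because the update conserves the sums of y and z, every node's ratio converges to sum(workloads)/sum(capacities), yielding the proportional balance described above without any central coordinator.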

Evaluating the Impact of Routing and Queuing Techniques on the QoS of VoIP Over IP/MPLS Networks

A network that delivers QoS is one that maintains a certain level of delivery for its packets. Ensuring high quality for a voice call over the "best effort" IP network is the crucial task in transporting VoIP traffic. QoS denotes a systematized collection of approaches used to control network bandwidth, end-to-end delay, jitter, and packet loss, in order to achieve a throughput appropriate to real-time multimedia streaming requirements. QoS for VoIP is defined using different parameters; the common factors are end-to-end delay, delay variation (jitter), and packet loss ratio. Hence, the main goal of QoS is to control delay jitter and one-way end-to-end delay and to provide sufficient bandwidth to deliver real-time multimedia traffic (e.g. VoIP) effectively within acceptable delay limits. Delay jitter is defined as the variability of consecutive packet delays within the same packet stream on arrival at the receiving end; the inter-arrival times between successive packets at the receiver cannot exceed a definite value, as this interrupts the continuity of the play-out process through data missing its correct play-time. Packet loss is defined as the number of packets lost during transmission inside the network within a defined time period; packet loss may occur due to network congestion, lower-layer errors, network element failures, or end-application errors. End-to-end delay is defined as the time taken to deliver a packet from the sender to the receiver. For real-time voice conversation (VoIP), end-to-end delays between 150 and 400 ms are acceptable but not ideal. Typically, the receiver of a VoIP call will define a certain threshold (e.g. 400 ms); any packet delayed more than that threshold is discarded [8] [9] [10].
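The three metrics defined above can be computed directly from packet timestamps. The sketch below is an illustrative simplification (timestamps in integer milliseconds, jitter taken as the variation between consecutive inter-arrival gaps rather than the RTP smoothed estimator); the function name and 400 ms default are taken from the example threshold in the text.

```python
def voip_stats(send_times, recv_times, threshold=400):
    """send_times/recv_times: per-packet timestamps in milliseconds.
    Returns (one-way delays, jitter samples, packets over threshold)."""
    delays = [r - s for s, r in zip(send_times, recv_times)]
    inter_arrival = [recv_times[i + 1] - recv_times[i]
                     for i in range(len(recv_times) - 1)]
    # jitter: variability between successive inter-arrival gaps
    jitter = [abs(inter_arrival[i + 1] - inter_arrival[i])
              for i in range(len(inter_arrival) - 1)]
    discarded = sum(1 for d in delays if d > threshold)
    return delays, jitter, discarded
```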

A SURVEY ON LOAD BALANCING ALGORITHMS IN CLOUD COMPUTING

Fig. 2 shows a framework under which load balancing algorithms work in a cloud computing environment. A cloudlet submits tasks to the Job Manager, which passes all jobs to the Load Balancer. The Load Balancer applies a load-balancing algorithm to the submitted tasks and schedules them so that each virtual machine gets an equal number of tasks to execute.
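The dispatch step in that framework can be sketched as a round-robin assignment (a minimal illustration; the function name and task representation are assumptions, and real balancers would also weigh VM capacity):

```python
def balance(tasks, num_vms):
    """Distribute tasks round-robin so each VM gets an equal share
    (within one task)."""
    vms = [[] for _ in range(num_vms)]
    for i, task in enumerate(tasks):
        vms[i % num_vms].append(task)
    return vms
```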


A Survey on Load Balancing Algorithms in Cloud Environment

Existing load-balancing algorithms have some drawbacks in improving the overall performance of the cloud environment: nodes still become overloaded, and managing the entire cloud environment is very difficult. Hence the proposed idea is to divide the entire cloud environment into several partitions based on geographical location [1]. The load-balancing algorithm can then be applied to individual partitions rather than to the entire cloud. Fig 2 shows the cloud environment after partitioning is done; the load-balancing algorithm is applied to each partition in order to avoid overloading of nodes.
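A minimal sketch of the partitioned approach from [1], with assumed names and data layout: nodes are grouped by geographical partition, and the balancer scans only the requester's partition instead of the whole cloud.

```python
def least_loaded(partitions, region):
    """partitions: dict region -> {node_name: current_load}.
    Pick the least-loaded node within one partition only."""
    nodes = partitions[region]
    return min(nodes, key=nodes.get)
```

The design benefit is that the per-request decision cost scales with partition size, not cloud size.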

A Survey on Load Balancing Challenges In Cloud Environment

Equally spread current execution load: this algorithm requires a load balancer that monitors the jobs submitted for execution. The load balancer queues the jobs and hands them over to different virtual machines. The balancer checks the queue frequently for new jobs and then allots them to the list of free virtual servers. The balancer also maintains a list of the tasks allotted to each virtual server, which helps it identify which virtual machines are free and can be allotted new jobs. The experimental work for this algorithm was performed using the CloudAnalyst simulation. As the name suggests, the algorithm works by spreading the execution load equally across the virtual machines.

Survey of Load Balancing Methods in Cloud Computing

balancing algorithm based on ant colony optimization and complex network theory (ACLB) in an open cloud federation. This method applies the small-world and scale-free characteristics of a complex network to achieve stronger load balancing. The technique overcomes heterogeneity, adapts to dynamic environments, has good fault tolerance, is very stable, and enhances the overall performance of the system.


A Survey on Load Balancing Techniques in Cloud Computing

Markus Esch and Eric Tobias addressed a concept for a self-organized load-balancing scheme. They provide an algorithm for the decentralized construction of a scale-free link structure interconnecting network nodes. In this load-balancing scheme, the HyperVerse infrastructure relies on a two-tier architecture whose real backbone is a set of public servers, loosely interconnected in P2P overlays and used for torrent-based data distribution. For self-organized load balancing of server nodes, the world surface is divided into small cells, each managed by public servers. Machines are assigned according to how crowded an area is: highly crowded areas get more, and more powerful, machines, while less crowded areas get fewer. Through this self-organizing behavior, the positions of servers are managed virtually. A scale-free link-structure algorithm links the public servers in a connected fashion: a new node connects to an existing node with a predefined probability, so that global load can be handled. It is thus an adaptive, self-organizing method for load balancing using a cell-division scheme managed by a global server, and it enables fast routing in the scale-free network. The main advantages of this method are a short average path length and resilience against random node failures; it also supports fault tolerance.
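Decentralized construction of a scale-free link structure is commonly done by preferential attachment (Barabási–Albert style), which the described scheme resembles; the sketch below is that generic technique, not Esch and Tobias's exact algorithm. A new server attaches to an existing one with probability proportional to that server's current degree, which is what yields the scale-free structure and short average path lengths mentioned above.

```python
import random

def add_server(edges, new_node, rng=random):
    """edges: list of (u, v) links. Attach new_node to one existing node,
    chosen with probability proportional to its degree (each node appears
    in the endpoint list once per incident link)."""
    endpoints = [n for e in edges for n in e]
    target = rng.choice(endpoints)
    edges.append((new_node, target))
    return target
```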

Review on QoS Improvement with MPLS Mechanism in NGN

Greater QoS requires not only conveying increasing amounts of traffic but also conveying it as quickly as possible, and that calls for proper management of resources. Over the last several years, various traffic-engineering methods for IP networks have been specified. A. Saika and R. E. Kouch in [8] proposed Multi-Protocol Label Switching (MPLS), with its supporting platform the IP Multimedia Subsystem (IMS), to provide end-to-end QoS guarantees. MPLS networks require dynamic flow admission control to guarantee end-to-end quality of service (QoS) for each Internet Protocol (IP) traffic flow. Desire Oulai in [9] proposed to tackle the joint routing and admission control problem for IP traffic flows in MPLS networks without rerouting already admitted flows, presenting two mathematical programming models: the first includes end-to-end delay constraints, the second end-to-end packet-loss constraints. The objective function of both models is to minimize the end-to-end delay for the new flow. Results showed that the mean end-to-end delay is significantly reduced.
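The admission-control idea can be illustrated with a much simpler heuristic than the programming models of [9] (this is an assumed sketch, not their formulation): compute the minimum-delay path for the new flow and admit it only if that path meets the end-to-end delay bound, leaving already-admitted flows untouched.

```python
import heapq

def admit_flow(links, src, dst, max_delay):
    """links: dict node -> list of (neighbor, delay). Returns the
    minimum-delay path if it meets max_delay, else None (flow rejected)."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:  # Dijkstra over link delays
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in links.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dist.get(dst, float('inf')) > max_delay:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]
```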
