, where is the video payload bit rate for each bit-plane and the code rates are determined for the UPP_B protection strategy as previously explained. For the UPP_A case, where the entire FGS layer is unprotected, no channel coding is added and thus , , and there is no overhead due to channel coding. To perform a fair comparison between UPP_A and UPP_B, the average packet-loss probability for UPP_B, i.e., , should be equal to the packet-loss probability of UPP_A. The PSNR plots for the UPP_A and UPP_B FGLP cases are portrayed in Fig. 12. The plots of Fig. 12 have been determined for a BOP of , a probability of having one or more unrecoverable lost packets within the most significant bit-plane, and an average packet-loss probability . (These parameters have been chosen according to typical packet-loss rates over the Internet.) Due to the random nature of Internet packet losses and their impact on compressed video, 50 different runs of the experiment were conducted for each FGS stream at each tested bit rate. The results portrayed in Fig. 12 show that FEC is indeed a useful mechanism for achieving FGLP within the enhancement layer. The FEC-based FGLP strategy is especially useful for increasing the robustness of FGS streams under high packet-loss rates. It is important to note that an improved PSNR performance can be obtained by determining the FEC codes that lead to the best overhead-versus-robustness trade-off. However, the best trade-off needs to be determined individually for a particular transmission bit rate and average packet-loss rate, and is beyond the scope of this paper.
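The overhead-versus-robustness trade-off can be illustrated numerically. Assuming an (n, k) erasure code that recovers any pattern of up to n − k lost packets within a BOP, and independent packet losses with probability p (an idealisation of Internet loss behaviour; the specific parameters below are illustrative, not the paper's), the residual probability of an unrecoverable loss is a binomial tail:

```python
from math import comb

def residual_loss_prob(n: int, k: int, p: float) -> float:
    """Probability that more than n - k of n packets are lost,
    i.e. an (n, k) erasure code cannot recover the block."""
    t = n - k  # maximum number of recoverable losses
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(t + 1, n + 1))

def overhead(n: int, k: int) -> float:
    """Channel-coding overhead relative to the payload."""
    return (n - k) / k

# Stronger codes cut the residual loss at the cost of overhead.
for k in (30, 28, 26):
    print(k, overhead(32, k), residual_loss_prob(32, k, 0.05))
```

Sweeping k (or p) this way is one simple route to locating the best trade-off for a given transmission bit rate and average loss rate.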
Another approach to improving the proposed unequal error protection is to send feedback about the current channel packet-loss rate to the Pseudo Wyner-Ziv encoder, so that the amount of parity bits needed for correcting the corrupted slices at the decoder can be adjusted accordingly . This approach is referred to as feedback-aided unequal error protection (FBUEP). At the decoder, the current packet-loss rate is estimated from the received data and sent back to the Pseudo Wyner-Ziv encoder via the real-time transport control protocol (RTCP) feedback mechanism. This information is used by the Turbo encoders to update the parity data rates of the motion information and the transform coefficients, which are still protected independently. At the Wyner-Ziv decoder, the received parity bits, together with the side information from the primary decoder, are used to decode and restore corrupted slices. These, in turn, are sent back to the primary decoder to replace their corrupted counterparts. Note that simply increasing the parity bits when the packet-loss rate increases is not viable, since it would exacerbate network congestion . Instead, the total transmission rate should be kept constant, which means that when the packet-loss rate increases, the primary data rate should be lowered in order to free more bits for parity transmission.
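The constant-budget rate split described above can be sketched as follows; the linear parity-vs-loss rule and its tuning constant are assumptions for illustration, not the scheme's actual rate-allocation law:

```python
def split_rates(total_kbps: float, loss_rate: float,
                parity_per_loss: float = 8.0) -> tuple:
    """Split a fixed transmission budget between primary data and
    Wyner-Ziv parity bits.  The parity share grows with the reported
    loss rate (parity_per_loss is a hypothetical tuning constant),
    so the primary rate shrinks to keep the total constant and
    avoid adding to congestion."""
    parity = min(total_kbps * loss_rate * parity_per_loss,
                 0.5 * total_kbps)  # cap the parity share
    return total_kbps - parity, parity

# A higher RTCP-reported loss rate -> more parity, less primary data.
print(split_rates(1000.0, 0.01))  # → (920.0, 80.0)
print(split_rates(1000.0, 0.05))  # → (600.0, 400.0)
```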
Parameter-level concealment methods estimate the parameters in a G.729 packet using interpolation, prediction, or redundant information. The most straightforward method is packet copy, which simply substitutes the most recently received packet for the lost one. Wang and Gibson  proposed a method that interpolates the LSF parameters in a G.729 packet to enhance the recovered speech quality. Other methods transmit redundant information with a packet to enhance the recovered speech. Most such methods use error-correction codes such as Reed-Solomon codes . To reduce the amount of redundant bits, unequal error protection (UEP) [11, 12] applies error-correction coding only to the part of a packet, or the subset of packets, that has a relatively large effect on the recovered speech quality. UEP-based error concealment methods work well, but error protection based on error-correction codes fails entirely once the packet-loss rate exceeds the correction limit of the code, causing a sudden and drastic degradation of speech quality.
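The difference between packet copy and interpolation-based concealment can be sketched for a generic parameter vector. This is only an illustration: actual G.729 LSF quantisation and the Wang-Gibson method are considerably more involved.

```python
def conceal_copy(prev):
    """Packet copy: reuse the parameters of the last received packet."""
    return list(prev)

def conceal_interp(prev, nxt, alpha=0.5):
    """Linear interpolation between the parameter vectors of the
    packets surrounding the loss (usable when the next packet has
    already arrived, e.g. from a jitter buffer)."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(prev, nxt)]

prev_lsf = [0.10, 0.25, 0.40]  # hypothetical LSF-like parameters
next_lsf = [0.14, 0.29, 0.44]
print(conceal_copy(prev_lsf))
print(conceal_interp(prev_lsf, next_lsf))
```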
Several studies have modeled the video deterioration caused by packet loss for ULP. These studies characterize the deterioration at the macroblock level within a frame, across a GOP. ULP studies based on video-quality modeling use the PSNR as the criterion of video quality . Considering the characteristics of wireless networks, calculating the PSNR for each frame is more effective than other methods for assessing video quality, but it requires a large number of calculations. To overcome this problem, we model the video deterioration experimentally and use this model as the criterion of the ULP algorithm.
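For reference, the per-frame PSNR criterion mentioned above can be computed as follows (shown here for flat pixel lists; real implementations operate on full frame arrays):

```python
import math

def psnr(original, degraded, peak=255.0):
    """Per-frame PSNR in dB between two equally sized pixel sequences:
    PSNR = 10 * log10(peak^2 / MSE)."""
    mse = sum((o - d) ** 2 for o, d in zip(original, degraded)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(peak * peak / mse)

# A small distortion on an 8-bit frame:
ref = [100, 120, 140, 160]
deg = [101, 119, 141, 159]
print(round(psnr(ref, deg), 2))  # MSE = 1 → 10*log10(65025) ≈ 48.13
```

The quadratic MSE inside the logarithm is what makes per-frame PSNR expensive at scale: every pixel of every frame must be touched.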
routed to the PAR for buffering while the link to the PAR is already broken. These packets may be dropped by the PAR. Thus, in the existing MIPv6 architecture, there is a high probability of packet loss during handover after the MN disconnects from the previous network. In the proposed architecture, by contrast, we suggest buffering packets for a time interval greater than the threshold value, which equals the handover latency, so the problem of real-time packet loss due to handover latency is resolved completely. Thus,
Quality comparison is depicted in Fig. 6 for conventional single-path routing (voice packets sent over Path 1) and for the round-robin, random, adaptive random, and redundant strategies. Based on the loss pattern specified in Table 2, the round-robin and random strategies achieve similar quality scores (random offers slightly better quality), whereas the adaptive and redundant strategies are significantly better than both. The good quality score of the adaptive strategy is directly related to the large PLR difference between the two paths. The redundant strategy outperforms the adaptive one in quality by almost completely eliminating loss bursts, leaving the remaining voice gaps to be concealed by PLC. Comparing the adaptive random and redundant strategies shows a MOS difference of 0.21, which is the effect of the relation between MOS and packet loss, since for lower P_loss
Long Term Evolution (LTE) technology has become the target for most wireless operators moving towards Fourth Generation (4G) deployments. As user demand for mobile broadband services continues to rise, LTE and its ability to cost-effectively provide very fast, highly responsive mobile data services will become ever more important. For many operators, LTE represents a significant shift from legacy mobile systems: it is the first all-Internet Protocol (IP) network technology and will change the way networks are designed, deployed, and managed. LTE uses Orthogonal Frequency Division Multiple Access (OFDMA) and advanced antenna techniques to maximize the efficient use of radio-frequency spectrum, with a purely Packet-Switched (PS) EPC core network. In addition, the transition to IP has enabled LTE to support Quality of Service (QoS) for real-time packet data services such as VoIP and video conversation. The overall goal of 4G systems is to provide a converged network compatible with the Next Generation Network (NGN).
Recently, the widespread availability of wireless communications has led to the growth and significance of wireless Mobile Ad hoc Networks (MANETs). Among the routing-layer attacks, packet dropping is one of the most disruptive threats in MANETs. Malicious nodes can camouflage themselves under harsh channel conditions, which reduces the detection accuracy of conventional secure routing protocols. In such circumstances, observing the packet-loss rate alone is not adequate to accurately identify the exact cause of a packet loss. This paper proposes a Cross-layer based distributed and cooperative Intrusion Detection System (IDS) with Dempster-Shafer evidence theory (CID) to accurately discern and eradicate intruders using cross-layer information. The CID system comprises a local detection engine and an IDS. The local detection engine continuously monitors network activity and differentiates packet loss due to harsh channel conditions from malicious dropping using features of the physical, MAC, and network layers. When the local detection engine detects malicious activity, it activates the IDS in that node. The IDS utilizes Dempster-Shafer (DS) evidence theory to collect evidence only from trustworthy nodes and provides a mathematical way to merge the evidence with the direct trust value in confirming malicious activity. Finally, the proposed CID system is integrated with the AODV routing protocol and evaluated under malicious network traffic. The simulation results show that the CID system outperforms the existing EAACK in terms of attack detection accuracy and network lifetime.
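The evidence-merging step can be illustrated with Dempster's rule of combination. This is a generic sketch of the rule over a two-hypothesis frame (malicious/benign), not the CID system's exact fusion with trust values:

```python
def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination for two mass functions whose
    focal elements are frozensets; frozenset() denotes the empty set."""
    raw = {}
    conflict = 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + p * q
            else:
                conflict += p * q  # mass assigned to disjoint hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in raw.items()}

M = frozenset({"malicious"})
B = frozenset({"benign"})
U = M | B                      # ignorance (either hypothesis)
m1 = {M: 0.6, U: 0.4}          # evidence from monitoring node 1
m2 = {M: 0.7, B: 0.1, U: 0.2}  # evidence from monitoring node 2
print(combine(m1, m2))         # belief in "malicious" is reinforced
```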
Packet loss is a major issue in networking environments, as users contend for the same resources concurrently. It is therefore imperative to avoid an excessive rate of loss during the transmission of data from senders to receivers. Moreover, once packets are lost before reaching their destination, the resources invested in them are wasted. Congestion occurs when the space available for holding data before it is forwarded to its destination is too small for the task in a network. Data loss results from, but is not limited to, poor signal strength at the terminus, natural or human interference, excessive noise, hardware failure, software faults, overloaded network nodes, and the protocol in use . Control can be successfully attained by distributing the traffic across several connections in a network. In networks, congestion brings about all-inclusive
Figure 8 shows the analysis of transmission delay over the network. As can be seen, the packet transmission delay is reduced in this work, and the throughput is improved accordingly. Performance is measured in terms of dropped packets, packet transmissions, packet-loss rate, bit rate, number of bytes transmitted, and packet delay.
Second, ESM uses the normal data-centre network topology for multicast routing design. It is later shown that the Steiner-tree algorithm  is too slow for large networks; to accelerate tree calculation, ESM leverages the hierarchical, regular data-centre topology. The design of a loop-free Bloom-filter-based multicast forwarding engine in ESM depends on the topological characteristics of data-centre networks. Third, it is observed that small groups dominate in data centres, while large groups cause significant bandwidth waste in in-packet Bloom-filter-based multicast routing. For the limited number of large groups, multicast routing entries are installed in the switches to avoid traffic leakage. For the large volume of small groups, in-packet Bloom filters are used to eliminate the need for in-switch routing-table space.
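The in-packet Bloom-filter mechanism can be sketched as follows: the sender encodes the multicast tree's link identifiers into a filter carried in the packet header, and each switch forwards on a link only if the filter contains it. The filter size, hash count, and link-ID format below are illustrative assumptions, not ESM's actual parameters:

```python
import hashlib

def _hashes(item: str, m: int, k: int):
    """k bit positions in an m-bit filter, derived from SHA-256."""
    for i in range(k):
        h = hashlib.sha256(f"{i}:{item}".encode()).digest()
        yield int.from_bytes(h[:4], "big") % m

def make_filter(links, m=256, k=4) -> int:
    """Encode the multicast tree's link IDs into a Bloom filter."""
    bf = 0
    for link in links:
        for pos in _hashes(link, m, k):
            bf |= 1 << pos
    return bf

def should_forward(bf: int, link: str, m=256, k=4) -> bool:
    """A switch forwards on a link iff all of the link's bits are set.
    False positives are possible and cause traffic leakage, which is
    why large groups use in-switch entries instead."""
    return all(bf >> pos & 1 for pos in _hashes(link, m, k))

tree = ["s1->s2", "s1->s3", "s3->h7"]  # hypothetical tree links
bf = make_filter(tree)
print([should_forward(bf, link) for link in tree])
```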
Figure 1 depicts the architecture of TCP video streaming. The video server splits the video into streams, which are then transmitted over TCP to the clients. There are 'n' clients connected to the Internet, each with a different bandwidth limit. The challenge is to stream video efficiently with minimal loss and delay.
All these techniques have been thoroughly investigated, and it is found that most of the schemes are restricted to specific applications or networks rather than offering a flexible solution that properly controls the network traffic volume. The conventional schemes discussed in the literature cannot be considered fully preventive for reducing burst dropping: instead of avoiding loss in advance, they adjust the burst sending rate only after contention or burst loss has occurred. Thus, to improve the burst-loss performance of optical networks, we need to develop an efficient deflection-routing contention-control method in which network overloads are minimized or eliminated to maximize throughput.
As is well known, the information structure is important in the study of the collective dynamics of a group of agents. In the original C-S model, it is assumed that each agent can sense the distance between itself and every other agent, and the weights are determined on the basis of this assumption. In a practical environment, however, this assumption is unrealistic. For example, flocking birds may encounter a natural enemy, in which case information transmission between them is interrupted. We call this phenomenon information packet loss. In [16, 19], the authors investigated the flocking behavior when the connections fail to some degree. A binary-valued random process satisfying some additional condition is usually applied to describe the uncertainty in the transmission of information; for example, a Bernoulli random process was used in , while a random graph model was used in . Studying the information structure in the Cucker-Smale model is therefore an important and valuable topic from both theoretical and practical points of view.
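A Bernoulli packet-loss process in the C-S model can be simulated with a simple numerical sketch. The 1-D setting, step size, and loss probability below are illustrative assumptions; the cited works analyse convergence rigorously rather than by simulation:

```python
import random

def cs_step(x, v, dt=0.05, loss=0.3, beta=0.5):
    """One Euler step of a 1-D Cucker-Smale model in which each
    pairwise communication link independently fails with probability
    `loss` (a Bernoulli packet-loss process)."""
    n = len(x)
    dv = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i != j and random.random() >= loss:  # link survives
                w = 1.0 / (1.0 + (x[i] - x[j]) ** 2) ** beta
                dv[i] += w * (v[j] - v[i]) / n
    x2 = [x[i] + dt * v[i] for i in range(n)]
    v2 = [v[i] + dt * dv[i] for i in range(n)]
    return x2, v2

random.seed(0)
x, v = [0.0, 1.0, 2.0], [1.0, -1.0, 0.5]
for _ in range(2000):
    x, v = cs_step(x, v)
spread = max(v) - min(v)
print(spread)  # velocities align despite the lossy links
```

Even with 30% of the links failing at every step, the surviving interactions still drive the velocities toward consensus, which matches the intuition behind the robustness results of [16, 19].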
Abstract—Concerning video transmission on the Internet, we present a model for estimating the subjective quality from objective measurements at the transmission receivers and on the network. The model reflects the quality degradation caused by parameters such as packet-loss ratio and bit rate, and is calibrated using prerecorded results from subjective quality assessments. Besides the model and its calibration, the main achievement of this paper is the model's validation by implementation in a monitoring tool. The tool can be used by content and network providers to swiftly localise the causes of poor quality of experience (QoE). It can also help content providers make decisions regarding the adjustment of vital parameters, such as encoding bit rate and error-correction mechanisms. We show how the estimated subjective service quality can be applied to decision making in content delivery networks that consist of overlay networks and multi-access networks.
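The general shape of such a mapping from objective measurements to a subjective score can be sketched with a commonly used exponential form (QoE decaying exponentially with loss, saturating with bit rate). All coefficients below are illustrative placeholders, not the paper's calibrated values:

```python
import math

def estimate_mos(loss_ratio: float, bitrate_kbps: float,
                 alpha=3.5, beta=40.0, gamma=1.0, b0=400.0) -> float:
    """MOS-like score in [1, 5]: exponential decay with the packet-loss
    ratio, scaled by a saturating bit-rate term.  alpha, beta, gamma,
    and b0 are hypothetical coefficients that a real model would fit
    against subjective-assessment data."""
    rate_term = 1.0 - math.exp(-bitrate_kbps / b0)
    mos = gamma + alpha * math.exp(-beta * loss_ratio) * rate_term
    return max(1.0, min(5.0, mos))

print(estimate_mos(0.00, 2000.0))  # near the top of the scale
print(estimate_mos(0.05, 2000.0))  # heavy loss -> much lower score
```

A monitoring tool evaluating such a formula at the receiver needs only the loss ratio and the current bit rate, which is what makes swift, in-network QoE localisation feasible.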
The time-scale modification (TSM) approach can be used to overcome the quality degradation caused by packet-loss regions. To conceal a lost packet, the packets before the missing one are extended, and the time-scale modification must preserve the pitch frequency of the speech signal. The overlap-and-add (OLA) method is the precursor of nearly all TSM algorithms. The OLA algorithm does not analyse the content of the input signal; it simply overlaps and adds segments. The synchronous overlap-and-add (SOLA) algorithm is a modification of OLA, but SOLA does not maintain maximum local similarity. WSOLA is a technique that ensures the signal continuity at segment joins that existed in the original signal . The proposed work therefore emphasizes using the waveform-similarity overlap-and-add (WSOLA) technique to conceal lost packets, since WSOLA gives better-quality output than other time-scale modification algorithms . The gain-controlled waveform-similarity overlap-and-add (GWSOLA) algorithm is a modification of WSOLA: its gain-control mechanism adjusts the level of the audio segments to be overlap-added so that the audio signal level remains consistent .
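The core WSOLA idea, searching near the nominal analysis position for the segment most similar to the natural continuation of the previous frame, can be sketched in pure Python. This is a minimal illustration with assumed frame/tolerance parameters; a real concealment system would operate on jitter-buffer contents and add gain control as in GWSOLA:

```python
import math

def wsola(x, stretch=1.5, frame=256, tol=64):
    """Minimal WSOLA time stretcher for a mono sample list.  For each
    synthesis frame it searches +/- tol samples around the nominal
    analysis position for the segment most similar (by correlation)
    to the natural continuation of the previous frame, then
    overlap-adds with a Hann window, which tends to preserve pitch."""
    hop = frame // 2
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / frame) for n in range(frame)]
    out = [0.0] * (int(len(x) * stretch) + frame)
    pos = 0                                # previous analysis position
    for m in range(0, len(out) - frame, hop):
        target = int(m / stretch)          # nominal analysis position
        best, best_score = target, float("-inf")
        nat = pos + hop                    # natural continuation
        for d in range(-tol, tol + 1):
            cand = target + d
            if cand < 0 or cand + frame > len(x) or nat + frame > len(x):
                continue
            # coarse similarity: downsampled cross-correlation
            score = sum(x[cand + n] * x[nat + n] for n in range(0, frame, 4))
            if score > best_score:
                best, best_score = cand, score
        pos = best
        for n in range(frame):
            if pos + n < len(x):
                out[m + n] += win[n] * x[pos + n]
    return out

# Stretch a 440 Hz tone (8 kHz sampling) by 1.5x to fill a loss gap.
tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(2000)]
stretched = wsola(tone)
print(len(tone), len(stretched))
```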
While the protocol’s mobility management procedure enables session continuity, it suffers from three major performance limitations caused by handover delay: packet loss, service disruption, and throughput degradation, as evident from the work of . The delay is mostly unavoidable because of the three required handover procedures – movement detection, duplicate address detection, and care-of-address configuration – as the MN reconnects to another access link using either its active interface or a second interface. Although the data session is not broken, packets sent while the MN switches to a new point of attachment (PoA) are dropped by the previous access router (AR), since it has no way of knowing where the MN has moved. A number of works have confirmed the negative impact of this packet loss and throughput degradation on applications. According to Biernacki , as little as 0.1% packet loss can cause TCP throughput to oscillate, which in turn affects the quality of video perceived by a user. Gorius et al.  reported that 0.5% loss can result in up to a 25% reduction in throughput. While TCP has mechanisms to detect disconnection and slow down or stop sending packets when acknowledgements are not received, UDP-based applications will normally continue to send datagrams oblivious to the state of the end-to-end link. We argue that redirecting these packets to a buffer node and forwarding them to the MN after handover will mitigate the impact of the disconnection on running applications.
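The redirect-and-forward behaviour argued for above can be sketched as a simple buffer node. The class name, queue bound, and flush-on-binding-update logic are illustrative assumptions, not the proposal's actual design:

```python
from collections import deque

class BufferNode:
    """Redirect point that queues packets while the MN is detached
    and flushes them in order once the new point of attachment
    becomes known (e.g. via a binding update)."""
    def __init__(self, maxlen=1000):
        self.queue = deque(maxlen=maxlen)  # drop-oldest on overflow
        self.mn_address = None             # unknown during handover

    def receive(self, packet):
        if self.mn_address is None:
            self.queue.append(packet)      # MN detached: buffer
            return []
        return [(self.mn_address, packet)] # MN attached: forward

    def attach(self, new_address):
        """Called when the MN re-attaches; flush buffered packets."""
        self.mn_address = new_address
        flushed = [(new_address, p) for p in self.queue]
        self.queue.clear()
        return flushed

node = BufferNode()
node.receive("pkt1")                 # buffered during handover
node.receive("pkt2")
print(node.attach("2001:db8::7"))    # flushed after re-attachment
print(node.receive("pkt3"))          # later packets pass straight through
```

Bounding the queue matters: buffering for longer than the handover latency protects real-time UDP flows, but an unbounded buffer would simply delay stale datagrams that the application can no longer use.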
A Mobile Ad hoc Network (MANET) is an interconnection of autonomous mobile nodes by wireless links, forming a dynamic topology and providing multi-hop communication without much physical network infrastructure such as routers, servers, access points, or cables, and without centralized administration. Each mobile node acts as both a router and a host. The properties of these networks make them highly desirable in war zones, disaster recovery, aircraft and marine communications, industrial settings, homes, and other scenarios. The issues of MANETs [1,2,3] are: (i) unpredictable link properties that lead to packet collisions and signal-propagation problems; (ii) node mobility, which creates a dynamic topology; (iii) the limited battery life of mobile devices; (iv) hidden- and exposed-terminal problems, which occur when the signals of two nodes collide; (v) difficult route maintenance, owing to the changing behavior of the communication medium; and (vi) the lack of security at MANET boundaries, which leads to attacks such as passive eavesdropping, active