An Inter-Frame De-Jittering Scheme for Video Streaming over Mobile Communication Networks
Tsang-Ling Sheu and Po-Wen Lee Department of Electrical Engineering
National Sun Yat-Sen University Kaohsiung, Taiwan sheu@ee.nsysu.edu.tw
Abstract: - In a mobile communication network, jitter may exist among video frames when network
congestion occurs during the transmission from a video server to the last-hop base station (BS). In this paper, we propose an inter-frame de-jittering (IFDJ) scheme, containing two waiting models: static waiting (SW) and dynamic waiting (DW). SW can restore the original frame gap at BS by gathering payloads belonging to the same video frame. However, queuing length in BS could rapidly increase. Alternatively, DW can adaptively adjust the waiting time of a video frame according to the feedback queuing lengths averaged by a mobile station (MS). We perform NS-2 simulation to evaluate IFDJ. Simulation results show that DW not only minimizes the inter-frame jitter, but it also significantly decreases the total queuing length in BS and MS. Additionally, the total packet loss rate is greatly reduced.
Key-words:- jitter, PLR, video streaming, mobile networks, NS-2.
1 Introduction
The increasing popularity of video on demand, such as YouTube, and video conferencing, such as Skype, over mobile communication networks has prompted researchers to focus more on how to improve video quality at a mobile station (MS). When a video stream is delivered from a video server to its mobile receiver [1], network congestion could occur before the stream reaches its last hop, the base station (BS). Network congestion may significantly increase video frame jitter (VFJ), which in this paper is defined as the time difference of the inter-frame gap between the sender and the receiver. Increasing VFJ may cause buffer overflow or underflow at a mobile receiver. Consequently, the presented video quality could be seriously degraded.
Previous research on improving video quality over wireless and mobile communication networks [2, 3] covers three aspects: resource allocation in OFDMA, scheduling of video frames, and de-jittering of a video stream. The first aspect focuses on the allocation of resource blocks (RB) in OFDMA.
For example, given a GOP (Group of Pictures) and a picture size, Abdennebi, et al. [4] proposed a traffic prediction model with which an adequate number of RBs can be reserved; yet, the authors did not consider delay constraints and video frame jitter. To effectively reduce the waiting time in a BS, Han, et al. [5] and Chen, et al. [6] proposed a weighted deadline model, which approximately computes the number of downlink OFDMA bursts required for an I-frame. Huang, et al. [7] proposed a frame-based adaptive bandwidth allocation, which utilizes weighted fair queuing to calculate how many OFDMA frames are required for one GOP. However, buffering one GOP in a BS may consume too much storage. In the second aspect, Wang, et al. [8] proposed a priority-based EDF (Earliest Deadline First) algorithm, which intelligently assigns different weights to I, B, and P frames, respectively. Similarly, Esmailpour, et al. [9] combined EDF and WFQ (weighted fair queuing), which dynamically allocates bandwidth to video streams of different priorities. Since the channel quality at an MS may change frequently with location and time, a cross-layer design, which considers both RB allocations
and video frame scheduling, was proposed by Haghani, et al. [10]. Similarly, to prevent the dropping of I-frames, Liang, et al. [11] developed a two-tier scheduler and a bucket-based burst allocator to minimize the overhead of allocating RBs to real-time and non-real-time traffic. The third aspect studies how to effectively minimize jitter between a video server and a mobile receiver over a wireless mobile network. For example, to minimize jitter among video frames, Khan, et al. [12] proposed a congestion notification scheme with which an MS can dynamically adjust its decoding buffer to adapt to its transmission bit rate. Huang, et al. [13] proposed a temporal presentation length (TPL) based on the historical delay and the average jitter at an MS. From the TPL, an MS can determine whether or not a de-jittering process should be invoked.
Unlike the previous work, in this paper we propose an inter-frame de-jittering (IFDJ) scheme for a wireless mobile network. The proposed IFDJ compares the performance of two different waiting models: static waiting (SW) and dynamic waiting (DW). In SW, a base station can restore the original video frame gap by gathering the payloads belonging to the same video frame. However, the time spent counting the payloads in the BS could rapidly increase the queuing length. As an alternative, we propose the DW model, which can adaptively adjust the waiting time of a video frame at the BS through the feedback queuing lengths averaged by an MS.
The remainder of this paper is organized as follows. In Section 2, we introduce the video frame jitter incurred in a wireless mobile network. In Section 3, two different waiting models, the SW and the DW, are introduced. In Section 4, comprehensive simulation using NS-2 is performed and the simulation results are discussed. Finally, this paper is concluded in Section 5.
2 Video Frame Jitter in a Mobile Network
Figure 1. Video streaming on a mobile network

As illustrated in Figure 1, multiple video streams (stream 1 to stream n) are requested from a video server by multiple mobile stations (MS 1 to MS n), respectively. The requested video streams are delivered through an Internet cloud and finally reach a mobile communication network (MCN). A last-hop base station (BS) in the MCN then allocates resource blocks (RBs) for these video streams from an OFDMA (orthogonal frequency-division multiple access) downlink frame. Over the wireless link, stream 1 to stream n are received and decoded by MS 1 to MS n, respectively.
Figure 2. Video frame jitter (VFJ)

A video stream, when it is delivered over an Internet cloud, may not be decoded very smoothly on an MS. In other words, network congestion incurred unexpectedly in a router could seriously degrade video quality during
playback. A video stream consists of a sequence of GoPs (groups of pictures), and each GoP has three different picture types: the I-frame, B-frame, and P-frame. The time gap between any two adjacent frames is the reciprocal of FPS (frames per second). As shown in Figure 2, when the IP packets of a video stream arrive at an MS, they are reassembled into a sequence of video frames according to their timestamps. However, due to network queuing and congestion, the inter-arrival time of those arriving IP packets may not be constant. In this paper, we define video frame jitter (VFJ) as the time difference of the inter-frame gap between the sender and the receiver. For example, from Figure 2, we observe that VFJ is equal to zero between video frames n and n+1, while VFJ is greater than zero between video frames n+1 and n+2. When VFJ is not equal to zero, some video frames may not be decoded and presented at the rate of 1/FPS. Consequently, the decoding buffer could underflow or overflow. Buffer underflow will freeze a picture for a while, and overflow will skip some pictures.
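To make the definition concrete, the following sketch (not part of the IFDJ implementation; the function name and timestamps are hypothetical) computes VFJ for frame n from per-frame send and receive timestamps:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// VFJ(n): receiver inter-frame gap minus sender inter-frame gap
// between frames n and n+1, as defined above.
double vfj(const std::vector<double>& send_ts,
           const std::vector<double>& recv_ts, int n) {
    double sender_gap   = send_ts[n + 1] - send_ts[n];
    double receiver_gap = recv_ts[n + 1] - recv_ts[n];
    return receiver_gap - sender_gap;
}
```

For a 30-FPS stream the sender gap is 1/30 sec; if queuing stretches the receiver gap to 0.05 sec, VFJ is about 0.0167 sec, so the frame misses its 1/FPS presentation slot.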
3 Inter-Frame De-Jittering
Figure 3. Functional blocks of IFDJ at BS and MS

To minimize VFJ, we propose an inter-frame de-jittering (IFDJ) scheme for real-time video streaming over a mobile communication network. The functional blocks of IFDJ at BS and MS are illustrated in Figure 3. Basically, IFDJ consists of three major functions: en-queue process, OFDMA scheduling, and queue-lengths feedback. The first two functions are
performed by a BS and the third one is performed by an MS.
3.1 En-queue Process
En-queue process (EQP) is in charge of computing three parameters for a video stream: original gap (OG), instantaneous gap (IG), and frame size (FS). As shown in Eq. (1), OGi,
where i denotes a stream ID, is the original inter-frame gap when a stream is delivered by a video server. It can be calculated from FPS.
OGi = 1/FPSi,  i = 1 to n        (1)

The second parameter, IGi,j, where i is the stream ID and j is the frame sequence number, represents the inter-frame gap when a video stream arrives at a BS. IGi,j can be calculated
from Eq. (2), where EQTi,j denotes the en-queue
time of frame j in video stream i.
IGi,j = EQTi,j − EQTi,j−1        (2)
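As a minimal sketch of Eqs. (1) and (2) (the helper names are hypothetical; the actual BS-side code appears in Figure 4):

```cpp
#include <cassert>
#include <cmath>

// Eq. (1): OG_i = 1/FPS_i, the original inter-frame gap at the server.
double original_gap(double fps) { return 1.0 / fps; }

// Eq. (2): IG_{i,j} = EQT_{i,j} - EQT_{i,j-1}, the gap observed at the BS
// from the en-queue times of two consecutive frames of the same stream.
double instantaneous_gap(double eqt_curr, double eqt_prev) {
    return eqt_curr - eqt_prev;
}
```

A short gap (IG < OG) means frame j arrived early relative to the server pacing, while a long gap means it arrived late.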
The third parameter, FSi,j, is the size of the j-th frame (or picture) in video stream i. FSi,j
can be calculated from Eq. (3), where Pk
denotes the payload bytes of the k-th packet. Here we assume a video frame j consists of l IP packets.
FSi,j = Σ(k=1 to l) Pi,j,k        (3)

3.2 Queue-Lengths Feedback

Table 1. Queue status at an MS

Increment        IF condition
min = min + 1    Q/B ≤ THmin
mid = mid + 1    THmin < Q/B < THmax
max = max + 1    Q/B ≥ THmax
An MS of the proposed IFDJ executes the queue-length feedback (QLF) function, which computes the ratio of the queuing length (Q) to the buffer size (B) for every OFDMA frame. Two buffer thresholds, THmin and THmax, are defined. As shown in Table 1, there are three statuses of the ratio: minimum (min), middle (mid), and maximum (max). The three counters are constrained by three IF conditions determined by the three inequalities on Q/B, THmin, and THmax. If a condition is met, min, mid, or max is incremented by one.
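The counter updates of Table 1 can be sketched as below; the struct name and the threshold arguments are illustrative, since the paper does not fix numeric values for THmin and THmax here:

```cpp
#include <cassert>

struct QueueStatus { int min = 0, mid = 0, max = 0; };

// One QLF update per OFDMA frame: classify the ratio Q/B against the
// two buffer thresholds and increment the matching counter (Table 1).
void update_status(QueueStatus& s, double q, double b,
                   double th_min, double th_max) {
    double ratio = q / b;
    if (ratio <= th_min)
        s.min++;
    else if (ratio < th_max)
        s.mid++;
    else
        s.max++;
}
```

The accumulated min and max counts are what the MS later feeds back as C1 and C2 in Eq. (5).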
3.3 OFDMA Scheduling
OFDMA scheduling (OFS) is responsible for allocating resource blocks (RBs) for every OFDMA frame. Here, we define Wi as the
waiting time, in terms of how many OFDMA frame durations (OD), for stream i. Eq. (4) shows the waiting time for the two different inter-frame gaps, the short and the long, respectively. This waiting model simply considers video inter-frame gaps at BS; it does not consider the variations of queuing at MS. Thus, it is referred to as the static waiting (SW) model.

Wi = (OGi − IGi,j) / OD,  if short gap
Wi = OGi / OD,            if long gap        (4)

Wi,t = Wi,t−1 − C1 × OGi² / OD,  if Q/B ≤ THmin
Wi,t = Wi,t−1,                   if THmin < Q/B < THmax
Wi,t = Wi,t−1 + C2 × OGi² / OD,  if Q/B ≥ THmax        (5)
On the other hand, by considering the ratio of queuing length to buffer size (Q/B) at an MS, the dynamic waiting (DW) model computes the waiting time (Wi,t) in Eq. (5), where C1 and C2 denote the incremented counts of min and max, respectively, reported from an MS. Notice that since OGi is smaller than one, taking the square of OGi reduces the influence of C1 and C2.
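Under the same notation, one step of the DW update in Eq. (5) might look as follows (function and parameter names are illustrative, not the paper's implementation):

```cpp
#include <cassert>
#include <cmath>

// Eq. (5): adjust the waiting time W_{i,t} by +/- C * OG_i^2 / OD.
// Since OG_i < 1, squaring OG_i damps the influence of C1 and C2.
double dw_update(double w_prev, double og, double od, double qb_ratio,
                 double th_min, double th_max, int c1, int c2) {
    if (qb_ratio <= th_min)        // MS queue nearly empty: shorten the wait
        return w_prev - c1 * og * og / od;
    if (qb_ratio >= th_max)        // MS queue nearly full: lengthen the wait
        return w_prev + c2 * og * og / od;
    return w_prev;                 // in between: keep the previous wait
}
```

For example, with OGi = 0.05 sec and OD = 0.005 sec, each unit of C1 shifts the wait by 0.05² / 0.005 = 0.5 OFDMA frame durations.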
3.4 Implementations with Pseudo Codes

The implementations of the proposed IFDJ with its three functions, EQP, QLF, and OFS, are shown in Figures 4, 5, and 6, respectively.

/*********** En-Queue Process ************/
void Connection::enqueue(Packet *p) {        // a packet arrives
  if (/* packet belongs to a video stream */) {
    if (pre_sendtime_ == 0) {                // first packet of the stream
      frame_gap_ = 0;
      original_gap_ = 0;
      pre_enqueueTime_ = NOW;                // record enqueue time
      pre_sendtime_ = HDR_CMN(p)->sendtime_;
      inserNode(HDR_CMN(p)->size(), frame_gap_);  // enqueue
    }
    else if (HDR_CMN(p)->sendtime_ != pre_sendtime_) {
      // packet belongs to a different video frame
      original_gap_ = HDR_CMN(p)->sendtime_ - pre_sendtime_;  // compute OG
      pre_sendtime_ = HDR_CMN(p)->sendtime_;
      frame_gap_ = NOW - pre_enqueueTime_;   // compute IG
      pre_enqueueTime_ = NOW;                // record enqueue time
      inserNode(HDR_CMN(p)->size(), frame_gap_);
    }
    else {                                   // packet belongs to the same video frame
      addsize(HDR_CMN(p)->size());           // compute picture size
    }
    queue_->enque(p);                        // place to BS queue
    type = peer->getOutData()->check_gapType();
    switch (type) {
      case 's': short_gap_process(); break;  // short gap
      case 'c': constant_process(); break;   // constant gap
      case 'l': long_gap_process(); break;   // long gap
      case 'd': delay_process(); break;      // delay a video frame
    }
  }
  else                                       // not a video stream
    enqueue(p);
}

Packet *Connection::dequeue() {              // dequeue process
  Packet *p = queue_->deque();               // packet dequeue
  if (strcmp(packet_info.name(HDR_CMN(p)->ptype()), "video") == 0) {
    pre_dequeueTime_ = NOW;                  // record current time
    printf("dequeue %f size %d type %s framegap %f sendtime %f\n",
           NOW, HDR_CMN(p)->size(), packet_info.name(HDR_CMN(p)->ptype()),
           frame_gap_, HDR_CMN(p)->sendtime_);
  }
  return p;
}
---END of En-Queue Process---
Figure 4. En-queue process at BS
/*** Queue Length Feedback ***************/
void Mac_SS::receive() {                 // a packet arrives at MS
  // determine whether it belongs to a video stream
  if (cid < 65535 &&
      strcmp(packet_info.name(HDR_CMN(pktRx_)->ptype()), "video") == 0) {
    con->set_SS_buffer(con->get_queue_length());   // buffer size
    printf("# cid %d dequeue %d NOW %f sendtime %f\n", cid,
           con->get_queue_length(), NOW, HDR_CMN(pktRx_)->sendtime_);
    con->SS_dequeue(cid);                // send packets to upper layer
  }
}

void Mac802_16SS::sendDown(Packet *p) {  // compute Q/B ratio
  if (connection->queueLength() == macmib_.queue_length) {  // queue is full
    update_watch(&loss_watch_, 1);
    drop(p, "QWI");
  }
  else if (connection_rec != NULL && connection_rec->get_SS_buffer() > 0) {
    hdr_mac_Hdr = HDR_MAC(p);            // feedback Q/B ratio
    ...
  }
}
---End of Queue Length Feedback---
Figure 5. Queue length feedback at MS

/******* OFDMA Scheduling ***************/
void BSScheduler::schedule() {
  // sort MS by Q/B ratio
  Sqequence_peer(mac_->getPeerNode_head());
  // begin the scheduling
  while (dlduration < maxdlduration &&
         num_of_peer < mac_->getNbPeerNodes()) {
    if (peer->getOutData() &&
        peer->getOutData()->queueByteLength() > 0 &&
        dlduration < maxdlduration) {
      type = peer->getOutData()->check_gapType();      // stream gap
      if (1 == peer->getOutData()->get_IsDelay())      // delayed stream
        type = 'd';
      switch (type) {
      case 's':  // short gap
        if (peer->getOutData()->getOFDM_num() == -100) // compute the waiting time
          peer->getOutData()->calOFDM_num(mac_->getFrameDuration());
        if (peer->getOutData()->getOFDM_num() <= 0) {
          // allocate RB in OFDMA downlink subframe
          dlduration = addDlBurst_framesize(nbdlbursts++,
              peer->getOutData(), peer->getDIUC(), dlduration,
              maxdlduration, peer->getOutData()->currentNodeSize());
          peer->getOutData()->resetOFDM_num();
        }
        break;
      case 'c':  // constant gap
        // allocate RB in OFDMA downlink subframe
        dlduration = addDlBurst(nbdlbursts++, peer->getOutData(),
            peer->getDIUC(), dlduration, maxdlduration);
        peer->getOutData()->resetOFDM_num();
        break;
      case 'l':  // long gap
        if (peer->getOutData()->get_Isbuffering() == 1) {
          if (peer->getOutData()->currentNodeNo() >= targetBuff &&
              peer->getOutData()->get_bufferSending() == 0) {
            peer->getOutData()->set_bufferSending(1);
            peer->getOutData()->set_currentBuff(targetBuff);
          }
        }
        ...
      }
    }
  }
}

// function to compute the waiting time
void Connection::calOFDM_num(float duration) {
  if (frame_gap_ <= original_gap_)
    OFDM_num = (int) round((original_gap_ - frame_gap_) / duration);
  else
    OFDM_num = 0;
  ratio = get_SS_ratio();
  if (currentNodeNo() >= 20 && OFDM_num != 0) {
    if (ratio < 20) {                    // Q/B ratio < THmin
      if (MS_status != 1)
        counter_one = 0;
      counter_one++;                     // increment C1
      // compute the waiting time
      OFDM_num = OFDM_num -
          (counter_one * original_gap_) * original_gap_ / duration;
      MS_status = 1;
      OFDM_num_pre = OFDM_num;
    }
    else if (ratio > 80) {               // Q/B ratio > THmax
      if (MS_status != 3)
        counter_two = 0;
      counter_two++;                     // increment C2
      // compute the waiting time
      OFDM_num = OFDM_num +
          (counter_two * original_gap_) * original_gap_ / duration;
      OFDM_num_pre = OFDM_num;
      MS_status = 3;
    }
    else {                               // THmin < Q/B ratio < THmax
      OFDM_num = OFDM_num_pre;
      MS_status = 2;
    }
  }
}
---End of OFDMA Scheduling---
Figure 6. OFDMA scheduling at BS
4 Simulation and Discussions

In this section, simulation using NS-2 is performed. Figure 7 shows the simulation topology with 8 nodes. The node IDs and their functions are listed in Table 2.
Figure 7. Simulation topology
Table 2. Node ID and Function

Node ID  Function
0        Video server
1        Router 1
2        Router 2
3        Background traffic
4        BS
5        FPS = 30 for MS 1
6        FPS = 40 for MS 2
7        FPS = 50 for MS 3
Figure 8. VFJ between RR and IFDJ: (a) VFJ using Round Robin; (b) VFJ using IFDJ
By assuming background traffic equals 20 Mbps, Figures 8(a) and 8(b) show the variations of video frame jitter (VFJ) using round-robin (RR) and the proposed IFDJ, respectively. As can be observed, if RR is used, the VFJ of the three MS fluctuates widely between 0.01 sec and 0.06 sec, while if IFDJ is applied, the VFJ of the three MS varies quite smoothly between just 0.02 sec and 0.04 sec. Larger fluctuations of VFJ at BS imply worse video decoding quality at MS. This significant reduction in VFJ is because the proposed IFDJ can effectively absorb jitter at BS. In other words, VFJ is reduced at BS since the proposed IFDJ can differentiate the inter-video-frame gaps between long and short.
Figure 9. Packet loss rate using IFDJ
To further study the effectiveness of reducing VFJ, in Section 3 we have presented two waiting models, the static waiting (SW) in Eq. (4) and the dynamic waiting (DW) in Eq. (5). The total packet loss rate (PLR) versus the total buffer size for SW and DW is shown in Figure 9. Notice that the total buffer size consists of the buffer size at MS and the buffer size at BS. The total PLR is computed by summing up the packet loss at BS and the packet loss at MS. From Figure 9, we can observe that, no matter which MS, the total PLR of SW is relatively larger than that of DW. This is because DW can compute an adequate waiting time for OFDMA frames based on the queue-lengths feedback (QLF) from MS. In fact, a QLF message contains the ratio of queuing length to buffer size (Q/B) at MS. A smaller Q/B ratio implies less waiting time at BS, and a larger Q/B ratio indicates that a video frame can be delayed a little bit longer.
Figure 10. QL between SW and DW at BS

It is known that the amount of buffer allocation is determined by the average queuing length. Thus, to estimate the buffer size required for BS and MS, in the last experiment we generate three video streams (streams 1, 2, and 3) from node 0, and the three streams are delivered to three receivers, respectively. Figure 10 shows the variations of queuing lengths at BS versus the simulation time. It is observed that the queuing lengths of DW are significantly smaller than those of SW. Notice that a huge reduction in queuing length occurs in stream 3; at simulation time 27.5 sec, the queuing length
in SW increases from 500 to 900 packets, while the queuing length in DW decreases from 350 to 50 packets. This huge reduction in queuing lengths achieved by DW is because in DW the waiting time for OFDMA frames can be adaptively adjusted based on the Q/B ratio feedback from MS.
Figure 11. QL between SW and DW at MS
In contrast, as shown in Figure 11, although DW can significantly reduce the queuing lengths at BS, the queuing lengths at MS are relatively longer than those of SW. However, the increase at MS is much smaller (only tens of packets) compared to the reduction at BS (hundreds of packets). Thus, as shown in Figure 12, the reduction of the total queuing length (summing up the queuing lengths in BS and MS) achieved by DW still exhibits a significant improvement.
Figure 12. Total QL between SW and DW
5 Conclusions
In this paper, we have presented an inter-frame de-jittering (IFDJ) scheme for real-time video streams over the last hop of a mobile communication network. The proposed IFDJ consists of two waiting models: static waiting (SW) and dynamic waiting (DW). SW restores the original frame gap at BS by gathering all the payloads belonging to the same video frame. Alternatively, DW can adaptively adjust the waiting time of a video frame through the queuing-lengths feedback from MS. To evaluate the performance of IFDJ, we perform NS-2 simulation. Simulation results have revealed that IFDJ can minimize video frame jitter (VFJ) more effectively when compared to a round-robin scheme. Additionally, through DW, the total queuing length can be significantly reduced, while the total packet loss rate is also substantially reduced.
Acknowledgement
The authors would like to thank Miss Yi-Ying Ke for her careful redrawing of the figures, which helped improve the quality of the paper. This study is supported under two grants: (1) NSC102-2221-E-110-036-MY2 of MoST, Taiwan, and (2) 103-EC-17-A-03-S1-214 of MoEA, Taiwan.

References
[1] “Special Issue on the H.264/AVC Video Coding Standard,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 7, July 2003.
[2] A. Seong-woo, W. Hano, and H. Daesik, “Throughput-Delay Tradeoff of Proportional Fair Scheduling in OFDMA Systems,” IEEE Transactions on Vehicular Technology, vol. 60, no. 9, pp. 4620-4626, Nov. 2011.
[3] A. Nusairat and L. Xiang-Yang, “WiMAX/OFDMA Burst Scheduling Algorithm to Maximize Scheduled Data,” IEEE Transactions on Mobile Computing, vol. 11, no. 11, pp. 1692-1705, Nov. 2012.
[4] M. Abdennebi and Y. Ghamri-Doudane, “Long - Term Radio Resource Reservation in IEEE 802.16 rtPS for Video Traffic,” 2011 Global Information Infrastructure Symposium,
Da-Nang, Vietnam, Aug. 4-6, 2011.
[5] A. X. Han and I. T. Lu, “Equalization of Packet Delays in OFDMA Scheduling of Real-Time Video Calls,” MILCOM, Baltimore, Maryland, USA, Nov. 7-10, 2011.
[6] J. Chen and J.Y. Wu, “A Downlink Delay-Minimized Scheduling Scheme for OFDMA WiMAX Systems,” MDM, Taipei, Taiwan, May 18-20, 2009.
[7] I. Hwang, C. Huang, and B. Hwang, “Frame-Based Adaptive Uplink Scheduling Algorithm in OFDMA-Based WiMAX Networks,” ISPAN, Kaohsiung, Taiwan, Dec. 14-16, 2009.
[8] Q. Wang and G. Liu, “A Priority-based EDF Scheduling Algorithm for H.264 Video Transmission over WiMAX Network,” ICME, Barcelona, Spain, July 11-15, 2011.
[9] A. Esmailpour and N. Nasser, “A Novel Scheme for Packet Scheduling and Bandwidth Allocation in WiMAX Networks,” ICC, Kyoto, Japan, June 5-9, 2011.
[10] E. Haghani, S. Parekh, D. Calin, E. Kim, and N. Ansari, “A Quality-Driven Cross-Layer Solution for MPEG Video Streaming Over WiMAX Networks,” IEEE Transactions on Multimedia, vol. 11, no. 6, pp. 1140-1147, Oct. 2009.
[11] J.-M. Liang, J.-J. Chen, Y.-C. Wang, and Y.-C. Tseng, “A Cross-Layer Framework for Overhead Reduction, Traffic Scheduling, and Burst Allocation in IEEE 802.16 OFDMA Networks,” IEEE Transactions on Vehicular Technology, vol. 60, no. 4, pp. 1740-1755, May 2011.
[12] J. I. Khan and R. Y. Zaghal, “Jitter and Delay Reduction for Time Sensitive Elastic Traffic for TCP-interactive Based World Wide Video Streaming over ABone,” ICCCN 2003, Dallas, Texas, USA, Oct. 20-22, 2003.
[13] C. Huang, C. Lin, and C. Chuang, “A Multilayered Audiovisual Streaming System Using the Network Bandwidth Adaptation and Two-Phase Synchronization,” IEEE Transactions on Multimedia, vol. 11, no. 5, pp. 797-809, Aug. 2009.