On source coding for networks

Consider now the specific example of an environmental remote sensing network with several sensors, each of which takes measurements and transmits them to a central base station, which also makes its own measurements. In encoding its transmission to the base station, each sensor can consider the measurements taken by the base station as side information available to the base station’s decoder. If the system uses multi-hop transmissions, then measurements relayed by a sensor act as side information available both to that sensor’s encoder and the base station’s decoder. Motivated by this framework, I begin the second part of the thesis in Chapter 4 by deriving rate-distortion results for two systems using side information. First, for the system shown in Figure 1.3(a) with some side information known at both the encoder and the decoder, and some known only at the decoder, I derive the rate-distortion function and evaluate it for binary symmetric and Gaussian sources. I then apply the results for the binary source to a second network, shown in Figure 1.3(b), which models point-to-point communication when the presence of side information at the decoder is unreliable [2, 3]. I demonstrate how to evaluate the binary rate-distortion function for that network, closing the gap between previous bounds [2, 28] on its value. The form of the binary rate-distortion function for this second system exhibits an interesting behavior akin to successive refinement, but with side information available to the refining decoder. This work also appears in [29, 30].
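
For orientation, the Gaussian case with side information only at the decoder has the classical closed form R(D) = (1/2) log2(Var(X|Y)/D) [5]; the two-sided-information function derived in the thesis generalizes settings like this. A minimal sketch of that baseline (the function name and interface are illustrative, not from the thesis):

```python
import numpy as np

def gaussian_wz_rate(var_x_given_y: float, distortion: float) -> float:
    """Gaussian Wyner-Ziv rate-distortion function in bits per sample:
    R(D) = 0.5 * log2(Var(X|Y) / D) for 0 < D <= Var(X|Y), else 0."""
    if distortion >= var_x_given_y:
        return 0.0
    return 0.5 * np.log2(var_x_given_y / distortion)

# Unit conditional variance, target distortion 0.1 -> about 1.66 bits
print(gaussian_wz_rate(1.0, 0.1))
```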

On Achievable Rate Regions for Source Coding Over Networks

for the two-terminal network where the two source sequences are separately encoded and the decoder combines the two encoded messages to losslessly reproduce both source sequences [2]. Gray and Wyner found an exact single-letter characterization of both the lossless and lossy rate regions for a related “simple network” [3]. Ahlswede and Körner derived a single-letter characterization for the two-terminal network where the decoder needs to reconstruct only one source sequence losslessly [4]; that characterization employs an auxiliary random variable to capture the decoder’s incomplete knowledge of the source that it is not required to reconstruct. Wyner and Ziv derived a single-letter characterization of the optimal achievable rate for lossy source coding in the point-to-point network when side information is available only at the decoder [5]. Berger et al. derived an achievable region (inner bound) for the lossy two-terminal source coding problem in [6]; that region is known to be tight in some special cases [7]. Heegard and Berger found a single-letter characterization, using two auxiliary random variables, for the network where side information may be absent [8]. Yamamoto considered a cascaded communication system with multiple hops and branches [9]. For larger networks, Ahlswede et al. derived an optimal rate region for any network source coding problem in which one source node observes a collection of independent source random variables, all of which must be reconstructed losslessly by a family of sink nodes [10]; Ho et al. proved that the cut-set bound is tight for multicast networks with arbitrarily dependent source random variables [11]; and Bakshi and Effros generalized Ho’s result to show that the cut-set bound remains tight when side information random variables are available only at the end nodes [12].
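
As a concrete instance of the region in [2], the sketch below evaluates the three Slepian-Wolf rate constraints for a doubly symmetric binary source, a standard example (the interface is illustrative):

```python
import numpy as np

def h2(p: float) -> float:
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def slepian_wolf_region(p: float) -> dict:
    """Constraints of the Slepian-Wolf region for a doubly symmetric
    binary source: X ~ Bernoulli(1/2), Y = X XOR Z, Z ~ Bernoulli(p)."""
    return {
        "R1_min": h2(p),         # H(X|Y) = h(p)
        "R2_min": h2(p),         # H(Y|X) = h(p)
        "sum_min": 1.0 + h2(p),  # H(X, Y) = 1 + h(p)
    }

print(slepian_wolf_region(0.1))  # R1, R2 >= ~0.469; R1 + R2 >= ~1.469
```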

Network source coding: theory and code design for broadcast and multiple access networks

where several transmitters send to a single receiver and cooperation among the transmitters is not possible. In this chapter, I present the properties of instantaneous and uniquely decodable multiple access source codes and ge[r]


An Overview on Wavelets in Source Coding, Communications, and Networks

In this special issue, in this category, “Joint source-channel coding for wavelet-based scalable video transmission using an adaptive turbo code” by N. Ramzan et al. proposes a scalable, wavelet-based video coder which is jointly optimized with a turbo encoder providing UEP for the subbands. The end-to-end distortion, taking into account channel rate, turbo-code packet size, and the interleaver, is minimized at given channel conditions by an iterative procedure. Also in this special issue is “Content-adaptive packetization and streaming of wavelet video over IP networks” by C.-P. Ho and C.-J. Tsai, which proposes a 3D video wavelet codec followed by forward error correction (FEC) for UEP, with the focus being on content-adaptive packetization for video streaming over IP networks. At a given packet-loss rate, the video distortion resulting from packet loss is translated into source distortion, thus yielding the best FEC protection level. The run-time packet-loss rate that is fed back from the receiver also enters into the optimization algorithm for choosing the FEC protection level. Finally, a similar approach in a different context is presented in “Energy-efficient transmission of wavelet-based images in wireless sensor networks” by V. Lecuire et al. in this special issue. In this work, image quality and energy consumption are jointly optimized over a wireless sensor network. The image encoder, which uses a 2D DWT, is adapted according to the state of the network (global energy dissipated in all the nodes between the trans-
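
The loss-to-distortion translation these papers rely on can be made concrete with a toy model: for an (n, k) erasure code under i.i.d. packet loss, choose k to minimize expected distortion. All numbers below are invented stand-ins, not values from the cited papers:

```python
from math import comb

def fail_prob(n: int, k: int, eps: float) -> float:
    """Probability an (n, k) erasure code fails: more than n - k of the
    n packets are lost, each independently with loss rate eps."""
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
               for i in range(n - k + 1, n + 1))

def best_fec_level(n, eps, source_dist, fail_dist):
    """Pick the number of source packets k (out of n) minimizing
    expected distortion: source_dist[k] on success, fail_dist on
    failure.  source_dist[k] decreases in k (more source packets
    means finer source description)."""
    def expected(k):
        p = fail_prob(n, k, eps)
        return (1 - p) * source_dist[k] + p * fail_dist
    return min(range(1, n), key=expected)

# Toy numbers: 10 packets, 5% loss, distortion falls as k grows
src = {k: 1.0 / k for k in range(1, 10)}
print(best_fec_level(10, 0.05, src, fail_dist=2.0))  # -> 7 here
```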

Source Coding Optimization for Distributed Average Consensus.

In practice, for large networks (i.e., m > 20 and T ≥ 6), the memory and time requirements of the optimization (3.38) seem to grow very quickly. Furthermore, using CVX [73, 74], all compatible solvers tested displayed poor stability for large networks. Attempts to use alternative modeling frameworks, such as GPkit [75], CVXPY [76], and YALMIP [77], failed due to the high memory requirements of the model. Manually building a GGP model is memory- and time-intensive; if explicit model representation can be avoided, however, it may be possible to apply other convex optimization methods without these scaling issues. In particular, avoiding the explicit representation of posynomial coefficients and exponents would probably greatly improve performance for large problem sizes m and T. To provide a program that is more easily solved in practice, we make two simplifications. First, we constrain the distortions to be equal at each node, which is well motivated by the end goal of designing a truly distributed protocol. Next, the program can be cast as
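
For orientation, here is a minimal CVXPY geometric program in the spirit of the simplified design, with a single distortion variable shared by all nodes; the trade-off constraint and the budget are toy stand-ins for the actual model (3.38):

```python
import cvxpy as cp

d = cp.Variable(pos=True)  # common per-node distortion
r = cp.Variable(pos=True)  # per-link rate

# Toy stand-ins (assumptions, not the paper's model): rate grows as
# distortion shrinks, and the total rate is budgeted.
prob = cp.Problem(
    cp.Minimize(d),
    [
        r >= d**-0.5,  # toy rate-distortion trade-off
        r <= 4.0,      # toy rate budget
    ],
)
prob.solve(gp=True)  # CVXPY's geometric-programming mode
print(d.value, r.value)  # d* = 1/16 at the rate budget r = 4
```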

Joint Source-Channel Coding for Image Transmission over Underlay Multichannel Cognitive Radio Networks

Chande and Farvardin in [18] devised a JSCC scheme which allows transmission of sources compressed by embedded source coders, such as SPIHT, over a memoryless noisy channel. They used rate-compatible punctured convolutional codes for channel coding and devised mathematical expressions for the expected distortion, expected PSNR, and the average useful source coding rate. These quantities were then optimized by solving dynamic programming problems subject to a given rate constraint. Figure 3.3 shows the inverse code rate profile for a transmission of five packets. The profile decreases as the packet indices increase. The labels 1 through 5 indicate the order in which bits are transmitted within the packets. With this, they achieved optimal progressive transmission at all transmission rates [18].
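
The shape of such a profile can be reproduced with a toy brute-force search over code-rate profiles for an embedded bitstream, which is useful only up to the first packet error. The rate/reliability pairs below are invented, and [18] uses dynamic programming rather than enumeration:

```python
from itertools import product

PACKET_BITS = 100  # channel bits per packet (toy value)
# Toy map: channel code rate -> probability a packet decodes correctly
P_OK = {0.50: 0.999, 0.66: 0.99, 0.80: 0.95}

def expected_useful_bits(profile) -> float:
    """Expected decoded source bits of an embedded bitstream: bits in
    packet i count only if packets 1..i all decode correctly."""
    total, p_prefix = 0.0, 1.0
    for rate in profile:
        p_prefix *= P_OK[rate]
        total += p_prefix * rate * PACKET_BITS
    return total

best = max(product(P_OK, repeat=5), key=expected_useful_bits)
print(best)  # rates are non-decreasing in the packet index,
             # i.e., the inverse code rate profile decreases
```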

Resistance Distortion Direction for the Passage of Video in the Context of Multihop Wireless Networks

to produce estimates of the overall video distortion that can be used for switching between inter- and intra-coding modes per macroblock, achieving higher PSNR. In [9], an enhancement to the transmission robustness of the coded bitstream is achieved through the introduction of inter/intra-coding with redundant macroblocks. The coding parameters are determined by a rate-distortion optimization scheme. These schemes are evaluated using simulations in which the effect of the network transmission is represented by a constant packet-loss rate, and which therefore fail to capture the idiosyncrasies of real-world systems.

Cascading polar coding and LT coding for radar and sonar networks

and the LT code as outer code. As suggested in [27], premised on a novel conception of channel polarization, the inner polar code (PC) can divide the whole channel into multiple sub-channels via recursive channel combining and splitting and then use the good channels to transmit useful information. It is noted, however, that a single polar code is sensitive to impulsive noise, which can cause error spreading in the time domain due to the energy spreading of impulse noise after the DFT operation [18]. In our cascaded coding scheme, rather than feeding the information directly to the PC encoder, the low-complexity LT code is applied first and performs a parity check as a first-step outer decoder. Although the LT code has limited error-correcting capability, a slight BER decrease in the input sequence greatly enhances the decoding performance of the PC. This is not surprising, as the PC decoder essentially uses successive interference cancellation, and the initial result has a remarkable influence on the subsequent decoding process. By further integrating a matrix interleaving operation into the inner PC encoder, the cascaded coding scheme acquires significantly improved BER performance at the cost of a slight increase in encoding/decoding complexity, which in fact provides a good compromise between error-correcting capability and implementation complexity.
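
The matrix interleaver mentioned above has a simple precise form: write the sequence into a matrix row by row and read it out column by column, dispersing bursts of consecutive channel errors. A minimal sketch with illustrative dimensions:

```python
import numpy as np

def matrix_interleave(bits, rows: int, cols: int):
    """Block (matrix) interleaver: write row by row, read column by
    column.  Consecutive channel symbols then map to source positions
    that are cols apart, so an error burst is spread out."""
    assert len(bits) == rows * cols
    return np.asarray(bits).reshape(rows, cols).T.reshape(-1)

def matrix_deinterleave(bits, rows: int, cols: int):
    """Inverse of matrix_interleave."""
    assert len(bits) == rows * cols
    return np.asarray(bits).reshape(cols, rows).T.reshape(-1)

x = np.arange(12)
y = matrix_interleave(x, 3, 4)
assert (matrix_deinterleave(y, 3, 4) == x).all()
print(y)  # [0 4 8 1 5 9 2 6 10 3 7 11]
```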

Distributed transform coding via source-splitting

For block WZ-code designs, the transforms and the bit allocations are found by the RD-WZQ model (Section 3.1.1), and WZ quantizers are implemented using CEC-TCQ followed by binary SW coding. More specifically, the rate found by the bit allocation algorithm for each transform coefficient is used as the conditional entropy constraint in the design of a CEC-TCQ for that coefficient. As described in Section 2.3, the CEC-TCQ designs are based on scalar side information obtained by a linear transform of the vector side information at the decoder; see Theorem 3. All CEC-TCQ designs are based on the 8-state trellis used in JPEG2000 [[26], Figure 3.16]. For trellis encoding and decoding, a sequence length of 256 source samples has been used. For designing and testing quantizers, sample sequences of length 5 × 10^5 have been used. Since the main focus of this paper is the design of the transforms and the quantizers, we assume ideal SW coding of the binary output of each CEC-TCQ, so that our results do not depend on any particular SW coding method. In a practical implementation (e.g., [20]), near-optimal performance can be obtained by employing a sufficiently long SW code (note that the sequence length for SW coding can be chosen arbitrarily larger than the sequence length used for TCQ encoding). This type of coding is well suited for applications such as distributed
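
As a rough stand-in for the bit-allocation step (the paper's RD-WZQ-based allocation is more involved), the classical high-rate rule assigns b_i = R/N + (1/2) log2(var_i / geometric mean of the variances). A minimal sketch:

```python
import numpy as np

def bit_allocation(variances, total_rate: float):
    """Classical high-rate bit allocation across transform coefficients.
    Negative allocations are clipped to zero here for simplicity (a
    proper solution would re-solve over the remaining active set)."""
    variances = np.asarray(variances, dtype=float)
    gm = np.exp(np.mean(np.log(variances)))  # geometric mean
    b = total_rate / len(variances) + 0.5 * np.log2(variances / gm)
    return np.clip(b, 0.0, None)

print(bit_allocation([4.0, 1.0, 0.25], total_rate=6.0))  # [3. 2. 1.]
```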

Data Bug on Open Source for Predictive Articulates

Recognizing and discovering abnormalities in embedded systems such as sensor networks is crucial work [17]; the focus is on how to assist fault discovery after the system has been deployed. Node-level debugging tools can provide descriptive code information inside a node but fail to recognize when and where an issue happens in the network. Network-level diagnosis tools, in turn, can effectively recognize an issue in the network but fail to narrow it down within a node because they lack descriptive code information. Functional issues must be detected and diagnosed promptly, since their occurrence at some nodes commonly suppresses the nodes' normal tasks. In the design overview, a client node can issue a request notifying a subnet of nodes to switch into profiling mode.

LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

We define two different scheduling schemes for the case of channel coding. The first one (decoding algorithm I) has already been defined in [29]. It first proceeds with the decoding of the inner code, and the last step is to reduce the residual errors by decoding the outer code. The second schedule (decoding algorithm II) iterates between the inner and outer code in each iteration. For notation purposes, we will assume that nodes are activated serially by order of appearance in the scheme definition, except when included in brackets (which means that activation for those nodes is performed in parallel within a clock cycle). Then, the two different schedules
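
Schematically, the two schedules differ only in loop structure. The sketch below uses hypothetical callables standing in for the inner- and outer-code message-passing sweeps:

```python
def decoding_algorithm_I(update_inner, update_outer, inner_iters, outer_iters):
    """Schedule I (as in [29]): run the inner decoder to completion,
    then clean up residual errors with the outer decoder."""
    for _ in range(inner_iters):
        update_inner()
    for _ in range(outer_iters):
        update_outer()

def decoding_algorithm_II(update_inner, update_outer, iters):
    """Schedule II: alternate inner and outer updates every iteration."""
    for _ in range(iters):
        update_inner()
        update_outer()

# Toy usage: counters stand in for message-passing sweeps.
counts = {"inner": 0, "outer": 0}
decoding_algorithm_II(lambda: counts.__setitem__("inner", counts["inner"] + 1),
                      lambda: counts.__setitem__("outer", counts["outer"] + 1),
                      iters=10)
print(counts)  # {'inner': 10, 'outer': 10}
```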

ARK: Aggregation of Reads by K-Means for Estimation of Bacterial Community Composition.

subset, and finally fuses these estimates into a composition estimate jointly for all the reads. To segregate the reads into subsets, we choose to employ the K-means clustering algorithm [20]. Since the K-means clustering algorithm is simple and computationally inexpensive for a reasonable number Q of clusters (subsets), it can be used to partition even fairly large sets of reads into more internally homogeneous subsets. By its very algorithmic nature, K-means clustering partitions the feature space into Q non-overlapping regions and provides a set of corresponding mean vectors. This is called codebook generation in vector quantization [15], a technique originally from signal processing, coding, and clustering. Our new method is termed Aggregation of Reads by K-means (ARK). From the statistical perspective, the theoretical justification of ARK stems from a modeling framework with a mixture of densities.
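
A minimal sketch of this segregate-estimate-fuse pattern, using scikit-learn's K-means; the estimate_fn interface is a hypothetical stand-in for the underlying composition estimator:

```python
import numpy as np
from sklearn.cluster import KMeans

def ark_style_estimate(read_features, estimate_fn, n_clusters=8, seed=0):
    """Partition reads with K-means, estimate a composition per subset,
    and fuse the per-subset estimates weighted by subset size."""
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init=10).fit_predict(read_features)
    n, fused = len(read_features), None
    for c in range(n_clusters):
        subset = read_features[labels == c]
        if len(subset) == 0:
            continue
        est = estimate_fn(subset) * (len(subset) / n)  # size-weighted
        fused = est if fused is None else fused + est
    return fused

# Toy usage: the "composition" is just the mean feature vector here.
X = np.random.default_rng(0).normal(size=(500, 4))
print(ark_style_estimate(X, estimate_fn=lambda s: s.mean(axis=0)))
```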

Joint Source-Channel Coding Optimized On End-to-End Distortion for Multimedia Source

In [4], Kwasinski et al. studied the delay-constrained real-time transmission of a memoryless source, where a feedback-based incremental redundancy error control scheme (hybrid ARQ) is used. Unlike retransmission-based schemes, the proposed system is designed to work with a constant bit rate. Furthermore, the authors introduced the notion of code types, which is used to model the transmission mechanism with a Markov chain. Using this model, a novel algorithm was developed to address the optimal source and channel rate allocation. The study shows that the proposed system captures the benefits of feedback while allowing synchronous transmission similar to pure FEC systems. Furthermore, the results suggest that the proposed mechanism obtains higher channel SNR gains than a pure FEC-based scheme. In the case of a CDMA network, the higher SNR gain can be translated into an increase in the number of users that can be simultaneously supported at the same level of distortion.

Reversible watermarking with denoising using source coding technique

Set partitioning in hierarchical trees (SPIHT) is used as the source coding scheme [11]. SPIHT is an embedded compression method whose output can be truncated at any point. It operates on discrete wavelet transform (DWT) coefficients, which are sorted so that bits of high-magnitude coefficients are sent earlier; the sorting pass is also available to the receiver. SPIHT exploits self-similarities across the subbands of the wavelet transform: spatial orientation trees from roots down to leaves make a sophisticated sorting method possible with the least required bit budget. Compression quality at ‘n s’ determines the constant
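
The energy compaction that makes magnitude-ordered transmission worthwhile is easy to check numerically. The sketch below uses PyWavelets on a toy image (the wavelet and image are arbitrary; this is not SPIHT itself):

```python
import numpy as np
import pywt  # PyWavelets

# A smooth toy image: a 2-level 2D DWT concentrates its energy in a
# few coefficients, so sending high-magnitude bits first (as SPIHT
# does) yields a usefully truncatable, embedded bitstream.
img = np.outer(np.hanning(64), np.hanning(64))
coeffs = pywt.wavedec2(img, "db2", level=2)
flat, _ = pywt.coeffs_to_array(coeffs)
mags = np.sort(np.abs(flat).ravel())[::-1]
k = flat.size // 100  # top 1% of coefficients
share = (mags[:k] ** 2).sum() / (mags ** 2).sum()
print(f"top 1% of coefficients carry {100 * share:.1f}% of the energy")
```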

Dual Z-Source Network Dual-Input Dual-Output Inverter

In this paper, a novel nine-switch inverter with two inputs and two outputs was presented. The proposed inverter is composed of two Z-source networks and a modified nine-switch inverter. The proposed inverter was compared with the single Z-source network dual-input dual-output inverter (SZSN-DIDOI), which has already been presented by the authors. Furthermore, a special control strategy based on voltage gains was presented to determine the type and time intervals of the switching vectors. Also, a carrier-based PWM method was proposed for both the proposed inverter (DZSN-DIDOI) and the SZSN-DIDOI. Simulation and experimental results verified the performance and validity of the proposed inverter.

Rate adaptive BCH codes for distributed source coding

We shall consider a system with feedback as in [9], where LDPCA coding is used, but here we shall use BCH coding in a rate-adaptive manner (RA BCH). Syndromes (blocks of syndrome bits) are requested one by one through the feedback channel, and the requests are stopped when a sufficiently reliable decoded result is reached; see Figure 1. To increase the reliability, a check of the decoded result may be requested and performed based on additional syndromes of the RA BCH code or a cyclic redundancy check (CRC). The main motivations for the study of RA BCH codes are the relatively low efficiency of LDPCA (and turbo) codes when using a short packet length in a high-correlation scenario, and the fact that the analysis of the performance of RA BCH codes is simpler. An initial study on RA BCH codes was presented in [15], where we proposed a model for RA BCH codes: we demonstrated that BCH codes were able to outperform LDPCA
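
The request-until-reliable loop is a small protocol that can be sketched directly; all the callables below are hypothetical stand-ins for the RA BCH encoder/decoder and the reliability check (extra syndromes or a CRC):

```python
def rate_adaptive_decode(request_syndrome_block, try_decode, is_reliable,
                         max_blocks):
    """Feedback-driven decoding: accumulate syndrome blocks one at a
    time until the decoder output passes the reliability check."""
    blocks = []
    for _ in range(max_blocks):
        blocks.append(request_syndrome_block())  # one feedback request
        decoded = try_decode(blocks)
        if decoded is not None and is_reliable(decoded):
            return decoded, len(blocks)          # rate actually spent
    return None, max_blocks                      # decoding failure

# Toy usage: decoding "succeeds" once three blocks have arrived.
out = rate_adaptive_decode(lambda: b"S",
                           lambda b: b if len(b) >= 3 else None,
                           lambda d: True, max_blocks=8)
print(out)  # ([b'S', b'S', b'S'], 3)
```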

On practical design for joint distributed source and network coding

number of unknowns and equations, whereas for multicasting correlated sources, a decoder has to decode more unknowns than the number of equations. Indeed, the decoding complexity of either the minimum entropy decoder or the maximum a posteriori probability decoder generally grows exponentially in the block length. For this reason, such a random linear coding approach is not suitable for practical implementation. As a reviewer commented, this is the case even in the original Slepian-Wolf coding setting. Practical Slepian-Wolf coding methods carefully design the encoding matrices to achieve good performance and low decoding complexity. However, as mentioned in [17], in multicasting correlated sources over a network, it is challenging to maintain a desirable structure in the codes seen by the receivers, considering the fact that the end-to-end transfer function depends on the operations done at the interior of the network, in addition to the operations at the sources.

Distributed joint source channel coding for relay systems exploiting source relay correlation and source memory

Joint source-channel coding (JSCC) has been widely used to exploit the memory structure inherent within the source information sequence. In the majority of approaches to JSCC design, a variable-length code (VLC) is employed as the source encoder, and the implicit residual redundancy after source encoding is additionally used for error correction in the decoding process. Related studies can be found in [8-11]. There is also some literature that focuses on exploiting the memory structure of the source directly; e.g., approaches combining hidden Markov models (HMM) or Markov chains (MC) with the turbo code design framework are presented in [12-14].

Entropy and Information Theory, Robert M. Gray

The origins of this book lie in the tools developed by Ornstein for the proof of the isomorphism theorem rather than with the result itself. During the early 1970s I first became interested in ergodic theory because of joint work with Lee D. Davisson on source coding theorems for stationary nonergodic processes. The ergodic decomposition theorem discussed in Ornstein [115] provided a needed missing link and led to an intense campaign on my part to learn the fundamentals of ergodic theory and perhaps find other useful tools. This effort was greatly eased by Paul Shields’ book The Theory of Bernoulli Shifts [131] and by discussions with Paul on topics in both ergodic theory and information theory. This in turn led to a variety of other applications of ergodic theoretic techniques and results to information theory, mostly in the area of source coding theory: proving source coding theorems for sliding block codes and using process distance measures to prove universal source coding theorems and to provide new characterizations of Shannon distortion-rate functions. The work was done with Dave Neuhoff, like me then an apprentice ergodic theorist, and Paul Shields.

ALGEBRAIC CODING THEORY IN THE QUEST FOR EFFICIENT DIGITAL INDIA

We therefore conclude that, by making the order of an extended prefix source encoder large enough, we can make the code represent the discrete memoryless source as faithfully as desired. In other words, the average codeword length per source symbol of an extended prefix code can be made as close to the entropy of the source as desired, provided the extended code has a high enough order, in accordance with the source coding theorem.
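
This convergence is easy to verify numerically: Huffman-code the n-th extension of a binary memoryless source and watch the per-symbol rate approach the entropy (about 0.469 bits for symbol probabilities 0.9 and 0.1). A minimal sketch:

```python
import heapq
from itertools import product
from math import prod

def huffman_avg_length(probs) -> float:
    """Average Huffman codeword length in bits: each merge of the two
    least probable subtrees adds their combined mass to the average."""
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b
        heapq.heappush(heap, a + b)
    return total

p = {"0": 0.9, "1": 0.1}  # source entropy H ~ 0.469 bits/symbol
for n in (1, 2, 3, 4):    # n-th extension: blocks of n source symbols
    ext = [prod(p[s] for s in word) for word in product(p, repeat=n)]
    print(n, round(huffman_avg_length(ext) / n, 3))
# prints 1.0, 0.645, 0.533, 0.493 -> decreasing toward H
```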
