Top PDF Compressive sensing for sparse approximations: constructions, algorithms, and analysis

Chapter 1 Introduction 1.1 Motivations Compressive sensing, also referred to as compressed sensing or compressive sampling, is an emerging area in signal processing and information theory which has attracted a lot of attention recently [CT06] [Don06b]. The motivation behind compressive sensing is to perform "sampling" and "compression" at the same time. According to conventional wisdom, in order to fully recover a signal, one has to sample it at a rate equal to or greater than the Nyquist rate. However, in many applications such as imaging, sensor networks, astronomy, high-speed analog-to-digital conversion, and biological systems, the signals of interest are often "sparse" over a certain basis. For example, an image of a million pixels has a million degrees of freedom, yet a typical image of interest is very sparse or compressible over the wavelet basis: very likely only a small fraction of the wavelet coefficients, say one hundred thousand out of a million, are significant for recovering the original image, while the rest are "thrown away" by many compression algorithms. This process of "sampling at full rate" and then "throwing away in compression" can prove wasteful of sensing and sampling resources, especially in scenarios where resources such as sensors, energy, and observation time are limited.

Performance Analysis Of Compressive Sensing Algorithms For Image Processing

Instead of thinking in the traditional way, compressive sensing promises to recover high-dimensional signals exactly or accurately by using a much smaller number of non-adaptive linear samples or measurements. In this context, signals are represented by vectors from linear spaces, many of which represent images or other objects in applications. The basic counting rule of linear algebra, "as many equations as unknowns," tells us that it is not possible to reconstruct a unique signal from an incomplete set of linear measurements. However, as discussed earlier, many signals, such as real-world images or audio signals, are often sparse or compressible over some basis; examples include smooth signals or signals whose variations are bounded. This opens the door to recovering such signals accurately, or even exactly, from incomplete linear measurements.
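The counting argument above can be made concrete in a few lines of numpy (an illustrative sketch, not taken from the excerpt): with fewer measurements than unknowns, any vector in the null space of the measurement matrix can be added to a signal without changing its measurements, so the measurements alone cannot single out the signal.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 20                       # fewer equations (m) than unknowns (n)
A = rng.standard_normal((m, n))    # measurement matrix

x1 = rng.standard_normal(n)        # some signal
# Rows of Vt beyond rank(A) span the null space of A.
_, _, Vt = np.linalg.svd(A)
null_vec = Vt[m]                   # A @ null_vec is (numerically) zero
x2 = x1 + 5.0 * null_vec           # a very different signal...

print(np.allclose(A @ x1, A @ x2))  # ...with identical measurements: True
```

Sparsity breaks this ambiguity: if the signal is known to have few nonzeros, the measurement-consistent candidates are no longer interchangeable.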

Robust compressive sensing of sparse signals: A review

…each iteration. Thus, if fast implementations are available for such functions, the computational complexity of the algorithms can be largely reduced. On the other hand, the coordinate descent methods are not computationally efficient because only one coordinate is estimated at each iteration and an explicit representation of the matrix Φ is needed. However, these methods offer scalability when the sensing matrix is very large and can only be accessed one row per iteration. Also, fast methods have been proposed where only those coordinates with larger influence on the residuals are estimated at each iteration [83]. The ℓ1-OMP method is not computationally efficient for high-dimensional signals because it needs an explicit representation of the matrix Φ in order to perform the ℓ1 correlation with every column of the sensing matrix at each iteration of the algorithm. Recall that computing an ℓ1 correlation between two vectors involves solving a scalar regression problem.
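The closing remark about the ℓ1 correlation can be sketched directly. The scalar regression min over c of Σᵢ |yᵢ − c·xᵢ| is a least-absolute-deviation fit whose minimizer is a weighted median of the ratios yᵢ/xᵢ with weights |xᵢ|; the function name and implementation below are illustrative, not from the review.

```python
import numpy as np

def l1_correlation(x, y):
    """Solve min_c sum_i |y_i - c * x_i|: a scalar least-absolute-deviation
    regression. The minimizer is a weighted median of the ratios y_i / x_i
    with weights |x_i| (entries with x_i == 0 do not influence c)."""
    mask = x != 0
    ratios = y[mask] / x[mask]
    weights = np.abs(x[mask])
    order = np.argsort(ratios)
    ratios, weights = ratios[order], weights[order]
    cum = np.cumsum(weights)
    # first index where the cumulative weight reaches half the total
    k = np.searchsorted(cum, 0.5 * cum[-1])
    return ratios[k]

x = np.array([1.0, 2.0, -1.0, 3.0])
y = 0.75 * x
print(l1_correlation(x, y))  # exact fit -> 0.75
```

Unlike the usual inner product, this ℓ1 correlation is robust to a few grossly corrupted entries of y, which is why it appears in robust sparse recovery.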

Robust compressive sensing of sparse signals: a review

The rest of the methods are based either on nonconvex cost functions or nonconvex constraint sets, so only convergence to a local minimum can be guaranteed. Also note that all methods, except the coordinate descent methods ℓ1-CD and L-CD and the ℓ1-OMP method, do not need to form the sensing matrix explicitly; they only need functions that implement the matrix-vector multiplications by Φ and Φᵀ at each iteration. Thus, if fast implementations are available for such functions, the computational complexity of the algorithms can be largely reduced. On the other hand, the coordinate descent methods are not computationally efficient because only one coordinate is estimated at each iteration and an explicit representation of the matrix Φ is needed. However, these methods offer scalability when the sensing matrix is very large and can only be accessed one row per iteration.
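The matrix-free idea described above can be illustrated with a small numpy sketch (my own example, assuming a randomly row-subsampled orthonormal DFT as the sensing operator): only forward and adjoint functions are stored, never an explicit matrix, and the adjoint identity ⟨Φx, y⟩ = ⟨x, Φᵀy⟩ can be checked numerically.

```python
import numpy as np

# Matrix-free sensing operator: a randomly row-subsampled unitary DFT.
rng = np.random.default_rng(1)
n, m = 64, 16
idx = rng.choice(n, size=m, replace=False)   # sampled frequency rows

def Phi(x):
    """Forward operator: keep m entries of the orthonormal DFT of x."""
    return np.fft.fft(x, norm="ortho")[idx]

def Phi_adj(y):
    """Adjoint operator: embed y on the sampled rows, apply the inverse DFT."""
    z = np.zeros(n, dtype=complex)
    z[idx] = y
    return np.fft.ifft(z, norm="ortho")

# Sanity check of the adjoint pair: <Phi x, y> == <x, Phi' y>.
x = rng.standard_normal(n)
y = rng.standard_normal(m) + 1j * rng.standard_normal(m)
print(np.isclose(np.vdot(Phi(x), y), np.vdot(x, Phi_adj(y))))  # True
```

Both calls cost O(n log n) via the FFT, versus O(mn) for an explicit matrix multiply, which is exactly the complexity saving the review refers to.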

Bayesian Compressive Sensing for Cluster Structured Sparse Signals

The matrix Φ is generated as described at the beginning of this section. In the simulation, the sparsity S is varied from 1 to M in steps of 1, and for each sparsity level, 100 trials of cluster-structured sparse signals of length N with K = 2 (or K = 4) blocks are randomly generated. The CS measurements are then captured (noise free) by projecting the randomly generated sparse signal θ onto the sensing matrix Φ. BP, Block-CoSaMP, CoSaMP, BCS, and CluSS are each used to carry out the CS reconstruction, where the required parameters for some of the algorithms are optimally set to Ŝ = S, K̂ = K, and cluster size Ĵ = ⌊S/K⌋. Successful reconstruction is determined by the relative error e between the true signal and its estimate, declaring success if e < 10⁻² and failure otherwise. Finally, the success rate is calculated as the ratio of the number of success events to the total number of trials, and the results are depicted in Fig. 7.
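The signal model and success criterion of this experiment can be sketched as follows (function names and the placement strategy are my own; the paper only specifies N, S, K, Ĵ = ⌊S/K⌋, and the 10⁻² relative-error threshold):

```python
import numpy as np

rng = np.random.default_rng(2)

def cluster_sparse_signal(N, S, K):
    """Length-N signal whose S nonzeros form K disjoint contiguous blocks
    of size S // K at random positions (a sketch of the cluster model)."""
    theta = np.zeros(N)
    J = S // K                        # cluster size, as in J = floor(S / K)
    starts = []
    while len(starts) < K:
        s = int(rng.integers(0, N - J))
        if all(abs(s - t) >= J for t in starts):   # keep blocks disjoint
            starts.append(s)
    for s in starts:
        theta[s:s + J] = rng.standard_normal(J)
    return theta

def is_success(theta_true, theta_hat, tol=1e-2):
    """Success criterion from the excerpt: relative error below 1e-2."""
    err = np.linalg.norm(theta_hat - theta_true) / np.linalg.norm(theta_true)
    return err < tol

theta = cluster_sparse_signal(N=256, S=16, K=4)
print(np.count_nonzero(theta))  # 16 nonzeros, arranged in 4 blocks of 4
```

The success rate at a given sparsity level is then the fraction of the 100 trials for which `is_success` holds.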

One-Bit Compressive Sensing of Dictionary-Sparse Signals

arXiv:1606.07531v1 [cs.IT] 24 Jun 2016 R. Baraniuk, S. Foucart, D. Needell, Y. Plan, and M. Wootters Abstract One-bit compressive sensing has extended the scope of sparse recovery by showing that sparse signals can be accurately reconstructed even when their linear measurements are subject to the extreme quantization scenario of binary samples: only the sign of each linear measurement is maintained. Existing results in one-bit compressive sensing rely on the assumption that the signals of interest are sparse in some fixed orthonormal basis. However, in most practical applications, signals are sparse with respect to an overcomplete dictionary, rather than a basis. There has already been a surge of activity to obtain recovery guarantees under such a generalized sparsity model in the classical compressive sensing setting. Here, we extend the one-bit framework to this important model, providing a unified theory of one-bit compressive sensing under dictionary sparsity. Specifically, we analyze several different algorithms, based on convex programming and on hard thresholding, and show that, under natural assumptions on the sensing matrix (satisfied by Gaussian matrices), these algorithms can efficiently recover analysis-dictionary-sparse signals in the one-bit model.
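The one-bit measurement model is easy to simulate. The sketch below (my own toy example, not one of the paper's algorithms) takes sign-only Gaussian measurements of a sparse signal and forms a crude one-step hard-thresholding estimate: back-project the signs, keep the s largest entries, renormalize. Since signs carry no amplitude information, only the direction of the signal can be recovered.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, s = 100, 2000, 3

# s-sparse unit-norm signal (one-bit CS can only recover direction, not scale)
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)
x /= np.linalg.norm(x)

A = rng.standard_normal((m, n))
y = np.sign(A @ x)                  # one-bit measurements: signs only

# One-step hard-thresholding estimate: back-project, threshold, renormalize.
z = A.T @ y
xhat = np.zeros(n)
top = np.argsort(np.abs(z))[-s:]
xhat[top] = z[top]
xhat /= np.linalg.norm(xhat)

print(float(xhat @ x))  # correlation with the true direction, close to 1
```

With many more sign measurements than unknowns, the back-projection Aᵀy concentrates around a multiple of x, which is why even this single step recovers the direction well.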

Compressive Sensing and Combinatorial algorithms for image compression

6.1. Experimental results for solution A As was explained previously, CS does not provide any advantage when the signal is not sparse. It is therefore necessary to establish a threshold to decide when CS is used. The following table shows the resulting bit rates when different thresholds are used to transmit 8 and 5 bp respectively using solution A. Two different block sizes for the sub-blocks created when the wavelet transform is applied are considered (16x16 and 32x32); hence, it is necessary to obtain the optimal thresholds for both sizes. Note that there is no difference in the PSNR obtained when changing the threshold. This is because the reconstruction using CS must be perfect, as discussed in sections 4 and 5, and no error is possible when CS is not used, since in that case no compression is applied. Therefore, the lowest bit rate corresponds to the optimal threshold for the given image.

RZA NLMF algorithm based adaptive sparse sensing for realizing compressive sensing

…reconstruction algorithms have been proposed to find the suboptimal sparse solution. It is well known that CS provides a robust framework that can reduce the number of measurements required to estimate a sparse signal. Many nonlinear sparse sensing (NSS) algorithms and their variants have been proposed to deal with CS problems. They mainly fall into two basic categories: convex relaxation (e.g., basis pursuit de-noising (BPDN) [6]) and greedy pursuit (e.g., orthogonal matching pursuit (OMP) [7]). The above NSS-based CS methods suffer from either high complexity or low performance, especially in the low signal-to-noise ratio (SNR) regime. Indeed, it has been very hard to balance the trade-off between high complexity and good performance.
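The greedy-pursuit category can be made concrete with a minimal OMP sketch (a textbook-style implementation of the standard algorithm, not the RZA-NLMF method of this paper): at each step, pick the column most correlated with the residual, then re-fit by least squares on the chosen support.

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily select the column of A most
    correlated with the residual, then least-squares re-fit on the support."""
    n = A.shape[1]
    support = []
    x = np.zeros(n)
    r = y.copy()
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ r)))   # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef                     # re-fit on current support
        r = y - A @ x                         # update the residual
    return x

rng = np.random.default_rng(4)
m, n, s = 30, 60, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

x_hat = omp(A, A @ x_true, s)
print(np.linalg.norm(x_hat - x_true))  # near zero in the noiseless case
```

Each iteration costs one correlation and one small least-squares solve, which is the low-complexity appeal of greedy pursuit relative to convex relaxation.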

A Compressive Sensing Based Approach to Sparse Wideband Array Design

Department of Electronic and Electrical Engineering, University of Sheffield, UK {elp10mbh, w.liu}@sheffield.ac.uk Abstract—Sparse wideband sensor array design for sensor location optimisation is highly nonlinear, and it is traditionally solved by genetic algorithms, simulated annealing, or other similar optimisation methods. However, this is an extremely time-consuming process, and more efficient solutions are needed. In this work, this problem is studied from the viewpoint of compressive sensing, and a formulation based on a modified ℓ1 norm is derived.


Compressive Sensing for Cluster Structured Sparse Signals: Variational Bayes Approach

In this paper, we focus on cluster-structured sparse signals, whose significant coefficients appear in clustered blocks. This kind of sparsity pattern is exploited in many concrete applications, such as multi-band signals, gene expression levels, source localization in sensor networks, MIMO channel equalization, and magnetoencephalography [2], [6], [16]. Existing algorithms designed for cluster-structured sparse signals typically require a lot of pre-defined information (Tab. I), such as (a) the number of clusters; (b) the size of each cluster; (c) the positions of the clusters; (d) the number of significant coefficients (sparsity). However, these priors are rarely known in real applications, and thus a nonparametric recovery algorithm for cluster-structured sparse signals is appealing for practical problems.

Reconstruction for block-based compressive sensing of image with reweighted double sparse constraint

Since prior knowledge has a crucial influence on the performance of an image reconstruction algorithm, designing an effective regularization term is beneficial for making full use of image prior information and further improving the quality of the reconstructed image. Sparsity and nonlocal similarity, which are among the most important properties of images, are utilized to improve the quality of reconstructed images. Sparsity aims to represent the original image with few nonzero (or approximately zero) values; more specifically, it organizes the original image more sparsely in some domain [4, 5]. Currently, different predetermined transform bases, including the discrete cosine transform (DCT) and the discrete wavelet transform (DWT), have been used to exploit sparsity and derive reconstruction algorithms, such as smoothed projected Landweber BCS based on DCT (BCS-SPL-DCT) [6] and on DWT (BCS-SPL-DWT) [6]. Furthermore, to enrich the texture and structure in recovered images [7, 8], the multi-hypothesis (MH) prediction method [9], which exploits nonlocal similarities, was proposed. Sharing a similar idea, methods [10–12] that exploit nonlocal similarities to design local sparsifying transforms can achieve better recovery performance than algorithms previously designed for BCS without using nonlocal similarities. However, the recovered images still contain some visual artifacts.
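The role of a predetermined transform basis can be seen in a few lines (an illustrative sketch with a hand-built orthonormal DCT-II matrix and a synthetic smooth signal; the papers cited above operate on real image blocks): a smooth signal is not sparse in the pixel domain, but its DCT coefficients are highly compressible.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix, one of the predetermined transforms above."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    D[0] /= np.sqrt(2.0)  # scale the DC row so the matrix is orthonormal
    return D

n = 256
t = np.linspace(0, 1, n)
signal = np.exp(-3 * t) + 0.5 * np.cos(2 * np.pi * 2 * t)  # smooth "image row"
c = dct_matrix(n) @ signal

# Energy compaction: almost all signal energy sits in a handful of coefficients.
energy_top16 = np.sort(c**2)[-16:].sum() / (c**2).sum()
print(energy_top16 > 0.99)  # True: 16 of 256 coefficients dominate
```

This energy compaction is exactly what lets BCS-style algorithms penalize the transform coefficients instead of the pixels.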

Direct Application of Excitation Matrix as Sparse Transform for Analysis of Wide Angle EM Scattering Problems by Compressive Sensing

Abstract—When compressive sensing (CS) is employed to solve electromagnetic scattering problems over wide incident angles, the choice of sparse transform strongly affects the efficiency of the CS algorithm. Different sparse transforms require different numbers of measurements. Thus, constructing a highly efficient sparse transform is the most important task in CS-based electromagnetic scattering computation. Based on the linear relation between the current and excitation vectors over wide incident angles, we adopt the excitation matrix directly as the sparse transform to obtain a suitable sparse representation of the induced currents. The feasibility and basic principle of the algorithm are elaborated in detail, and the performance of the proposed sparse transform is validated by numerical results.

Analysis of Music Signal Compression with Compressive Sensing

This paper aims to bridge the gap between sparse representation and its effective utilization. A lot of work has been done on obtaining sparse representations, since they have the advantage that only a small amount of the data turns out to be nonzero. This reduces the burden on the transmitting end as well as the storage requirement. With single-F0 estimation, the presence of harmonics is detected. This can be done with the help of the Fast Fourier Transform (FFT), or the Discrete Fourier Transform (DFT) for discrete data. Single-F0 estimation algorithms are well developed, but their application to music signals is somewhat limited because most music signals contain several concurrent harmonics. Processing of music can also be done in a symbolic framework; most commonly, the musical instrument digital interface (MIDI) is applied as the input format. This kind of format exhibits several advantages over audio, since it is based on a considerably reduced amount of data while incorporating much higher-level information in the form of note events and orchestration.

Sparse Array Antenna Signal Reconstruction using Compressive Sensing for Direction of Arrival Estimation

Figure 26: Example of a correct reconstruction with maximally off-grid targets having random strengths varying between 0 and -10 dB. In this case the target strengths are, from left to right, -0.5 dB, -7.9 dB, and 0 dB, and λ = 10⁻¹⁰. 4.1.5 Analysis and discussion: recovery without noise For on-grid single-target cases, CS with the MFOCUSS algorithm is perfectly capable of estimating the DOA of a target correctly. When the number of targets K is increased, the probability of correct DOA estimation decreases, since the received signal becomes less sparse. The theoretical limit of K log(N) working elements needed to identify K incoming targets was verified with simulations for the noiseless case with equal-strength targets. For unequal-strength targets, the number of wrong reconstructions for K = 4 targets doubled, and for K = 3 an additional 0.5% of the reconstructions failed. Hence it can be said that targets with different strengths have a degrading effect on the performance of CS.
Show more

89 Read more

Comparative Study on Sparse and Recovery Algorithms for Antenna Measurement by Compressed Sensing

To address the shortcomings of traditional sampling theory and signal processing methods, this study explores the recovery performance of antenna far-field signals using different sparsity bases and reconstruction algorithms. Multiple experiments are conducted on various types of antennas. The results show that the far-field signal is recovered best under the Discrete Wavelet Transform (DWT) basis with the Compressive Sampling Matching Pursuit (CoSaMP) reconstruction algorithm, which improves measurement efficiency and offers greater advantages and more development potential than traditional antenna measurements.

Performance of compressive sensing algorithms over time varying frequency selective channel

A mobile environment leads to a time-varying, frequency-selective channel. Orthogonal Frequency Division Multiplexing (OFDM) can be combined with a Multiple Input Multiple Output (MIMO) system to increase the system capacity on a time-varying channel. Time-varying frequency-selective MIMO channel estimation demands a huge number of training signals, since the system has a huge number of channel coefficients. In practice, most channels are composed of a few dominant taps, while a large fraction of the taps are zero or approximately zero; these are often called sparse multipath channels. By exploiting the inherent sparsity of multipath fading channels, Compressive Sensing (CS) based channel estimation provides better estimation of a sparse channel than conventional estimation methods, which are suited to rich channels, and also greatly decreases the pilot overhead burden. This paper evaluates the performance of CS-based channel estimation methods for MIMO-OFDM systems over a time-varying frequency-selective channel.

Dimension reduction algorithms for near-optimal low-dimensional embeddings and compressive sensing

Secondly, motivated by compressive sensing of images, we examine linear embeddings of datasets containing points that are sparse in the pixel basis, with the goal of […]


Nonlocal tensor sparse representation and low-rank regularization for hyperspectral image compressive sensing reconstruction

4.6. Convergence Analysis Lastly, we conducted experiments to show the convergence of our method, using the Toy and Indian Pines datasets as examples under different sampling rates and different initializations. Figure 14 plots the PSNR versus iteration number for the tested HSIs at sampling rates of 0.10 for Toy and 0.15 for Indian Pines, using the initializations x = Φ*y and DCT. As can be seen, the different initializations provide quite close solutions, which indicates that the performance of the proposed algorithm is not sensitive to initialization. However, the two initializations possess different rates of convergence; the DCT initialization requires only a small number of iterations to reach the final PSNR. Therefore, we adopted the DCT-based initialization strategy to speed up our algorithm. The PSNR value becomes constant once the algorithm converges.
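The convergence metric used above is standard and easy to state in code (a generic PSNR helper, assuming a signal peak of 1.0; the paper's exact peak value for each HSI is not given in the excerpt):

```python
import numpy as np

def psnr(x, x_hat, peak=1.0):
    """Peak signal-to-noise ratio in dB, the per-iteration metric plotted
    in convergence curves like Figure 14."""
    mse = np.mean((np.asarray(x) - np.asarray(x_hat)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak**2 / mse)

x = np.array([0.0, 1.0])
print(round(psnr(x, np.array([0.0, 0.5])), 2))  # mse = 0.125 -> 9.03 dB
```

Plotting this value after every iteration yields exactly the kind of curve described: it rises and then levels off to a constant once the iterates stop changing.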

Hybridizing sparse component analysis with genetic algorithms for microarray analysis

GAs are stochastic global search and optimization methods inspired by natural biological evolution. The core of a GA is a population of potential solutions to a given optimization problem, called individuals, together with a set of operators borrowed from natural genetics. At each generation of a GA, a new set of approximations is created by selecting individuals according to their level of fitness in the problem domain; their reproduction is guided by the genetically motivated operators. This process leads to the evolution of individuals within the population which solve the optimization problem better than the individuals from which they were created. Finally, this process should lead to an optimal solution of the optimization problem even if many suboptimal solutions exist, i.e. if the target function to be optimized has many local minima.
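The generational loop just described can be sketched in a minimal real-valued GA (my own toy implementation, not the paper's hybrid method; the operator choices, tournament selection, blend crossover, Gaussian mutation, and elitism, are one common instantiation):

```python
import numpy as np

rng = np.random.default_rng(5)

def ga_minimize(cost, bounds, pop_size=40, generations=60, mut_sigma=0.3):
    """Minimal real-valued GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism (the current best always survives)."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=pop_size)
    for _ in range(generations):
        f = np.array([cost(p) for p in pop])

        def tournament():
            i, j = rng.integers(0, pop_size, 2)
            return pop[i] if f[i] < f[j] else pop[j]

        children = [pop[np.argmin(f)]]                 # elitism
        while len(children) < pop_size:
            w = rng.uniform()                          # blend crossover
            child = w * tournament() + (1 - w) * tournament()
            child += rng.normal(0.0, mut_sigma)        # mutation
            children.append(np.clip(child, lo, hi))
        pop = np.array(children)
    f = np.array([cost(p) for p in pop])
    return pop[np.argmin(f)]

# A target with many local minima but a single global minimum at x = 0,
# matching the "many local minima" scenario in the text.
best = ga_minimize(lambda x: x**2 + np.sin(8 * x) ** 2, bounds=(-5.0, 5.0))
print(abs(best) < 0.5)
```

Elitism makes the best cost monotonically non-increasing across generations, which is what lets the population escape the shallow local minima of the sin term while never losing the best candidate found so far.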

Compressive sensing in dynamic scenes

Ji et al. [20] consider recovering the original signal from compressive measurements from a Bayesian perspective. Whereas the other algorithms discussed here all provide a point estimate of the sparse state x, Bayesian Compressive Sensing (BCS) provides an estimated posterior density. With this, instead of providing only a point estimate of the reconstructed signal, BCS can also provide what the authors refer to as "error bars". As discussed in subsection 5.5.1, one of the challenges in combining CS with a particle filter (PF) is that CS only provides a point estimate, while a PF requires an initial distribution; therefore, some kind of transition has to be included. With Bayesian CS, however, the output could be used directly as the initial distribution for the PF, which would make the interplay between CS and a PF more natural.
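The "error bars" idea can be illustrated with the simplest Bayesian linear model (a generic conjugate-Gaussian sketch with a plain Gaussian prior; the BCS of Ji et al. uses a sparsity-promoting hierarchical prior, so this is only the shape of the idea): the posterior over x is Gaussian, so its mean is the point estimate and the diagonal of its covariance gives per-coefficient uncertainty.

```python
import numpy as np

rng = np.random.default_rng(6)

# Model: y = A x + noise, prior x ~ N(0, (1/alpha) I), noise ~ N(0, (1/beta) I).
# The posterior is Gaussian with covariance Sigma and mean mu below.
m, n = 50, 10
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
beta, alpha = 100.0, 1.0                       # noise / prior precisions
y = A @ x_true + rng.normal(0.0, 1 / np.sqrt(beta), m)

Sigma = np.linalg.inv(alpha * np.eye(n) + beta * A.T @ A)  # posterior covariance
mu = beta * Sigma @ A.T @ y                                # posterior mean
error_bars = np.sqrt(np.diag(Sigma))           # per-coefficient "error bars"

print(np.linalg.norm(mu - x_true))  # point estimate is close to the truth
print(error_bars.max())             # and the posterior is tight
```

It is exactly this full Gaussian (mean plus covariance), rather than the point estimate alone, that could seed a particle filter's initial distribution directly.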
