The rank minimization problem consists of finding the minimum-rank matrix in a convex constraint set. Though this problem is NP-hard even when the constraints are linear, a recent paper by Recht et al. [RFP] showed that most instances of the linearly constrained rank minimization problem can be solved in polynomial time as long as there are sufficiently many linearly independent constraints. Specifically, they showed that minimizing the nuclear norm (also known as the Ky Fan 1-norm or the trace norm) of the decision variable subject to the same affine constraints produces the lowest-rank solution if the affine space is selected at random. The nuclear norm of a matrix, equal to the sum of its singular values, can be optimized in polynomial time. This work initiated a groundswell of research: subsequently, Candès and Recht showed that the nuclear norm heuristic could be used to recover low-rank matrices from a sparse collection of entries [CR09], Ames and Vavasis have used similar techniques to provide average-case analysis of NP-hard combinatorial optimization problems [AV09], and Vandenberghe and Zhang have proposed novel algorithms for identifying linear systems [LV08]. Moreover, fast algorithms for solving large-scale instances of this heuristic have been developed by many groups [CCS08, LB09, MGC08, MJCD08, RFP]. These developments provide new strategies for tackling the rank minimization problems that arise in machine learning [YAU07, AMP08, RS05], control theory [BD98, EGG93, FHB01], and dimensionality reduction [LLR95, WS06, YELM07].
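To make the quantity concrete, here is a minimal pure-Python sketch (not from [RFP]) that computes the nuclear norm of a 2x2 matrix from the closed-form eigenvalues of A^T A; the function name and example matrix are illustrative only:

```python
import math

def nuclear_norm_2x2(A):
    """Nuclear norm of a 2x2 matrix: the sum of its singular values.

    The singular values are the square roots of the eigenvalues of
    A^T A, computed here with the closed-form 2x2 symmetric eigensolver.
    """
    (a, b), (c, d) = A
    # Entries of the symmetric matrix M = A^T A.
    m11 = a * a + c * c
    m12 = a * b + c * d
    m22 = b * b + d * d
    # Eigenvalues of M via the quadratic formula.
    mean = (m11 + m22) / 2.0
    disc = math.sqrt(((m11 - m22) / 2.0) ** 2 + m12 ** 2)
    eig1, eig2 = mean + disc, max(mean - disc, 0.0)
    return math.sqrt(eig1) + math.sqrt(eig2)

# A diagonal matrix has singular values 3 and 4, so the norm is 7.
print(nuclear_norm_2x2([[3.0, 0.0], [0.0, 4.0]]))  # 7.0
```

Unlike the rank, this quantity is convex in the matrix entries, which is what makes the heuristic tractable.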


Recently, motivated by compressive sensing (CS) and sparse recovery (SR) techniques used in radar, several authors have considered CS and SR ideas for moving target indication (MTI) and STAP problems, such as the sparse-recovery-based STAP (SR-STAP) algorithms in [6–10] and the L1-regularized STAP filters in [11–13]. The core idea in SR-STAP type algorithms is to regularize a linear inverse problem by including the prior knowledge that the signal of interest is sparse [10]. The work in [6–10] shows that SR-STAP type algorithms provide a high-resolution estimate of the clutter spectrum and exhibit significantly better performance than conventional STAP algorithms when only very few snapshots are available. However, their performance and computational complexity depend on the quality of the underlying SR algorithms.
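The sparsity-regularized inverse problem at the heart of such methods can be sketched generically. Below is a toy ISTA (iterative shrinkage-thresholding) solver for min 0.5*||y - A x||^2 + lam*||x||_1 — not the SR-STAP algorithms of [6–10]; all names are illustrative:

```python
def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return [max(abs(x) - t, 0.0) * (1.0 if x > 0 else -1.0) for x in v]

def ista(A, y, lam, step, iters=200):
    """Iterative shrinkage-thresholding for min 0.5*||y - A x||^2 + lam*||x||_1.

    A is a list of rows; step should be at most 1 / ||A||^2 for convergence.
    """
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Residual r = A x - y and gradient g = A^T r of the smooth term.
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = soft([x[j] - step * g[j] for j in range(n)], step * lam)
    return x

# With A = I the solution is just soft-thresholding of y: small entries vanish.
print(ista([[1.0, 0.0], [0.0, 1.0]], [5.0, 0.1], lam=1.0, step=0.5))
```

The l1 penalty drives small components exactly to zero, which is how the sparsity prior enters the reconstruction.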


This paper aims to bridge the gap between sparse representations and their effective utilization. Much work has been done on obtaining sparse representations, since they have the advantage that only a small fraction of the data is non-zero. This reduces the burden on the transmitting end as well as the storage requirement. With single-F0 estimation, the presence of a harmonic is detected; this can be done with the help of the Fast Fourier Transform (FFT), or the Discrete Fourier Transform (DFT) for discrete data. Single-F0 estimation algorithms have been developed, but their application to music signals is somewhat limited because most music signals contain several concurrent harmonics. Processing of music can also be done in a symbolic framework; most commonly, the musical instrument digital interface (MIDI) is used as the input format. This kind of format exhibits several advantages over audio, since it is based on a considerably reduced amount of data while incorporating much higher-level information in the form of note events and orchestration. However, its main limitation is that it loses some fine information available in audio signals, such as frequency and amplitude modulations and spectral envelopes, which may be valuable for other tasks.

As one of the most widely used methods in CEM, MoM is applied to different types of EM scattering problems. Many fast algorithms based on the traditional MoM, such as the fast multipole method (FMM) [4], the multi-level fast multipole algorithm (MLFMA) [5], and the adaptive integral method (AIM) [6], have been developed. However, when these algorithms are used to solve EM problems over wide ranges of incident angles, they still cannot avoid repeated calculations.

In recent years, various signal sampling schemes have been developed. However, such sampling methods can be difficult to implement, since before sampling the signal one needs sufficient information about the reconstruction kernel. The emerging compressive sensing theory shows that an unevenly sampled discrete signal can be perfectly reconstructed with high probability of success by using different optimization techniques, from far fewer random projections or measurements than the Nyquist standard requires. Amart Sulong et al. proposed a compressive sensing method that combines a randomized measurement matrix with a Wiener filter to reduce noise in speech signals and thereby produce a high signal-to-noise ratio [1]. Joel A. Tropp et al. demonstrated, theoretically and empirically, that Orthogonal Matching Pursuit (OMP) is an effective alternative to Basis Pursuit (BP) for signal recovery from random measurements [2]. Phu Ngoc Le et al. proposed an improved soft-thresholding method for DCT speech enhancement [3]. Vahid Abolghasemi focused on proper estimation of the measurement matrix for compressive sampling of the signal [4].
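As a rough illustration of the OMP idea referenced in [2], here is a toy pure-Python version that assumes the measurement columns are orthonormal, so the least-squares step of full OMP reduces to inner products; the function name and the tiny example are hypothetical:

```python
def omp_orthonormal(Phi, y, k):
    """Toy Orthogonal Matching Pursuit for k-sparse recovery.

    Assumes the columns of Phi are orthonormal, so the least-squares
    refit of full OMP reduces to a single inner product per column.
    Returns a dict {column index: coefficient} with at most k entries.
    """
    m, n = len(y), len(Phi[0])
    r = list(y)                      # residual starts at the measurement
    support = {}
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        scores = [abs(sum(Phi[i][j] * r[i] for i in range(m))) for j in range(n)]
        j = max(range(n), key=lambda idx: scores[idx])
        c = sum(Phi[i][j] * r[i] for i in range(m))
        support[j] = support.get(j, 0.0) + c
        # Subtract the chosen column's contribution from the residual.
        r = [r[i] - c * Phi[i][j] for i in range(m)]
    return support

# Identity measurement matrix: OMP should pick out the two active entries.
phi = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(omp_orthonormal(phi, [0.0, 5.0, -2.0], k=2))  # {1: 5.0, 2: -2.0}
```

Full OMP replaces the single inner product with a least-squares fit over the whole selected support, which is what makes it robust for non-orthogonal random measurement matrices.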

Experiments demonstrate that all Gibbs-sampler-based methods show comparable performance. The importance sampler was found to work nearly as well as the Gibbs sampler on smaller problems in terms of estimating the model parameters; however, it performed substantially worse at estimating the sparse coefficients. For large problems, we found that the combination of a subset selection heuristic with the Gibbs sampling approaches can outperform previously suggested methods. In addition, the methods studied here are flexible and allow the incorporation of additional prior knowledge, such as the non-negativity of the approximation coefficients, which was found to offer additional benefits where applicable.


Wireless sensor networks are usually placed in the field, e.g. seismic sensors, or fire, temperature, and humidity detectors in forests. These sensors are usually battery operated and cannot be easily replaced. Hence, an efficient data acquisition system is needed to optimize the data transferred from these sensors as well as to minimize their computational load, in order to increase battery life. Compressed sensing fits such situations very well, as it samples the signal of interest at a much lower rate than the Nyquist criterion requires and consequently reduces the computational burden.

For example, when a picture is taken with a digital camera, dozens of megabytes of data are collected. However, it turns out that when this image is transformed to, e.g., the wavelet domain (as is done by the well-known JPEG 2000 compression method), only a relatively small number of wavelet coefficients are large; the others are approximately zero. In other words, much of the information can be captured using a small number of wavelets. Mathematically speaking, the image is approximately sparse in the wavelet domain. As a result, only the large coefficients have to be saved, while the quality of the image reconstructed from these coefficients remains close to the original. While this is useful, it also raises the question of whether it is truly necessary to sample at a high rate if only a small amount of the data ends up being used in the final representation. Within the CS framework the answer to this question is ‘no’: with CS, signals that are sparse in some domain can be recovered from a number of measurements that is small compared to what the Nyquist rate suggests. Instead of sampling the original signal directly, only a compressed version of it is acquired.
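Transform-domain sparsity can be demonstrated with the orthonormal Haar wavelet, a simpler relative of the wavelets used in practical codecs; the piecewise-constant test signal is illustrative:

```python
import math

def haar(signal):
    """Orthonormal Haar transform (length must be a power of two)."""
    out = list(signal)
    n = len(out)
    while n > 1:
        half = n // 2
        # Pairwise averages (coarse part) and differences (detail part).
        avg = [(out[2 * i] + out[2 * i + 1]) / math.sqrt(2) for i in range(half)]
        dif = [(out[2 * i] - out[2 * i + 1]) / math.sqrt(2) for i in range(half)]
        out[:n] = avg + dif
        n = half
    return out

def inverse_haar(coeffs):
    """Invert the orthonormal Haar transform."""
    out = list(coeffs)
    n = 1
    while n < len(out):
        avg, dif = out[:n], out[n:2 * n]
        merged = []
        for a, d in zip(avg, dif):
            merged += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
        out[:2 * n] = merged
        n *= 2
    return out

# A piecewise-constant signal: 8 samples, but only 2 non-zero coefficients.
sig = [4.0, 4.0, 4.0, 4.0, 2.0, 2.0, 2.0, 2.0]
coeffs = haar(sig)
print(sum(1 for v in coeffs if abs(v) > 1e-9))  # 2
```

Storing just those two coefficients reproduces the signal exactly via `inverse_haar`, which is the compression effect the paragraph describes.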


Copyright © 2010 Jianping Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. Until now, papers on CS have usually assumed the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. The method is based on equiangular tight frame (ETF) design, because an ETF has minimum coherence. Solving the problem exactly is intractable because of its complexity; therefore, an alternating-minimization-type method is used to find a feasible solution. The optimally designed projection matrix can further reduce the number of samples necessary for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
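The mutual coherence being minimized can be sketched as follows; this generic helper is not the ETF-based optimization of the paper, and the names are illustrative:

```python
import math

def mutual_coherence(columns):
    """Largest absolute inner product between distinct normalized columns.

    `columns` is a list of column vectors. Smaller coherence means
    fewer samples are needed for sparse recovery to succeed.
    """
    def normalize(v):
        nrm = math.sqrt(sum(x * x for x in v))
        return [x / nrm for x in v]

    cols = [normalize(c) for c in columns]
    best = 0.0
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            ip = abs(sum(a * b for a, b in zip(cols[i], cols[j])))
            best = max(best, ip)
    return best

# Orthogonal columns have coherence 0; a 45-degree pair has 1/sqrt(2).
print(mutual_coherence([[1.0, 0.0], [1.0, 1.0]]))
```

An ETF makes all these pairwise inner products equal in magnitude, achieving the smallest possible value of this maximum, which is why it serves as the design target.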

Three further steps are then performed on the matrix. In step 1, the Floyd or Dijkstra shortest-path algorithm is applied to the values to recover sparse pairwise values. In step 2, the MDS algorithm is applied to the resultant matrix S`. The output of steps 1 and 2 gives the 3D relative coordinates of the nodes. Since this technique derives 3D coordinates for a single node, the next technique uses compressive sensing to derive the locations of multiple points. The next popular technique [8] was developed for a missile launch system rather than for PPDR or public service schemes. The algorithm is very simple and straightforward, approaching the problem through Received Signal Strength (RSS) parameters. The RSS values are stored in a sparse matrix for pinpointing multiple target locations, which are then extracted from the sparse values through an l1-minimization technique. Unlike the previously discussed techniques, which measured k-sparse representations, here RSS measurements in M-dimensional coordinates are measured accurately by convolving with the original received signal according to the equation below

the seminal work on stochastically fully connected conditional random fields (SFCRF), first proposed in [29], to facilitate this cross-domain optimization. SFCRFs are fully connected conditional random fields with stochastically defined cliques. Unlike traditional conditional random fields (CRF), where nodal interactions are deterministic and restricted to local neighborhoods, each node in the graph representing an SFCRF is connected to every other node in the graph, with the cliques for each node determined stochastically based on a probability distribution. Therefore, the number of pairwise cliques might not be the same as the number of neighborhood pairs as in traditional CRF models. By leveraging long-range nodal interactions in a stochastic manner, SFCRFs facilitate improved detail preservation while maintaining computational complexity similar to that of CRFs, which makes SFCRFs particularly enticing for improved sparse reconstruction of compressive sensing MRI. However, here the problem is to reconstruct an MRI image in the spatial domain while the available measurements are made in the k-space domain. Like most CRF models, SFCRFs cannot be leveraged directly for this purpose. Motivated by the significant potential benefits of using SFCRFs to improve the reconstruction quality of compressive sensing MRI, we extend the SFCRF model into a cross-domain stochastically fully connected conditional random field (CD-SFCRF) model that incorporates cross-domain information and constraints from the k-space and spatial domains to reconstruct the desired MRI image from sparse observations in k-space.


1.4. Relationship to previous research. In [1], Ailon and Chazelle propose the idea of a randomized Fourier transform followed by a random projection as a “fast Johnson-Lindenstrauss transform” (FJLT). The transform is decomposed as QFΣ, where Q is a sparse matrix whose non-zero entries have locations and values chosen at random. They show that the matrix QFΣ behaves like a random waveform matrix in that, with extremely high probability, it will not change the norm of an arbitrary vector too much. However, this construction requires that the number of non-zero entries in each row of Q be commensurate with the number of rows m of Q. Although ideal for dimensionality reduction of small point sets, this type of subsampling does not translate well to compressive sampling, as it would require us to randomly combine on the order of m samples of Hx_0 from arbitrary locations to
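As a rough illustration of the norm-preservation property (though without the fast QFΣ structure), a dense Gaussian random projection behaves similarly; everything here is an illustrative stand-in, not the FJLT construction:

```python
import math
import random

def random_projection(x, m, seed=0):
    """Project x into R^m with a dense Gaussian matrix scaled by 1/sqrt(m).

    A dense stand-in for structured Johnson-Lindenstrauss transforms:
    it preserves norms with high probability, but a matrix-vector
    product costs O(m n) instead of the near-linear time of the FJLT.
    """
    rng = random.Random(seed)
    y = []
    for _ in range(m):
        row = [rng.gauss(0.0, 1.0) for _ in x]
        y.append(sum(r * xi for r, xi in zip(row, x)) / math.sqrt(m))
    return y

def norm(v):
    return math.sqrt(sum(t * t for t in v))

x = [1.0] * 256
y = random_projection(x, 64)
print(norm(y) / norm(x))  # close to 1 for most seeds
```

The concentration sharpens as m grows, which is the sense in which such a matrix "will not change the norm of an arbitrary vector too much."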


The main results established were that a clear formulation of the approximation problem leads to the existence of a unique approximating algebraic form which determines the polynomial co[r]


Classification is one of the most fundamental data analysis techniques and is of great importance in mining and making use of large sparse data. For example, a company may need to classify the profitability of a product or service based on sparse user feedback vectors. For the public datasets shown in Table 1.1, various types of data labels (shown in the last column) are also provided for building interesting and useful classifiers. Moreover, much previous work in the literature has demonstrated that sparse data are highly useful for predictive modeling. For example, Brian et al. [14] have demonstrated that classifying user web browsing data, which is high-dimensional and sparse, is an effective solution for online display advertising. Kosinski et al. [50] have used Facebook likes, which are sparse atomic behavioral data, to accurately predict the personality traits of individual users. The same type of data has also been used by De Cnudde et al. [15] for improving micro-finance credit scoring. Meanwhile, large and sparse fine-grained transactional (invoicing) data have been used by Junqué de Fortuny et al. [41] to build effective linear classification models for corporate residence fraud detection. McMahan et al. [63] have demonstrated how extremely sparse data and linear classification can solve advertisement click prediction tasks in industry. Martens et al. [61] have used massive, sparse consumer payments data to build linear predictive models for targeted marketing.
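A common way to exploit such sparsity in linear classification is to store only the non-zero features, so prediction cost scales with the number of non-zeros rather than the full dimension; a minimal sketch with hypothetical feature names:

```python
def sparse_dot(w, x):
    """Dot product of two sparse vectors stored as {feature: value} dicts."""
    if len(x) < len(w):
        w, x = x, w          # iterate over the smaller dict
    return sum(v * x.get(k, 0.0) for k, v in w.items())

def predict(w, bias, x):
    """Linear classifier decision; never touches the zero features."""
    return 1 if sparse_dot(w, x) + bias > 0 else -1

# Hypothetical weights over a huge feature space; only a few are non-zero.
weights = {"clicked_ad_17": 2.0, "visited_pricing": 1.5, "bounced": -3.0}
print(predict(weights, -1.0, {"visited_pricing": 1.0}))  # 1
```

This dictionary-of-non-zeros layout is the same idea behind the sparse matrix formats used by large-scale linear learners on behavioral and transactional data.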


CS differs from classical sampling in three important respects. First, CS is a mathematical theory focused on measuring finite-dimensional vectors in R^N. Second, CS systems typically acquire measurements in the form of inner products between the signal and more general test functions. Third, signal recovery is typically achieved using highly nonlinear methods. In short, CS enables a potentially large reduction in the sampling and computation costs for sensing signals that have a sparse or compressible representation.


The CS theory [2], [3] exploits the knowledge that the signals or images acquired are sparse in some known transform domain, which means that they are compressible. Such compressible signals can be reconstructed accurately from a significantly smaller number of measurements, at a far lower data sampling rate than the Nyquist/Shannon rate required for sampling the original signals [4]. Therefore, CS theory can reduce the sampling rate, storage volume, power consumption, and computational complexity in signal and image processing and related research fields.

that promises to effectively recover a sparse signal from far fewer measurements than its dimension. The compressive sampling theory assures an almost exact recovery of a sparse signal if the signal is sensed randomly and the number of measurements taken is proportional to the sparsity level times a log factor of the signal dimension. Encouraged by this emerging technique, this paper briefly reviews the application of compressive sampling in speech processing. It comprises a basic study of the two necessary conditions of compressive sensing theory: sparsity and incoherence. In this paper, various sparsity domains and sensing matrices for speech signals, and the different pairs that satisfy the incoherence condition, are compiled.

is fulfilled, where R_it^j is the residual of the current iterate U_it^j in the solver. Since the problems in (13) completely decouple, we can easily parallelize their solution process by a parfor loop in Matlab. In case an individual problem becomes very expensive, we further implemented a distributed-memory parallelization for the tensor product AMG based on Matlab's distributed function. Thereby, we overcome the limitation that no multi-core parallelization exists for sparse matrix-vector products in Matlab.


We have proposed an estimation scheme for gradients in high dimensions that combines ideas from Spall's SPSA with compressive sensing and thereby tries to economize on the number of function evaluations. This has theoretical justification in the results of (Austin, 2016). Our method can be extremely useful when function evaluation is very expensive, e.g., when a single evaluation is the output of a long simulation, a situation that does not seem to have been addressed much in the literature. In very high dimensional problems with a sparse gradient, computing estimates of the partial derivatives in every direction is inefficient because of the large number of function evaluations needed. SP simplifies the problem of repeated function evaluation by concentrating on a single random direction at each step. When the gradient vectors in such cases live in a lower dimensional subspace, it also makes sense to exploit ideas from compressive sensing. We have computed the error bound in this case and have also shown theoretically that this kind of gradient estimation works well with high probability for gradient descent problems and in other high dimensional problems, such as estimating the EGOP in manifold learning, where gradients are actually low-dimensional and gradient estimation is relevant. Simulations show that our method works much better than pure SP.
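For reference, the classical SPSA estimator (without the compressive-sensing refinement proposed here) needs only two function evaluations per estimate regardless of dimension; a minimal sketch with illustrative names:

```python
import random

def spsa_gradient(f, x, c=1e-3, seed=0):
    """One SPSA gradient estimate: two function evaluations total.

    Perturbs all coordinates at once with a random +/-1 vector, so the
    cost is independent of the dimension. The estimate is unbiased up
    to O(c^2) but noisy; in practice several estimates are averaged.
    """
    rng = random.Random(seed)
    delta = [rng.choice((-1.0, 1.0)) for _ in x]
    f_plus = f([xi + c * di for xi, di in zip(x, delta)])
    f_minus = f([xi - c * di for xi, di in zip(x, delta)])
    scale = (f_plus - f_minus) / (2.0 * c)
    return [scale / di for di in delta]

# For f(x) = x[2], the third component of the estimate is exactly 1.
g = spsa_gradient(lambda v: v[2], [0.0] * 10)
print(g[2])  # 1.0
```

The other components carry the estimation noise; it is exactly this noise that a sparsity-aware reconstruction of the gradient, as proposed above, seeks to suppress.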
