
RESEARCH    Open Access

A new bound on the block restricted isometry constant in compressed sensing

Yi Gao¹* and Mingde Ma²

*Correspondence: gaoyimh@163.com
¹School of Mathematics and Information Science, Beifang University of Nationalities, Wenchang Road, Yinchuan, 750021, China
Full list of author information is available at the end of the article

Abstract

This paper focuses on a sufficient condition for block sparse recovery via the l2/l1-minimization. We show that if the measurement matrix satisfies the block restricted isometry property with δ_{2s|I} < 0.6246, then every block s-sparse signal can be exactly recovered via the l2/l1-minimization approach in the noiseless case and is stably recovered in the noisy measurement case. The result improves the bound on the block restricted isometry constant δ_{2s|I} of Lin and Li (Acta Math. Sin. Engl. Ser. 29(7):1401-1412, 2013).

Keywords: compressed sensing; l2/l1-minimization; block sparse recovery; block restricted isometry property; null space property

1 Introduction

Compressed sensing [2-4] is a scheme which shows that some signals can be reconstructed from fewer measurements compared to the classical Nyquist-Shannon sampling method. This effective sampling method has a number of potential applications in signal processing, as well as other areas of science and technology. Its essential model is

min_{x∈R^N} ‖x‖₀  s.t.  y = Ax, (1)

where ‖x‖₀ denotes the number of non-zero entries of the vector x, and an s-sparse vector x ∈ R^N is defined by ‖x‖₀ ≤ s ≪ N. However, the l0-minimization (1) is a nonconvex and NP-hard optimization problem [5] and thus is computationally infeasible. To overcome this problem, one proposed the l1-minimization [3, 6-8]:

min_{x∈R^N} ‖x‖₁  s.t.  y = Ax, (2)

where ‖x‖₁ = Σ_{i=1}^N |x_i|. Candès [10] proved that the solutions to (2) are equivalent to those of (1) provided that the measurement matrices satisfy the restricted isometry property (RIP) [9, 10] with some definite restricted isometry constant (RIC) δ_s ∈ (0, 1); here δ_s is defined as the smallest constant satisfying

(1 − δ_s)‖x‖₂² ≤ ‖Ax‖₂² ≤ (1 + δ_s)‖x‖₂² (3)

for any s-sparse vector x ∈ R^N.
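As an illustrative aside (our addition, not part of the original argument): random Gaussian matrices with entries of variance 1/n satisfy the RIP with high probability [11], because ‖Ax‖₂² concentrates around ‖x‖₂² for sparse x. A minimal sketch, with illustrative sizes:

```python
import math
import random

random.seed(0)

n, N, s = 128, 512, 8  # illustrative sizes, not from the paper

# Gaussian measurement matrix with entry variance 1/n, a standard RIP ensemble [11].
A = [[random.gauss(0.0, 1.0 / math.sqrt(n)) for _ in range(N)] for _ in range(n)]

# A fixed s-sparse unit-norm vector x.
x = [0.0] * N
for i in range(s):
    x[i * (N // s)] = 1.0 / math.sqrt(s)

Ax = [sum(A[r][c] * x[c] for c in range(N)) for r in range(n)]
ratio = sum(v * v for v in Ax)  # equals ||Ax||_2^2 / ||x||_2^2 since ||x||_2 = 1

# ratio is close to 1, i.e. (1 - delta)||x||^2 <= ||Ax||^2 <= (1 + delta)||x||^2
print(ratio)
```

For one fixed x the ratio is typically within a few percent of 1; the RIP requires this uniformly over all s-sparse x, which is what the probabilistic argument in [11] provides.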


However, standard compressed sensing only considers the sparsity of the recovered signal; it does not take into account any further structure. In many practical applications, for example, DNA microarrays [12], face recognition [13], color imaging [14], image annotation [15], multi-response linear regression [16], etc., the non-zero entries of a sparse signal can be aligned or classified into blocks, which means that they appear in regions in a regular order instead of being arbitrarily spread throughout the vector. Such signals are called block sparse signals and have attracted considerable interest; see [17-25] for more information.

Suppose that x ∈ R^N is split into m blocks, x[1], x[2], . . . , x[m], which are of lengths d₁, d₂, . . . , d_m, respectively, that is,

x = [x₁, . . . , x_{d₁}, x_{d₁+1}, . . . , x_{d₁+d₂}, . . . , x_{N−d_m+1}, . . . , x_N]ᵀ, (4)

where the first d₁ entries form x[1], the next d₂ entries form x[2], and so on, and N = Σ_{i=1}^m d_i. A vector x ∈ R^N is called block s-sparse over I = {d₁, d₂, . . . , d_m} if x[i] is non-zero for at most s indices i [18]. Obviously, when d_i = 1 for each i, block sparsity reduces to the conventional definition of a sparse vector.

x,=

m

i=

I x[i] ,

whereI(x) is an indicator function that equals  ifx>  and  otherwise. So a blocks -sparse vectorxcan be defined byx,≤s, andx=

m

i=x[i]. Also, letsdenote

the set of all blocks-sparse vectors:s={x∈RN:x,≤s}.

To recover a block sparse signal, similar to the standard l0-minimization, one seeks the sparsest block sparse vector via the following l2/l0-minimization [17, 18, 24]:

min_{x∈R^N} ‖x‖_{2,0}  s.t.  y = Ax. (5)

But the l2/l0-minimization problem is also NP-hard. It is natural to use the l2/l1-minimization to replace it [1, 17, 18, 24]:

min_{x∈R^N} ‖x‖_{2,1}  s.t.  y = Ax, (6)

where

‖x‖_{2,1} = Σ_{i=1}^m ‖x[i]‖₂. (7)

To characterize the performance of this method, Eldar and Mishali [18] proposed the block restricted isometry property (block RIP).

Definition (Block RIP) Given a matrixA∈Rn×N, for every blocks-sparsex∈RN over

I={d,d, . . . ,dm}, there exists a positive constant  <δs|I< , such that

(3)

then the matrixAsatisfies thes-order block RIP overI, and the smallest constantδs|I satisfying the above inequality () is called the block RIC ofA.

Obviously, the block RIP is an extension of the standard RIP, but it is a less stringent requirement compared to the standard RIP [17, 18]. Eldar et al. [17] proved that the l2/l1-minimization can exactly recover any block s-sparse signal when the measurement matrix A satisfies the block RIP with δ_{2s|I} < 0.414. The block RIC can be improved; for example, Lin and Li [1] improved the bound to δ_{2s|I} < 0.4931, and established another sufficient condition δ_{s|I} < 0.307 for exact recovery. So far, to the best of our knowledge, there is no paper that further focuses on improving the block RIC. As mentioned in [1, 26, 27], as with the RIC, there are several benefits to improving the bound on δ_{2s|I}. First, it allows more measurement matrices to be used in compressed sensing. Secondly, for the same matrix A, it allows recovering a block sparse signal with more non-zero entries. Furthermore, it gives better error estimation in the general problem of recovering noisy compressible signals. Therefore, this paper addresses improvement of the block RIC. We consider the following minimization for the inaccurate measurement y = Ax + e with ‖e‖₂ ≤ ε:

min_{x∈R^N} ‖x‖_{2,1}  s.t.  ‖y − Ax‖₂ ≤ ε. (9)

Our main result is stated in the following theorem.

Theorem 1 Suppose that the 2s block RIC of the matrix A ∈ R^{n×N} satisfies

δ_{2s|I} < 4/√41 ≈ 0.6246. (10)

If x* is a solution to (9), then there exist positive constants C₁, D₁ and C₂, D₂ such that

‖x − x*‖_{2,1} ≤ C₁ σ_s(x)_{2,1} + D₁ √s ε, (11)

‖x − x*‖₂ ≤ (C₂/√s) σ_s(x)_{2,1} + D₂ ε, (12)

where the constants C,Dand C,Ddepend only onδs|I,written as

C=

(

 –δs|I+ δs|I)

 –δs|I– δs|I

, ()

D=

 +δs|I

 –δs|I– δs|I

, ()

C=

(

 –δs|I+ δs|I)

( –δs|Iδs|I)(

 –δs|I– δs|I)

, ()

D=

 +δs|I(

 –δs|I+δs|I)

(

 –δs|Iδs|I)(

 –δs|I– δs|I)


andσs(x),denotes the best block s-term approximation error of x∈RN in l/lnorm,i.e.,

σs(x),:= inf

zs

xz,. ()

Corollary  Under the same assumptions as in Theorem,suppose that e= and x is

block s-sparse,then x can be exactly recovered via the l/l-minimization().

The remainder of the paper is organized as follows. In Section 2, we introduce the l_{2,1} robust block NSP, which characterizes the stability and robustness of the l2/l1-minimization with noisy measurement (9). In Section 3, we show that condition (10) implies the l_{2,1} robust block NSP, which completes the proof of our main result. Section 4 contains our conclusions. The last section is an appendix including an important lemma.

2 Block null space property

The null space property (NSP) is a very important concept in approximation theory [28, 29]: it provides a necessary and sufficient condition for the existence and uniqueness of the solution to the l1-minimization (2), so the NSP has drawn extensive attention in studying the characterization of measurement matrices in compressed sensing [30]. It is natural to extend the classic NSP to the block sparse case. For this purpose, we introduce some notation. Suppose that x ∈ R^N is an m-block signal whose structure is like (4). We set S ⊂ {1, 2, . . . , m}, and by S^C we mean the complement of the set S with respect to {1, 2, . . . , m}, i.e., S^C = {1, 2, . . . , m} \ S. Let x_S denote the vector equal to x on a block index set S and zero elsewhere; then x = x_S + x_{S^C}. Here, to investigate the solution to the model (9), we introduce the l_{2,1} robust block NSP; for more information on other forms of the block NSP, we refer the reader to [20, 23].

Definition (l,robust block NSP) Given a matrixA∈Rn×N, for any setS⊂ {, , . . . ,m}

withcard(S)≤sand for allv∈RN, if there exist constants  <τ<  andγ > , such that vS≤

τ

svSC,+γAv, ()

then the matrixAis said to satisfy thel,robust block NSP of orderswithτandγ.

Our main result relies heavily on this definition. A natural question is what the relationship is between this robust block NSP and the block RIP. Indeed, from the next section we shall see that the block RIP with condition (10) leads to the l_{2,1} robust block NSP; that is, the l_{2,1} robust block NSP is weaker than the block RIP to some extent. This definition first implies the following theorem.

Theorem 2 For any set S ⊂ {1, 2, . . . , m} with card(S) ≤ s, if the matrix A ∈ R^{n×N} satisfies the l_{2,1} robust block NSP of order s with constants 0 < τ < 1 and γ > 0, then, for all vectors x, z ∈ R^N,

‖x − z‖_{2,1} ≤ ((1 + τ)/(1 − τ))(‖z‖_{2,1} − ‖x‖_{2,1} + 2‖x_{S^C}‖_{2,1}) + (2γ√s/(1 − τ))‖A(x − z)‖₂. (19)

Proof For x, z ∈ R^N, setting v = x − z, we have

‖v_{S^C}‖_{2,1} ≤ ‖x_{S^C}‖_{2,1} + ‖z_{S^C}‖_{2,1},

‖x‖_{2,1} = ‖x_{S^C}‖_{2,1} + ‖x_S‖_{2,1} ≤ ‖x_{S^C}‖_{2,1} + ‖v_S‖_{2,1} + ‖z_S‖_{2,1},

which yield

‖v_{S^C}‖_{2,1} ≤ 2‖x_{S^C}‖_{2,1} + ‖z‖_{2,1} − ‖x‖_{2,1} + ‖v_S‖_{2,1}. (20)

Clearly, for an m-block vector x ∈ R^N whose structure is like (4), the l₂-norm ‖x‖₂ can be rewritten as

‖x‖₂ = ‖x‖_{2,2} = (Σ_{i=1}^m ‖x[i]‖₂²)^{1/2}. (21)

Thus we have ‖v_S‖_{2,2} = ‖v_S‖₂ and ‖v_S‖_{2,1} ≤ √s ‖v_S‖₂. So the l_{2,1} robust block NSP implies

‖v_S‖_{2,1} ≤ τ‖v_{S^C}‖_{2,1} + γ√s‖Av‖₂. (22)

Combining (20) with (22), we can get

‖v_{S^C}‖_{2,1} ≤ (1/(1 − τ))(2‖x_{S^C}‖_{2,1} + ‖z‖_{2,1} − ‖x‖_{2,1}) + (γ√s/(1 − τ))‖Av‖₂.

Using (22) once again, we derive

‖v‖_{2,1} = ‖v_{S^C}‖_{2,1} + ‖v_S‖_{2,1} ≤ (1 + τ)‖v_{S^C}‖_{2,1} + γ√s‖Av‖₂
≤ ((1 + τ)/(1 − τ))(2‖x_{S^C}‖_{2,1} + ‖z‖_{2,1} − ‖x‖_{2,1}) + (2γ√s/(1 − τ))‖Av‖₂,

which is the desired inequality.

Thel,robust block NSP is vital to characterize the stability and robustness of thel/l

-minimization with noisy measurement (), which is the following result.

Theorem  Suppose that the matrix A∈Rn×N satisfies the l

,robust block NSP of order s with constants <τ < andγ > ,if xis a solution to the l/l-minimization with y= Ax+e ande≤,then there exist positive constants C,Dand C,D,and we have

xx,Cσs(x),+D

s, ()

xxC√

sσs(x),+D, ()

where

C=

( +τ)

 –τ ; D= γ

 –τ. ()

C=

( +τ)

 –τ ; D=

γ( +τ)


Proof In Theorem , bySdenote an index set ofslargestl-norm terms out ofmblocks

in x, () is a direct corollary of Theorem  if we notice that xSC, =σs(x), and

A(xx∗)≤. Equation () is a result of Theorem  forq=  in [].

3 Proof of the main result

From Theorem 3, we see that inequalities (23) and (24) are the same as (11) and (12) up to constants, respectively. This means that we only need to show that condition (10) implies the l_{2,1} robust block NSP in order to complete the proof of our main result.

Theorem 4 Suppose that the 2s block RIC of the matrix A ∈ R^{n×N} obeys (10). Then the matrix A satisfies the l_{2,1} robust block NSP of order s with constants 0 < τ < 1 and γ > 0, where

τ = 4δ_{2s|I} / (4√(1 − δ_{2s|I}²) − δ_{2s|I}),  γ = 4√(1 + δ_{2s|I}) / (4√(1 − δ_{2s|I}²) − δ_{2s|I}). (27)

Proof The proof relies on a technique introduced in [26]. Suppose that the matrix A has the block RIP with δ_{2s|I}. Let v be divided into m blocks whose structure is like (4). Let S := S₀ be an index set of the s largest l₂-norm blocks of v. We then divide S^C into subsets of size s: S₁ indexes the s largest l₂-norm blocks in S^C, S₂ the next s largest, and so on. Since the vector v_S is block s-sparse, according to the block RIP there exists t with |t| ≤ δ_{2s|I} such that

‖Av_S‖₂² = (1 + t)‖v_S‖₂². (28)

We are going to establish that, for any j ≥ 1,

|⟨Av_S, Av_{S_j}⟩| ≤ √(δ_{2s|I}² − t²) ‖v_S‖₂ ‖v_{S_j}‖₂. (29)

To do so, we normalize the vectors v_S and v_{S_j} by setting u := v_S/‖v_S‖₂ and w := v_{S_j}/‖v_{S_j}‖₂. Then, for α, β > 0, we write

⟨Au, Aw⟩ = (1/(2(α + β)))(‖A(αu + w)‖₂² − ‖A(βu − w)‖₂²) − ((α − β)/2)‖Au‖₂². (30)

Since u and w are unit vectors with disjoint block supports, ‖αu + w‖₂² = α² + 1 and ‖βu − w‖₂² = β² + 1, and both αu + w and βu − w are block 2s-sparse. By the block RIP, on the one hand, we have

⟨Au, Aw⟩ ≤ (1/(2(α + β)))[(1 + δ_{2s|I})‖αu + w‖₂² − (1 − δ_{2s|I})‖βu − w‖₂²] − ((α − β)/2)(1 + t)‖u‖₂²
= (1/(2(α + β)))[α²(δ_{2s|I} − t) + β²(δ_{2s|I} + t) + 2δ_{2s|I}].

Making the choice α = √((δ_{2s|I} + t)/(δ_{2s|I} − t)), β = √((δ_{2s|I} − t)/(δ_{2s|I} + t)), we derive

⟨Au, Aw⟩ ≤ √(δ_{2s|I}² − t²). (31)


On the other hand, we also have

⟨Au, Aw⟩ ≥ (1/(2(α + β)))[(1 − δ_{2s|I})‖αu + w‖₂² − (1 + δ_{2s|I})‖βu − w‖₂²] − ((α − β)/2)(1 + t)‖u‖₂²
= −(1/(2(α + β)))[α²(δ_{2s|I} + t) + β²(δ_{2s|I} − t) + 2δ_{2s|I}].

Making the choice α = √((δ_{2s|I} − t)/(δ_{2s|I} + t)), β = √((δ_{2s|I} + t)/(δ_{2s|I} − t)), we get

⟨Au, Aw⟩ ≥ −√(δ_{2s|I}² − t²). (32)

Combining () with () yields the desired inequality (). Next, noticing thatAvS=A(v

j≥AvSj), we have

AvS=AvS,Av

j≥

AvS,AvSj

AvSAv+

j≥

δ

s|ItvSvSj

=vS

 +tAv+

δs|It

j≥ vSj

. ()

According to Lemma A. and the setting ofSj, we have

j≥

vSj,≤

j≥

svSj,+

s

vSj[] – vSj[s] 

≤√

svSC,+

s

vS[] 

≤√

svSC,+

vS. ()

Substituting () into () and noticing (), we also have

( +t)vS≤

 +tAv+

δ s|It

s vSC,+

δ s|It

vS,

that is,

vS≤

δs|It

s( +t) vSC,+

δs|It

( +t) vS+ 

 +tAv.

Let

f(t) = √(δ_{2s|I}² − t²)/(1 + t);

then it is not difficult to conclude that f(t) attains its maximum at the point t₀ = −δ_{2s|I}² in the closed interval [−δ_{2s|I}, δ_{2s|I}], so for |t| ≤ δ_{2s|I} we have

f(t) = √(δ_{2s|I}² − t²)/(1 + t) ≤ δ_{2s|I}/√(1 − δ_{2s|I}²). (35)

Therefore, also using 1/√(1 + t) ≤ 1/√(1 − δ_{2s|I}), we obtain

‖v_S‖₂ ≤ (δ_{2s|I}/√(1 − δ_{2s|I}²))(1/√s)‖v_{S^C}‖_{2,1} + (δ_{2s|I}/(4√(1 − δ_{2s|I}²)))‖v_S‖₂ + (1/√(1 − δ_{2s|I}))‖Av‖₂,

that is,

‖v_S‖₂ ≤ (4δ_{2s|I}/(4√(1 − δ_{2s|I}²) − δ_{2s|I}))(1/√s)‖v_{S^C}‖_{2,1} + (4√(1 + δ_{2s|I})/(4√(1 − δ_{2s|I}²) − δ_{2s|I}))‖Av‖₂,

where the last step uses √(1 − δ_{2s|I}²)/√(1 − δ_{2s|I}) = √(1 + δ_{2s|I}).

Here, we require

4√(1 − δ_{2s|I}²) − δ_{2s|I} > 0,  4δ_{2s|I}/(4√(1 − δ_{2s|I}²) − δ_{2s|I}) < 1, (36)

which implies 41δ_{2s|I}² < 16, that is, δ_{2s|I} < 4/√41 ≈ 0.6246.

Remark  Substituting () into () and (), we can obtain the constants in Theorem .

Remark  Our result improves that of [], that is, the bound of block RICδs|Iis improved

from . to ..

4 Conclusions

In this paper, we gave a new bound on the block RIC, δ_{2s|I} < 0.6246. Under this bound, every block s-sparse signal can be exactly recovered via the l2/l1-minimization approach in the noiseless case and is stably recovered in the noisy measurement case. The result improves the bound on the block RIC δ_{2s|I} in [1].

Appendix

Lemma A.1 Suppose that v ∈ R^N is split into m blocks, v[1], v[2], . . . , v[m], of lengths d₁, d₂, . . . , d_m, respectively, that is,

v = [v₁, . . . , v_{d₁}, v_{d₁+1}, . . . , v_{d₁+d₂}, . . . , v_{N−d_m+1}, . . . , v_N]ᵀ, (A.1)

where the first d₁ entries form v[1], the next d₂ entries form v[2], and so on. Suppose that the m blocks of v are arranged in nonincreasing order of their l₂-norms, ‖v[1]‖₂ ≥ ‖v[2]‖₂ ≥ · · · ≥ ‖v[m]‖₂ ≥ 0. Then

(‖v[1]‖₂² + · · · + ‖v[m]‖₂²)^{1/2} ≤ (‖v[1]‖₂ + · · · + ‖v[m]‖₂)/√m + (√m/4)(‖v[1]‖₂ − ‖v[m]‖₂). (A.2)

Proof See Lemma . in [] for the details.

Acknowledgements

This work was supported by the Science Research Project of Ningxia Higher Education Institutions of China (NGY2015152), the National Natural Science Foundation of China (NSFC) (11561055), and the Science Research Project of Beifang University of Nationalities (2016SXKY07).

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The two authors contributed equally to this work, and they read and approved the final manuscript.

Author details

¹School of Mathematics and Information Science, Beifang University of Nationalities, Wenchang Road, Yinchuan, 750021, China.
²Editorial Department of University Journal, Beifang University of Nationalities, Wenchang Road, Yinchuan, 750021, China.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Received: 28 April 2017 Accepted: 10 July 2017

References

1. Lin, J, Li, S: Block sparse recovery via mixed l2/l1 minimization. Acta Math. Sin. Engl. Ser. 29(7), 1401-1412 (2013)
2. Donoho, D: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289-1306 (2006)
3. Candès, E, Romberg, J, Tao, T: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489-509 (2006)
4. Candès, E, Tao, T: Near-optimal signal recovery from random projections: universal encoding strategies. IEEE Trans. Inf. Theory 52(12), 5406-5425 (2006)
5. Natarajan, BK: Sparse approximate solutions to linear systems. SIAM J. Comput. 24, 227-234 (1995)
6. Donoho, D, Huo, X: Uncertainty principles and ideal atomic decompositions. IEEE Trans. Inf. Theory 47(4), 2845-2862 (2001)
7. Elad, M, Bruckstein, A: A generalized uncertainty principle and sparse representation in pairs of bases. IEEE Trans. Inf. Theory 48(9), 2558-2567 (2002)
8. Gribonval, R, Nielsen, M: Sparse representations in unions of bases. IEEE Trans. Inf. Theory 49(5), 3320-3325 (2003)
9. Candès, E, Tao, T: Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203-4215 (2005)
10. Candès, E: The restricted isometry property and its implications for compressed sensing. C. R. Math. Acad. Sci. Paris, Sér. I 346, 589-592 (2008)
11. Baraniuk, R, Davenport, M, DeVore, R, Wakin, M: A simple proof of the restricted isometry property for random matrices. Constr. Approx. 28, 253-263 (2008)
12. Parvaresh, F, Vikalo, H, Misra, S, Hassibi, B: Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays. IEEE J. Sel. Top. Signal Process. 2(3), 275-285 (2008)
13. Elhamifar, E, Vidal, R: Block-sparse recovery via convex optimization. IEEE Trans. Signal Process. 60(8), 4094-4107 (2012)
14. Majumdar, A, Ward, R: Compressed sensing of color images. Signal Process. 90, 3122-3127 (2010)
15. Huang, J, Huang, X, Metaxas, D: Learning with dynamic group sparsity. In: IEEE 12th International Conference on Computer Vision, pp. 64-71 (2009)
16. Simila, T, Tikka, J: Input selection and shrinkage in multiresponse linear regression. Comput. Stat. Data Anal. 52, 406-422 (2007)
17. Eldar, Y, Kuppinger, P, Bolcskei, H: Block-sparse signals: uncertainty relations and efficient recovery. IEEE Trans. Signal Process. 58(6), 3042-3054 (2010)
18. Eldar, Y, Mishali, M: Robust recovery of signals from a structured union of subspaces. IEEE Trans. Inf. Theory 55(11), 5302-5316 (2009)
19. Fu, Y, Li, H, Zhang, Q, Zou, J: Block-sparse recovery via redundant block OMP. Signal Process. 97, 162-171 (2014)
20. Afdideh, F, Phlypo, R, Jutten, C: Recovery guarantees for mixed norm l_{p1,p2} block sparse representations. In: 24th European Signal Processing Conference (EUSIPCO), pp. 378-382 (2016)
22. Karanam, S, Li, Y, Radke, RJ: Person re-identification with block sparse recovery. Image Vis. Comput. (2017). doi:10.1016/j.imavis.2016.11.015
23. Gao, Y, Peng, J, Yue, S: Stability and robustness of the l2/lq-minimization for block sparse recovery. Signal Process. 137, 287-297 (2017)
24. Stojnic, M, Parvaresh, F, Hassibi, B: On the reconstruction of block-sparse signals with an optimal number of measurements. IEEE Trans. Signal Process. 57(8), 3075-3085 (2009)
25. Baraniuk, R, Cevher, V, Duarte, M, Hegde, C: Model-based compressive sensing. IEEE Trans. Inf. Theory 56(4), 1982-2001 (2010)
26. Mo, Q, Li, S: New bounds on the restricted isometry constant δ2k. Appl. Comput. Harmon. Anal. 31, 460-468 (2011)
27. Lin, J, Li, S, Shen, Y: New bounds for restricted isometry constants with coherent tight frames. IEEE Trans. Signal Process. 61(3), 611-621 (2013)
28. Cohen, A, Dahmen, W, DeVore, R: Compressed sensing and best k-term approximation. J. Am. Math. Soc. 22(1), 211-231 (2009)
29. Pinkus, A: On L1-Approximation. Cambridge University Press, Cambridge (1989)
30. Foucart, S, Rauhut, H: A Mathematical Introduction to Compressive Sensing. Springer, New York (2013)
31. Gao, Y, Peng, J, Yue, S, Zhao, Y: On the null space property of lq-minimization for 0 < q ≤ 1 in compressed sensing.
