(1)

Politecnico di Torino

Porto Institutional Repository

[Proceeding] In-network reconstruction of jointly sparse signals with ADMM

Original Citation:

Matamoros, J.; Fosson, S.M.; Magli, E.; Antón-Haro, C. (2015). In-network reconstruction of jointly sparse signals with ADMM. In: European Conference on Networks and Communications (EuCNC 2015), Paris (France), June 29 - July 2, 2015.

Availability:

This version is available at http://porto.polito.it/2624989/ since December 2015.

Terms of use:

This article is made available under the terms and conditions applicable to the Open Access Policy (CC BY-NC-ND 3.0 IT), as described at http://porto.polito.it/terms_and_conditions.html

Porto, the institutional repository of the Politecnico di Torino, is provided by the University Library and the IT-Services. The aim is to enable open access to all the world. Please share with us how this access benefits you. Your story matters.

(2)

In-network reconstruction of jointly sparse signals

with ADMM

Javier Matamoros, Sophie M. Fosson, Enrico Magli and Carles Antón-Haro
Centre Tecnològic de Telecomunicacions de Catalunya (CTTC)

Politecnico di Torino

(3)

Motivation

- Reconstruction of jointly sparse signals with innovations
- Sensing applications where information is acquired by a geographically distributed set of nodes
- No need for a fusion center → in-network processing
- Focus on efficient ADMM techniques with low signalling

(4)

Outline

- Signal Model
- Centralized ADMM
- Distributed ADMM
- Distributed ADMM with 1 bit
- Numerical results

(5)

Signal Model

- Consider N sensor nodes
- Observed signal at the i-th sensor node: y_i = A_i x_i + η_i, i ∈ N
- x_i ∈ R^n: signal of interest
- A_i ∈ R^{M×n} with M ≪ n: measurement matrix
- η_i ∈ R^M: acquisition noise
- Assumption: JSM-1 model [Duarte06]: x_i = Ψ z_c + Ω_i z_i, i ∈ N
- z_c ∈ R^n: common signal with k_c non-zero components
- z_i ∈ R^n: innovation with k_i non-zero components

(A synthetic sketch of this signal model follows below.)
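The following is a small synthetic sketch, not taken from the paper, of how a JSM-1 ensemble and its compressed measurements could be generated; the sizes, sparsity levels, noise level, and the choice Ψ = Ω_i = I are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical sizes for illustration only (not the paper's settings):
# N nodes, signals of length n, M << n measurements per node.
rng = np.random.default_rng(0)
N, n, M = 6, 100, 30
k_c, k_i = 5, 3  # non-zeros in the common and innovation components

def sparse_vec(length, k):
    """Vector with k non-zero Gaussian entries at random positions."""
    v = np.zeros(length)
    v[rng.choice(length, size=k, replace=False)] = rng.standard_normal(k)
    return v

z_c = sparse_vec(n, k_c)                      # common component z_c (shared by all nodes)
Z = [sparse_vec(n, k_i) for _ in range(N)]    # innovations z_i
X = [z_c + z_i for z_i in Z]                  # JSM-1 signals x_i = z_c + z_i (Psi = Omega_i = I)
A = [rng.standard_normal((M, n)) / np.sqrt(M) for _ in range(N)]      # measurement matrices A_i
Y = [A[i] @ X[i] + 0.01 * rng.standard_normal(M) for i in range(N)]   # y_i = A_i x_i + eta_i
```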

(7)

Centralized ADMM

Optimization problem

- Lasso formulation:

$$\min_{\{x_i, z_i\},\, z_c}\ \frac{1}{2}\sum_{i=1}^{N}\Big(\|y_i - A_i x_i\|_2^2 + \tau_1\|z_i\|_1 + \tau_2\|z_c\|_1\Big)\quad \text{s.t.}\quad x_i = z_c + z_i,\ \ i = 1,\dots,N$$

- The τ_1‖z_i‖_1 term promotes sparsity in the innovation components
- The τ_2‖z_c‖_1 term promotes sparsity in the common component

(A generic-solver sketch of this problem is given below.)
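As a reference baseline, the constrained Lasso above can be handed to a generic convex solver. The sketch below uses CVXPY, assumes Ψ = Ω_i = I, and picks arbitrary values for τ_1 and τ_2; it is not the paper's Algorithm 1, only a way to check the centralized solution on small instances.

```python
import cvxpy as cp

def centralized_lasso(A, Y, tau1=0.1, tau2=0.1):
    """Solve min 0.5 * sum_i(||y_i - A_i x_i||_2^2 + tau1*||z_i||_1 + tau2*||z_c||_1)
    subject to x_i = z_c + z_i, with a generic convex solver."""
    N, n = len(A), A[0].shape[1]
    z_c = cp.Variable(n)
    Z = [cp.Variable(n) for _ in range(N)]
    X = [cp.Variable(n) for _ in range(N)]
    cost, constraints = 0, []
    for i in range(N):
        cost += 0.5 * (cp.sum_squares(Y[i] - A[i] @ X[i])
                       + tau1 * cp.norm1(Z[i]) + tau2 * cp.norm1(z_c))
        constraints.append(X[i] == z_c + Z[i])
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return z_c.value, [z.value for z in Z], [x.value for x in X]
```

With the synthetic data from the earlier sketch, `centralized_lasso(A, Y)` returns estimates of the common component, the innovations, and the per-node signals.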

(8)

Centralized ADMM

Review

- Augmented cost function:

$$\min_{\{x_i, z_i\},\, z_c}\ \frac{1}{2}\sum_{i=1}^{N}\Big(\|y_i - A_i x_i\|_2^2 + \tau_1\|z_i\|_1 + \tau_2\|z_c\|_1 + \frac{\rho}{2}\|x_i - z_i - z_c\|_2^2\Big)\quad \text{s.t.}\quad x_i = z_i + z_c,\ \ i = 1,\dots,N$$

- Augmented Lagrangian:

$$\mathcal{L} := \frac{1}{2}\sum_{i=1}^{N}\|y_i - A_i x_i\|_2^2 + \sum_{i=1}^{N}\tau_1\|z_i\|_1 + N\tau_2\|z_c\|_1 + \sum_{i=1}^{N}\frac{\rho}{2}\|x_i - z_i - z_c\|_2^2 + \sum_{i=1}^{N}\lambda_i^T(x_i - z_i - z_c)$$

(11)

Centralized ADMM

Algorithm

- ADMM iterates:

$$x_i(t+1) = \big(\rho I + A_i^T A_i\big)^{-1}\big(A_i^T y_i + \rho(z_i(t) + z_c(t)) - \lambda_i(t)\big)$$
$$\big(z_c(t+1), \{z_i(t+1)\}\big) = \arg\min_{z_c,\{z_i\}} \mathcal{L}(t+1)$$
$$\lambda_i(t+1) = \lambda_i(t) + \rho\big(x_i(t+1) - z_i(t+1) - z_c(t+1)\big)$$

- The joint (z_c, {z_i}) minimization is difficult to compute (Algorithm 1 in the paper)
- Centralized solution!

(A partial sketch of these iterates follows below.)
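Below is a partial, hedged sketch of these centralized iterates: the per-node x-step and the dual step are closed-form and easy to write down, while the joint (z_c, {z_i}) minimization, the step flagged above as difficult (Algorithm 1 in the paper), is deliberately left out. Function names such as `x_step` are hypothetical.

```python
import numpy as np

def x_step(A_i, y_i, z_i, z_c, lam_i, rho):
    """x_i(t+1) = (rho*I + A_i^T A_i)^{-1} (A_i^T y_i + rho*(z_i(t) + z_c(t)) - lam_i(t))."""
    n = A_i.shape[1]
    return np.linalg.solve(rho * np.eye(n) + A_i.T @ A_i,
                           A_i.T @ y_i + rho * (z_i + z_c) - lam_i)

def lambda_step(lam_i, x_i, z_i, z_c, rho):
    """lam_i(t+1) = lam_i(t) + rho * (x_i(t+1) - z_i(t+1) - z_c(t+1))."""
    return lam_i + rho * (x_i - z_i - z_c)

# The middle step, (z_c(t+1), {z_i(t+1)}) = argmin over (z_c, {z_i}) of L(t+1),
# couples all nodes and is what makes this scheme centralized; it corresponds to
# Algorithm 1 in the paper and is not reproduced here.
```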

(12)

Distributed ADMM

Distributed scenario

[Figure: network of 7 sensor nodes with local measurements y_1, ..., y_7; edges indicate neighboring nodes]

- Nodes only communicate with their neighbors (no fusion center)

(13)

Distributed ADMM

Distributed formulation

- New formulation:

$$\min_{\{x_i, z_i, \zeta_i, c_i\}}\ \sum_{i=1}^{N}\Big(\frac{1}{2}\|y_i - A_i x_i\|_2^2 + \tau_1\|z_i\|_1 + \tau_2\|\zeta_i\|_1\Big)$$
$$\text{s.t.}\quad x_i = z_i + \zeta_i,\ \ i \in \mathcal{N}; \qquad \zeta_i = c_j,\ \ j \in \mathcal{N}_i \cup \{i\}$$

(14)

Distributed ADMM

Distributed formulation

- Augmented cost function:

$$\min_{\{x_i, z_i, \zeta_i, c_i\}}\ \sum_{i=1}^{N}\Big(\frac{1}{2}\|y_i - A_i x_i\|_2^2 + \tau_1\|z_i\|_1 + \tau_2\|\zeta_i\|_1 + \frac{\rho}{2}\|x_i - z_i - \zeta_i\|_2^2 + \frac{\theta}{2}\sum_{j \in \mathcal{N}_i}\|\zeta_i - c_j\|_2^2\Big)$$
$$\text{s.t.}\quad x_i = z_i + \zeta_i,\ \ i \in \mathcal{N}; \qquad \zeta_i = c_j,\ \ j \in \mathcal{N}_i \cup \{i\}$$

- Difficult to solve by means of classical ADMM (i.e., with 2 primal iteration blocks)
- Instead, we propose to minimize the primal variables in a sequential fashion

(16)

Distributed ADMM

Algorithm

- Primal iterates:

$$x_i(t+1) = \big(\rho I + A_i^T A_i\big)^{-1}\big(A_i^T y_i + \rho(z_i(t) + \zeta_i(t)) - \lambda_i(t)\big)$$
$$z_i(t+1) = \mathcal{S}_{\tau_1/\rho}\Big(x_i(t+1) - \zeta_i(t) + \frac{\lambda_i(t)}{\rho}\Big)$$
$$\zeta_i(t+1) = \mathcal{S}_{\frac{\tau_2}{\rho + \theta|\mathcal{N}_i|}}\left(\frac{1}{\rho + \theta|\mathcal{N}_i|}\Big[\rho\big(x_i(t+1) - z_i(t+1)\big) + \theta\sum_{j \in \mathcal{N}_i}\Big(c_j(t) - \frac{\mu_{i,j}(t)}{\theta}\Big) + \lambda_i(t)\Big]\right)$$
$$c_i(t+1) = \frac{1}{|\mathcal{N}_i|}\sum_{j:\, i \in \mathcal{N}_j}\Big(\zeta_j(t+1) + \frac{\mu_{j,i}(t)}{\theta}\Big)$$

- Dual iterates:

$$\lambda_i(t+1) = \lambda_i(t) + \rho\big(x_i(t+1) - z_i(t+1) - \zeta_i(t+1)\big)$$
$$\mu_{i,j}(t+1) = \mu_{i,j}(t) + \theta\big(\zeta_i(t+1) - c_j(t+1)\big), \quad j \in \mathcal{N}_i$$

where S_τ(·) denotes the element-wise soft-thresholding operator.
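The sketch below turns the iterates above into per-node Python functions. It is an illustration under assumptions: the soft-thresholding operator is implemented element-wise, the neighbour quantities c_j(t), μ_{i,j}(t), ζ_j(t+1), μ_{j,i}(t) are assumed to have already been received over the network, and all function and variable names are hypothetical.

```python
import numpy as np

def soft(v, thr):
    """Element-wise soft-thresholding S_thr(v)."""
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def node_primal_step(A_i, y_i, z_i, zeta_i, lam_i, c_nb, mu_nb, rho, theta, tau1, tau2):
    """Sequential primal updates at node i.
    c_nb[j] = c_j(t) and mu_nb[j] = mu_{i,j}(t) for each neighbour j in N_i."""
    n, deg = A_i.shape[1], len(c_nb)
    x_new = np.linalg.solve(rho * np.eye(n) + A_i.T @ A_i,
                            A_i.T @ y_i + rho * (z_i + zeta_i) - lam_i)
    z_new = soft(x_new - zeta_i + lam_i / rho, tau1 / rho)
    zeta_new = soft((rho * (x_new - z_new)
                     + theta * sum(c_nb[j] - mu_nb[j] / theta for j in c_nb)
                     + lam_i) / (rho + theta * deg),
                    tau2 / (rho + theta * deg))
    return x_new, z_new, zeta_new

def node_c_step(zeta_in, mu_in, theta):
    """c_i(t+1): average of zeta_j(t+1) + mu_{j,i}(t)/theta over nodes j with i in N_j."""
    return sum(zeta_in[j] + mu_in[j] / theta for j in zeta_in) / len(zeta_in)

def node_dual_step(lam_i, mu_i, x_i, z_i, zeta_i, c_nb, rho, theta):
    """Dual ascent on x_i = z_i + zeta_i (lambda_i) and zeta_i = c_j (mu_{i,j})."""
    lam_new = lam_i + rho * (x_i - z_i - zeta_i)
    mu_new = {j: mu_i[j] + theta * (zeta_i - c_nb[j]) for j in mu_i}
    return lam_new, mu_new
```

In a full iteration each node would call `node_primal_step`, broadcast ζ_i(t+1), call `node_c_step`, broadcast c_i(t+1), and finally call `node_dual_step`, matching the message-exchange steps on the next slide.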

(17)

Distributed ADMM (cont'd)

Algorithm

[Figure: six-node network illustrating one round of message exchange]

- Compute {x_i(t+1), z_i(t+1), ζ_i(t+1)} at each node
- Broadcast {ζ_i(t+1)} to your neighbors
- Compute {c_i(t+1)} at each node
- Broadcast {c_i(t+1)} to your neighbors
- Compute {λ_i(t+1), μ_{j,i}(t+1)} at each node

(22)

Distributed ADMM with 1 bit

- DADMM requires the exchange of analog values
- We propose to modify the updates of {ζ_i(t+1)} and {c_i(t+1)} as follows:

$$\zeta_i(t+1) = \zeta_i(t) - \epsilon\,\mathrm{sign}\big(g^t_{\zeta_i}\big)$$
$$c_i(t+1) = c_i(t) - \epsilon\,\mathrm{sign}\big(g^t_{c_i}\big)$$

- g^t_{ζ_i} and g^t_{c_i} stand for the subgradients with respect to ζ_i and c_i at time t

(A minimal sketch of this update follows below.)
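A minimal sketch of the sign-based replacement for the two updates above; the step size ε, the way the subgradients g^t_{ζ_i} and g^t_{c_i} are obtained, and all names are illustrative assumptions. The point, as stated in the slides, is that analog values no longer need to be exchanged, since only signs enter the update.

```python
import numpy as np

def one_bit_step(zeta_i, c_i, g_zeta, g_c, eps=0.01):
    """1-bit updates: zeta_i(t+1) = zeta_i(t) - eps*sign(g_zeta^t),
    c_i(t+1) = c_i(t) - eps*sign(g_c^t)."""
    return zeta_i - eps * np.sign(g_zeta), c_i - eps * np.sign(g_c)
```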

(26)

Numerical results

[Figure: MSE versus iteration number for the centralized ADMM, DADMM, and DADMM-1bit with ε = 0.01, 0.05, 0.1]

(31)

Numerical results (cont'd)

[Figure: MSE versus iteration number for the common and individual signal components, for the centralized ADMM, DADMM, and DADMM-1bit]

(34)

Conclusions

- We have addressed the problem of reconstructing jointly sparse signals with innovations
- A centralized ADMM solution and two distributed ADMM solutions (DADMM and DADMM-1bit) for in-network reconstruction have been proposed
- The distributed versions are shown to converge to the centralized ADMM solution
- The 1-bit version is shown to significantly reduce the number of transmitted bits

(35)

Thank you!

Questions?

References
