
Continuous Iteratively Reweighted Least Squares Algorithm for Solving Linear Models by Convex Relaxation

Xian Luo, Wanzhou Ye

Department of Mathematics, College of Science, Shanghai University, Shanghai, China

Abstract

In this paper, we present a continuous iteratively reweighted least squares (CIRLS) algorithm for solving the linear models problem by convex relaxation, and we prove the convergence of this algorithm. Under some conditions, we give an error bound for the algorithm. In addition, a numerical experiment shows the efficiency of the algorithm.

Keywords

Linear Models, Continuous Iteratively Reweighted Least Squares, Convex Relaxation, Principal Component Analysis

How to cite this paper: Luo, X. and Ye, W.Z. (2019) Continuous Iteratively Reweighted Least Squares Algorithm for Solving Linear Models by Convex Relaxation. Advances in Pure Mathematics, 9, 523-533. https://doi.org/10.4236/apm.2019.96024

Received: May 13, 2019; Accepted: June 24, 2019; Published: June 27, 2019

Copyright © 2019 by author(s) and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0). http://creativecommons.org/licenses/by/4.0/

1. Introduction

Low-dimensional linear models have broad applications in data analysis problems such as computer vision, pattern recognition, machine learning, and so on [1] [2] [3]. In most of these applications, the data set is noisy and contains a number of outliers, so the points are spread across a higher-dimensional ambient space. Meanwhile, principal component analysis is the standard method [4] for finding a low-dimensional linear model. Mathematically, the problem can be presented as

$$\min_{\Pi}\ \sum_{x\in\chi}\|x-\Pi x\|_2^2 \quad \text{subject to}\ \Pi\ \text{is an orthoprojector and}\ \operatorname{tr}\Pi=d, \tag{1}$$

where $\chi$ is the data set consisting of $N$ points in $\mathbb{R}^D$, $x\in\mathbb{R}^D$, the target dimension $d\in\{1,2,\ldots,D-1\}$, $\|\cdot\|_2$ denotes the $l_2$ norm on vectors, and $\operatorname{tr}$ refers to the trace.
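As a point of reference for what follows, problem (1) is solved exactly by classical PCA: writing the objective as $\sum_{x}\|x\|_2^2-\operatorname{tr}\left(\Pi\sum_{x}xx^{\mathrm T}\right)$, the optimal $\Pi$ projects onto the span of the top $d$ eigenvectors of $\sum_{x\in\chi}xx^{\mathrm T}$. A minimal numpy sketch (our illustration; the function name is ours):

```python
import numpy as np

def pca_orthoprojector(X, d):
    """Solve (1): project onto the span of the top-d eigenvectors of
    sum_x x x^T, where X stores one observation per row."""
    C = X.T @ X                          # sum of x x^T over the data set
    _, eigvecs = np.linalg.eigh(C)       # eigenvalues in ascending order
    U = eigvecs[:, -d:]                  # eigenvectors of the d largest eigenvalues
    return U @ U.T                       # orthoprojector Pi with tr(Pi) = d

# Usage: Pi = pca_orthoprojector(X, d); then Pi @ Pi is (numerically) Pi
# and np.trace(Pi) equals d.
```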

Unfortunately, if the data set contains a large amount of noise in the inliers and a substantial number of outliers, these nonideal points can interfere with the linear model. To guard the subspace estimation procedure against outliers, statisticians have proposed to replace the $l_2$ norm [5] [6] [7] with the $l_1$ norm, which is less sensitive to outliers. This idea leads to the following optimization problem:

$$\min_{\Pi}\ \sum_{x\in\chi}\|x-\Pi x\|_2 \quad \text{subject to}\ \Pi\ \text{is an orthoprojector and}\ \operatorname{tr}\Pi=d. \tag{2}$$

The optimization problem (2) is not convex, and we have no right to expect that it is tractable [8] [9]. Wright [10] proved that most matrices can be efficiently and exactly recovered from most error sign-and-support patterns by solving a simple convex program, for which they gave a fast and provably convergent algorithm. Later, Candes [11] showed that under some suitable assumptions it is possible to recover the subspace by solving a very convenient convex program called Principal Component Pursuit. Based on convex optimization, Lerman [12] proposed to use a relaxation of the set of orthoprojectors to reach a convex formulation, which gives the linear model as follows:

$$\min_{P}\ \sum_{x\in\chi}\|x-Px\|_2 \quad \text{subject to}\ 0\preceq P\preceq I\ \text{and}\ \operatorname{tr}P=d, \tag{3}$$

where the matrix $P$ is the relaxation of the orthoprojector $\Pi$: its eigenvalues lie in the interval $[0,1]$, so the feasible matrices form a convex set. The curly inequality $\preceq$ denotes the semidefinite order: for symmetric matrices $A$ and $B$, we write $A\preceq B$ if and only if $B-A$ is positive semidefinite. The problem (3) is called REAPER.
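To make the relaxed constraint set concrete, the following small numpy check (our illustration; the helper name is ours) tests whether a symmetric matrix satisfies $0\preceq P\preceq I$ and $\operatorname{tr}P=d$, i.e. whether its eigenvalues lie in $[0,1]$ and sum to $d$:

```python
import numpy as np

def is_reaper_feasible(P, d, tol=1e-8):
    """Check the REAPER constraints: 0 <= P <= I and tr P = d."""
    eigvals = np.linalg.eigvalsh((P + P.T) / 2)   # symmetrize for numerical safety
    return (eigvals.min() >= -tol
            and eigvals.max() <= 1.0 + tol
            and abs(np.trace(P) - d) <= tol)
```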

To obtain a d-dimensional linear model from a minimizer of REAPER, we need to consider the auxiliary problem

$$\min_{\Pi}\ \|P-\Pi\|_{S_1} \quad \text{subject to}\ \Pi\ \text{is an orthoprojector and}\ \operatorname{tr}\Pi=d, \tag{4}$$

where $P$ is an optimal point of REAPER and $\|\cdot\|_{S_1}$ denotes the Schatten 1-norm; in other words, we seek the orthoprojector $\Pi$ closest to $P$ in the Schatten 1-norm, and the range of $\Pi$ is the linear model we want. Fortunately, Lerman [12] has given an error bound between the $d$-dimensional subspace and the $d$-dimensional orthoprojector $\Pi$ in Theorem 2.1 of [12].

In this paper, we propose an improved algorithm, called the continuous iteratively reweighted least squares (CIRLS) algorithm, for solving REAPER (3), and under a weaker assumption on the data set we prove that the algorithm is convergent. In the experimental part, we compare the algorithm with the IRLS algorithm of [12].

The rest of the paper is organized as follows. In Section 2, we develop the CIRLS algorithm for solving problem (3). We present a detailed convergence analysis for the CIRLS algorithm in Section 3. A numerical experiment is reported in Section 4. Finally, we conclude this paper in Section 5.

2. The CIRLS Algorithm

In the next section, we prove that the sequence $\{P^{(k)}\}_{k\ge 1}$ generated by the algorithm is convergent to $P_\delta$. Furthermore, we also show that $P_\delta$ satisfies a bound with respect to the optimal point $P^{\star}$ of REAPER.

3. Convergence of CIRLS Algorithm

In this section, we prove that the sequence $\{P^{(k)}\}_{k\ge 1}$ generated by Algorithm 1 is convergent to $P_\delta$, and we provide the bound that $P_\delta$ satisfies with respect to the optimal point $P^{\star}$ of REAPER. Firstly, we start with a lemma that prepares for the proof of the following theorem.

Lemma 1 ([12], Theorem 4.1) Assume that the set $\chi$ of observations does not lie in the union of two strict subspaces of $\mathbb{R}^D$. Then the iterates of the IRLS algorithm with $\varepsilon=0$ converge to a point $P_\delta$ that satisfies the constraints of the REAPER problem. Moreover, the objective value at $P_\delta$ satisfies the bound

$$\sum_{x\in\chi}\|x-P_\delta x\|-\sum_{x\in\chi}\|x-P^{\star}x\|\le\frac{1}{2}\delta|\chi|, \tag{5}$$

where $P^{\star}$ is an optimal point of REAPER.

In Lemma 1, the convergence of the IRLS algorithm is considered under the assumption that the set $\chi$ of observations does not lie in the union of two strict subspaces of $\mathbb{R}^D$. However, verifying whether a data set satisfies this assumption requires a large amount of computation in theory, so in the following theorem we give an assumption that is easier to verify.

Theorem 1 Assume that the set $\chi$ of observations satisfies $\operatorname{span}\{\chi\}=\mathbb{R}^D$. If $P_\delta$ is the limit point of the sequence $\{P^{(k)}\}_{k\ge 1}$ generated by the CIRLS$_\delta$ algorithm, then $P_\delta$ is an optimal point of the following optimization problem:

$$\min_{P}\ \sum_{x:\|x-Px\|\ge\delta}\|x-Px\|+\sum_{x:\|x-Px\|<\delta}\left(\frac{\|x-Px\|^2}{2\delta}+\frac{\delta}{2}\right) \quad \text{subject to}\ 0\preceq P\preceq I\ \text{and}\ \operatorname{tr}P=d, \tag{6}$$

and satisfies

$$\sum_{x\in\chi}\|x-P_\delta x\|-\sum_{x\in\chi}\|x-P^{\star}x\|\le\frac{1}{2}\delta|\chi|, \tag{7}$$

where $P^{\star}$ is an optimal point of REAPER (3).

Proof. Consider again the optimization problem (6), restated as

$$\min_{P}\ \sum_{x:\|x-Px\|\ge\delta}\|x-Px\|+\sum_{x:\|x-Px\|<\delta}\left(\frac{\|x-Px\|^2}{2\delta}+\frac{\delta}{2}\right) \quad \text{subject to}\ 0\preceq P\preceq I\ \text{and}\ \operatorname{tr}P=d; \tag{8}$$

then, we define the iterative point $P^{(k+1)}$ by the optimization problem

$$P^{(k+1)}:=\arg\min_{P}\ \sum_{x\in\chi}\omega_x^{\delta_k}\|x-Px\|^2 \quad \text{subject to}\ 0\preceq P\preceq I\ \text{and}\ \operatorname{tr}P=d, \tag{9}$$

where $\omega_x^{\delta_k}:=1/\max\{\delta_k,\|x-P^{(k)}x\|\}$.

For convenience, let $Q=I-P$; then the optimization model (9) converts into

$$Q^{(k+1)}:=\arg\min_{Q}\ \sum_{x\in\chi}\omega_x^{\delta_k}\|Qx\|^2 \quad \text{subject to}\ 0\preceq Q\preceq I\ \text{and}\ \operatorname{tr}Q=D-d, \tag{10}$$

where $\omega_x^{\delta_k}:=1/\max\{\delta_k,\|Q^{(k)}x\|\}$.
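Although the surviving discussion around (10) is terse, it is worth noting (our addition, a standard reduction rather than a statement from the paper) that the subproblem admits an essentially closed-form solution:

```latex
% Reduction of subproblem (10) to a vector program (our worked note).
\sum_{x\in\chi}\omega_x^{\delta_k}\|Qx\|^2
  = \operatorname{tr}\left(Q\,C_k\,Q\right)
  = \operatorname{tr}\left(C_k\,Q^2\right),
\qquad
C_k=\sum_{x\in\chi}\omega_x^{\delta_k}\,xx^{\mathrm T}.
```

Diagonalizing $C_k=U\Lambda U^{\mathrm T}$ and restricting to $Q=U\operatorname{diag}(q)U^{\mathrm T}$ (which can be shown to lose no optimality), (10) becomes $\min_{q\in[0,1]^D}\sum_i\lambda_i q_i^2$ subject to $\sum_i q_i=D-d$, a water-filling problem whose solution is $q_i=\min\{1,\nu/(2\lambda_i)\}$ with the multiplier $\nu$ chosen to satisfy the trace constraint. The sketch in Section 2 implements exactly this step.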

Similarly, we convert the optimization model (8) into

$$\min_{Q}\ F_\delta(Q)=\sum_{x:\|Qx\|\ge\delta}\|Qx\|+\sum_{x:\|Qx\|<\delta}\left(\frac{\|Qx\|^2}{2\delta}+\frac{\delta}{2}\right) \quad \text{subject to}\ 0\preceq Q\preceq I\ \text{and}\ \operatorname{tr}Q=D-d. \tag{11}$$

Next we prove the convergence of $P^{(k)}$. Consider the Huber-like function

$$H_{\delta_k}(x,y)=\begin{cases}\dfrac{1}{2}\left(\dfrac{x^2}{\delta_k}+\delta_k\right), & 0\le y<\delta_k,\\[2mm] \dfrac{1}{2}\left(\dfrac{x^2}{y}+y\right), & y\ge\delta_k.\end{cases} \tag{12}$$

Since $\frac{1}{2}\left(\frac{x^2}{y}+y\right)\ge x$ for any $y>0$, the inequality

$$H_{\delta_k}(x,y)\ge H_{\delta_k}(x,x) \tag{13}$$

holds for any $y>0$ and $x\in\mathbb{R}$. We introduce the convex function

$$F_{\delta_k}(Q):=\sum_{x\in\chi}H_{\delta_k}(\|Qx\|,\|Qx\|)=\sum_{x:\|Qx\|\ge\delta_k}\|Qx\|+\sum_{x:\|Qx\|<\delta_k}\left(\frac{\|Qx\|^2}{2\delta_k}+\frac{\delta_k}{2}\right). \tag{14}$$

Note that $F_{\delta_k}$ is continuously differentiable at each matrix $Q$, and the gradient is

$$\nabla F_{\delta_k}(Q)=\sum_{x:\|Qx\|\ge\delta_k}\frac{Qxx^{\mathrm T}}{\|Qx\|}+\sum_{x:\|Qx\|<\delta_k}\frac{Qxx^{\mathrm T}}{\delta_k}=\sum_{x\in\chi}\frac{Qxx^{\mathrm T}}{\max\{\delta_k,\|Qx\|\}}. \tag{15}$$
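For concreteness, a direct numpy transcription of (14) and (15) (our illustration; $X$ stores one observation per row and $Q$ is symmetric):

```python
import numpy as np

def F_delta_k(Q, X, delta_k):
    """Huber-like objective (14): sum_x H_{delta_k}(||Qx||, ||Qx||)."""
    r = np.linalg.norm(X @ Q, axis=1)        # ||Q x|| per row (Q symmetric)
    return float(np.where(r >= delta_k, r, r**2 / (2*delta_k) + delta_k/2).sum())

def grad_F_delta_k(Q, X, delta_k):
    """Gradient (15): sum_x Q x x^T / max(delta_k, ||Qx||)."""
    r = np.linalg.norm(X @ Q, axis=1)
    w = 1.0 / np.maximum(delta_k, r)         # the IRLS weights omega_x^{delta_k}
    return Q @ ((X * w[:, None]).T @ X)      # equals sum_x w_x * Q x x^T
```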

For a fixed $Q^{(k)}$, define

$$G_{\delta_k}(Q,Q^{(k)}):=\sum_{x\in\chi}H_{\delta_k}(\|Qx\|,\|Q^{(k)}x\|)=\sum_{x:\|Q^{(k)}x\|\ge\delta_k}\frac{1}{2}\left(\frac{\|Qx\|^2}{\|Q^{(k)}x\|}+\|Q^{(k)}x\|\right)+\sum_{x:\|Q^{(k)}x\|<\delta_k}\frac{1}{2}\left(\frac{\|Qx\|^2}{\delta_k}+\delta_k\right); \tag{16}$$

then

$$\nabla_Q G_{\delta_k}(Q,Q^{(k)})=\sum_{x\in\chi}\frac{Qxx^{\mathrm T}}{\max\{\delta_k,\|Q^{(k)}x\|\}}. \tag{17}$$

By the definition of $G_{\delta_k}(Q,Q^{(k)})$ we know that

$$G_{\delta_k}(Q^{(k)},Q^{(k)})=\sum_{x\in\chi}H_{\delta_k}(\|Q^{(k)}x\|,\|Q^{(k)}x\|)=F_{\delta_k}(Q^{(k)}), \tag{18}$$

and

$$\nabla_Q G_{\delta_k}(Q^{(k)},Q^{(k)})=\sum_{x\in\chi}\frac{Q^{(k)}xx^{\mathrm T}}{\max\{\delta_k,\|Q^{(k)}x\|\}}=\nabla F_{\delta_k}(Q^{(k)}). \tag{19}$$

It is obvious that $G_{\delta_k}(\cdot,Q^{(k)})$ is a smooth quadratic function, so we may expand $G_{\delta_k}(\cdot,Q^{(k)})$ around $Q^{(k)}$ as follows:

$$G_{\delta_k}(Q,Q^{(k)})=F_{\delta_k}(Q^{(k)})+\left\langle Q-Q^{(k)},\nabla F_{\delta_k}(Q^{(k)})\right\rangle+\frac{1}{2}\left\langle Q-Q^{(k)},\left(Q-Q^{(k)}\right)C_k(Q^{(k)})\right\rangle, \tag{20}$$

where

$$C_k(Q^{(k)})=\sum_{x\in\chi}\frac{xx^{\mathrm T}}{\max\{\delta_k,\|Q^{(k)}x\|\}}$$

is the Hessian matrix of $G_{\delta_k}(\cdot,Q^{(k)})$.

By the definition of $G_{\delta_k}$ we know that $F_{\delta_k}(Q)=G_{\delta_k}(Q,Q)$; combining this with (13), we have

$$F_{\delta_k}(Q)=\sum_{x\in\chi}H_{\delta_k}(\|Qx\|,\|Qx\|)\le\sum_{x\in\chi}H_{\delta_k}(\|Qx\|,\|Q^{(k)}x\|)=G_{\delta_k}(Q,Q^{(k)}), \tag{21}$$

and since the optimization model (10) is equivalent to the model

$$Q^{(k+1)}:=\arg\min_{Q}\ G_{\delta_k}(Q,Q^{(k)}) \quad \text{subject to}\ 0\preceq Q\preceq I\ \text{and}\ \operatorname{tr}Q=D-d, \tag{22}$$

we have the monotonicity property

$$F_{\delta_k}(Q^{(k+1)})\le G_{\delta_k}(Q^{(k+1)},Q^{(k)})\le G_{\delta_k}(Q^{(k)},Q^{(k)})=F_{\delta_k}(Q^{(k)}). \tag{23}$$

We note that when $x\ge 0$ we have

$$H_{\delta_k}(x,x)=\begin{cases}x, & x\ge\delta_k,\\[1mm] \dfrac{1}{2}\left(\dfrac{x^2}{\delta_k}+\delta_k\right), & 0\le x<\delta_k,\end{cases} \tag{24}$$

$$H_{\delta_{k+1}}(x,x)=\begin{cases}x, & x\ge\delta_{k+1},\\[1mm] \dfrac{1}{2}\left(\dfrac{x^2}{\delta_{k+1}}+\delta_{k+1}\right), & 0\le x<\delta_{k+1}.\end{cases} \tag{25}$$

Since $\delta_{k+1}<\delta_k$: on the one hand, when $x\ge\delta_{k+1}$, $H_{\delta_{k+1}}(x,x)=x$ holds, while $H_{\delta_k}(x,x)\ge x$ holds for any $x\in\mathbb{R}$; therefore the inequality $H_{\delta_{k+1}}(x,x)\le H_{\delta_k}(x,x)$ holds for $x\ge\delta_{k+1}$. On the other hand, when $x<\delta_{k+1}$, $H_{\delta_{k+1}}(x,x)=\frac{1}{2}\left(\frac{x^2}{\delta_{k+1}}+\delta_{k+1}\right)$ and $H_{\delta_k}(x,x)=\frac{1}{2}\left(\frac{x^2}{\delta_k}+\delta_k\right)$, so we have

$$H_{\delta_k}(x,x)-H_{\delta_{k+1}}(x,x)=\frac{1}{2}\left(\frac{x^2}{\delta_k}+\delta_k\right)-\frac{1}{2}\left(\frac{x^2}{\delta_{k+1}}+\delta_{k+1}\right)=\frac{1}{2}\left(\delta_k-\delta_{k+1}\right)\left(1-\frac{x^2}{\delta_k\delta_{k+1}}\right)>0, \tag{26}$$

since $\delta_k>\delta_{k+1}$ and $x<\delta_{k+1}$ imply $x^2<\delta_k\delta_{k+1}$. Combining the two cases, $H_{\delta_{k+1}}(x,x)\le H_{\delta_k}(x,x)$ holds for all $x\ge 0$; therefore

$$F_{\delta_{k+1}}(Q^{(k+1)})=\sum_{x\in\chi}H_{\delta_{k+1}}(\|Q^{(k+1)}x\|,\|Q^{(k+1)}x\|)\le\sum_{x\in\chi}H_{\delta_k}(\|Q^{(k+1)}x\|,\|Q^{(k+1)}x\|)=F_{\delta_k}(Q^{(k+1)}), \tag{27}$$

and according to (23), we have the following result:

$$F_{\delta_{k+1}}(Q^{(k+1)})\le F_{\delta_k}(Q^{(k)}). \tag{28}$$

Since

$$Q^{(k+1)}=\arg\min_{Q}\ G_{\delta_k}(Q,Q^{(k)}) \quad \text{subject to}\ 0\preceq Q\preceq I\ \text{and}\ \operatorname{tr}Q=D-d, \tag{29}$$

combined with the variational inequality for constrained convex optimization, we have

$$0\le\left\langle Q-Q^{(k+1)},\nabla_Q G_{\delta_k}(Q^{(k+1)},Q^{(k)})\right\rangle \quad \forall\,Q:\ 0\preceq Q\preceq I,\ \operatorname{tr}Q=D-d. \tag{30}$$

Based on (20), we have the equation

$$\nabla_Q G_{\delta_k}(Q,Q^{(k)})=\nabla F_{\delta_k}(Q^{(k)})+\left(Q-Q^{(k)}\right)C_k(Q^{(k)}); \tag{31}$$

then, taking $Q=Q^{(k)}$ in (30), we have

$$0\le\left\langle Q^{(k)}-Q^{(k+1)},\nabla F_{\delta_k}(Q^{(k)})+\left(Q^{(k+1)}-Q^{(k)}\right)C_k(Q^{(k)})\right\rangle; \tag{32}$$

therefore

$$\begin{aligned}G_{\delta_k}(Q^{(k+1)},Q^{(k)})&=F_{\delta_k}(Q^{(k)})+\left\langle Q^{(k+1)}-Q^{(k)},\nabla F_{\delta_k}(Q^{(k)})\right\rangle+\frac{1}{2}\left\langle Q^{(k+1)}-Q^{(k)},\left(Q^{(k+1)}-Q^{(k)}\right)C_k(Q^{(k)})\right\rangle\\&\le F_{\delta_k}(Q^{(k)})+\left\langle Q^{(k+1)}-Q^{(k)},\left(Q^{(k)}-Q^{(k+1)}\right)C_k(Q^{(k)})\right\rangle+\frac{1}{2}\left\langle Q^{(k+1)}-Q^{(k)},\left(Q^{(k+1)}-Q^{(k)}\right)C_k(Q^{(k)})\right\rangle\\&=F_{\delta_k}(Q^{(k)})-\frac{1}{2}\left\langle Q^{(k+1)}-Q^{(k)},\left(Q^{(k+1)}-Q^{(k)}\right)C_k(Q^{(k)})\right\rangle,\end{aligned} \tag{33}$$

where the inequality uses (32),

and according to (23), we get the inequality

$$F_{\delta_k}(Q^{(k+1)})\le F_{\delta_k}(Q^{(k)})-\frac{1}{2}\left\langle Q^{(k+1)}-Q^{(k)},\left(Q^{(k+1)}-Q^{(k)}\right)C_k(Q^{(k)})\right\rangle. \tag{34}$$

Adding the inequality $F_{\delta_{k+1}}(Q^{(k+1)})\le F_{\delta_k}(Q^{(k+1)})$ from (27), we then have

$$\frac{1}{2}\left\langle Q^{(k+1)}-Q^{(k)},\left(Q^{(k+1)}-Q^{(k)}\right)C_k(Q^{(k)})\right\rangle\le F_{\delta_k}(Q^{(k)})-F_{\delta_{k+1}}(Q^{(k+1)}). \tag{35}$$

Let $m:=\max_{k\ge 1}\max\left\{\delta_k,\max_{x\in\chi}\|x\|\right\}$. Since $Q^{(k+1)}$, $Q^{(k)}$ and $C_k(Q^{(k)})$ are all symmetric matrices, we have

$$\left\langle Q^{(k+1)}-Q^{(k)},\left(Q^{(k+1)}-Q^{(k)}\right)C_k(Q^{(k)})\right\rangle=\operatorname{tr}\left[\left(Q^{(k+1)}-Q^{(k)}\right)^2 C_k(Q^{(k)})\right]. \tag{36}$$

In addition, $0\preceq Q^{(k+1)}\preceq I$ and $0\preceq Q^{(k)}\preceq I$, thus $\left(Q^{(k+1)}-Q^{(k)}\right)^2$ is a positive semidefinite matrix, and $\|Q^{(k)}x\|\le\|Q^{(k)}\|\,\|x\|\le\|x\|$, so that $\max\{\delta_k,\|Q^{(k)}x\|\}\le\max\{\delta_k,\|x\|\}\le m$; therefore

$$C_k(Q^{(k)})=\sum_{x\in\chi}\frac{xx^{\mathrm T}}{\max\{\delta_k,\|Q^{(k)}x\|\}}\succeq\frac{1}{m}\sum_{x\in\chi}xx^{\mathrm T}\succeq\frac{1}{m}\lambda_{\min}\left(\sum_{x\in\chi}xx^{\mathrm T}\right)I, \tag{37}$$

where $\lambda_{\min}\left(\sum_{x\in\chi}xx^{\mathrm T}\right)>0$ because $\operatorname{span}\{\chi\}=\mathbb{R}^D$,

so we have

$$\operatorname{tr}\left[\left(Q^{(k+1)}-Q^{(k)}\right)^2 C_k(Q^{(k)})\right]\ge\frac{1}{m}\lambda_{\min}\left(\sum_{x\in\chi}xx^{\mathrm T}\right)\operatorname{tr}\left[\left(Q^{(k+1)}-Q^{(k)}\right)^2\right], \tag{38}$$

and thus we get the inequality

$$\left\langle Q^{(k+1)}-Q^{(k)},\left(Q^{(k+1)}-Q^{(k)}\right)C_k(Q^{(k)})\right\rangle\ge\frac{1}{m}\lambda_{\min}\left(\sum_{x\in\chi}xx^{\mathrm T}\right)\left\|Q^{(k+1)}-Q^{(k)}\right\|_F^2. \tag{39}$$

Let $\mu:=\frac{1}{2m}\lambda_{\min}\left(\sum_{x\in\chi}xx^{\mathrm T}\right)$; then $\mu>0$, and combining (35) with (39) we have

$$\mu\left\|Q^{(k+1)}-Q^{(k)}\right\|_F^2\le F_{\delta_k}(Q^{(k)})-F_{\delta_{k+1}}(Q^{(k+1)}). \tag{40}$$

Since $F_{\delta_k}(Q)\ge 0$, summing (40) over $k$ and telescoping gives

$$\sum_{k=1}^{\infty}\left\|Q^{(k+1)}-Q^{(k)}\right\|_F^2\le\frac{1}{\mu}F_{\delta_1}(Q^{(1)})<+\infty; \tag{41}$$

therefore, we get the following limit:

$$\lim_{k\to\infty}\left\|Q^{(k+1)}-Q^{(k)}\right\|_F=0. \tag{42}$$

Since the set $\{Q:0\preceq Q\preceq I,\ \operatorname{tr}Q=D-d\}$ is a bounded and closed convex set, and $Q^{(k+1)}\in\{Q:0\preceq Q\preceq I,\ \operatorname{tr}Q=D-d\}$ for $k=1,2,\ldots$, the sequence $\{Q^{(k)}\}_{k\ge 1}$ has convergent subsequences. Suppose $\{Q^{(k_i)}\}_{i\ge 1}$ is one of these subsequences and

$$\bar{Q}=\lim_{i\to+\infty}Q^{(k_i)}. \tag{43}$$

Since for any $i\ge 1$ we have

$$\left\|Q^{(k_i+1)}-\bar{Q}\right\|_F=\left\|Q^{(k_i+1)}-Q^{(k_i)}+Q^{(k_i)}-\bar{Q}\right\|_F\le\left\|Q^{(k_i+1)}-Q^{(k_i)}\right\|_F+\left\|Q^{(k_i)}-\bar{Q}\right\|_F \tag{44}$$

and

$$\lim_{k\to\infty}\left\|Q^{(k+1)}-Q^{(k)}\right\|_F=0, \tag{45}$$

we obtain

$$\lim_{i\to+\infty}\left\|Q^{(k_i+1)}-\bar{Q}\right\|_F=0; \tag{46}$$

in other words,

$$\lim_{i\to+\infty}Q^{(k_i+1)}=\bar{Q}. \tag{47}$$

According to the definition of $Q^{(k_i+1)}$ and (10), for any $Q$ with $0\preceq Q\preceq I$ and $\operatorname{tr}Q=D-d$, we have

$$0\le\left\langle Q-Q^{(k_i+1)},\nabla F_{\delta_{k_i}}(Q^{(k_i)})+\left(Q^{(k_i+1)}-Q^{(k_i)}\right)C_{k_i}(Q^{(k_i)})\right\rangle. \tag{48}$$

Since (recalling that $\delta_k$ decreases to $\delta$)

$$\lim_{i\to+\infty}\nabla F_{\delta_{k_i}}(Q^{(k_i)})=\lim_{i\to+\infty}\sum_{x\in\chi}\frac{Q^{(k_i)}xx^{\mathrm T}}{\max\{\delta_{k_i},\|Q^{(k_i)}x\|\}}=\sum_{x\in\chi}\frac{\bar{Q}xx^{\mathrm T}}{\max\{\delta,\|\bar{Q}x\|\}}=\nabla F_{\delta}(\bar{Q}) \tag{49}$$

and

$$\left\|\left(Q^{(k_i+1)}-Q^{(k_i)}\right)C_{k_i}(Q^{(k_i)})\right\|_F=\left\|\sum_{x\in\chi}\frac{\left(Q^{(k_i+1)}-Q^{(k_i)}\right)xx^{\mathrm T}}{\max\{\delta_{k_i},\|Q^{(k_i)}x\|\}}\right\|_F\le\frac{1}{\delta}\sum_{x\in\chi}\left\|xx^{\mathrm T}\right\|_F\left\|Q^{(k_i+1)}-Q^{(k_i)}\right\|_F, \tag{50}$$

as well as $\left\|Q^{(k_i+1)}-Q^{(k_i)}\right\|_F\to 0$ as $i\to+\infty$, we have

$$\lim_{i\to+\infty}\left\|\left(Q^{(k_i+1)}-Q^{(k_i)}\right)C_{k_i}(Q^{(k_i)})\right\|_F=0, \tag{51}$$

thus

$$\lim_{i\to+\infty}\left(Q^{(k_i+1)}-Q^{(k_i)}\right)C_{k_i}(Q^{(k_i)})=0. \tag{52}$$

Taking the limit at both ends of the inequality (48) and using the continuity of the inner product, for any $Q$ with $0\preceq Q\preceq I$ and $\operatorname{tr}Q=D-d$, we have

$$0\le\left\langle Q-\bar{Q},\nabla F_{\delta}(\bar{Q})\right\rangle; \tag{53}$$

the variational inequality demonstrates that

$$\bar{Q}=\arg\min_{Q}\ F_{\delta}(Q) \quad \text{subject to}\ 0\preceq Q\preceq I,\ \operatorname{tr}Q=D-d. \tag{54}$$

This means that the limit point $Q_\delta$ of any convergent subsequence of $\{Q^{(k)}\}_{k\ge 1}$ is an optimal point of the following optimization problem:

$$Q_\delta=\arg\min_{Q}\ F_{\delta}(Q) \quad \text{subject to}\ 0\preceq Q\preceq I,\ \operatorname{tr}Q=D-d. \tag{55}$$

Now we set $P_\delta=I-Q_\delta$ and define $F_0(P):=\sum_{x\in\chi}\|(I-P)x\|$ with $0\preceq P\preceq I$ and $\operatorname{tr}P=d$. We also define $F_\delta(P):=\sum_{x\in\chi}H_\delta(\|(I-P)x\|,\|(I-P)x\|)$ and $P_0:=\arg\min F_0(P)$ with respect to the feasible set $0\preceq P\preceq I$, $\operatorname{tr}P=d$; then we have

$$P_\delta=\arg\min_{P}\ F_{\delta}(P) \quad \text{subject to}\ 0\preceq P\preceq I,\ \operatorname{tr}P=d. \tag{56}$$

Arguing as in Lemma 1, we have

$$F_0(P_\delta)-F_0(P_0)\le\frac{1}{2}\delta|\chi|, \tag{57}$$

where $|\chi|$ denotes the number of elements of $\chi$. Since $P_0$ is an optimal point of REAPER (3), this is exactly the bound (7), which completes the proof.

4. Numerical Experiments

In this section, we present a numerical experiment to show the efficiency of the CIRLS algorithm for solving problem (3). We compare the performance of our algorithm with IRLS on data generated from the following model. In the test, we randomly choose $N_{in}$ inliers sampled from the multivariate normal distribution $N(0,\Pi_L)$ on a $d$-dimensional subspace $L$, add $N_{out}$ outliers sampled from a uniform distribution on $[0,1]^D$, and also add Gaussian noise $N(0,\varepsilon^2 I)$. The experiment is performed in R, and all the experimental results were averaged over 10 independent trials.
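A numpy transcription of this data model (ours; the parameter names follow the text, while the random construction of the subspace $L$ is an assumption):

```python
import numpy as np

def make_data(D, d, N_in, N_out, eps=0.01, seed=0):
    """N_in inliers ~ N(0, Pi_L) on a random d-dimensional subspace L,
    N_out outliers ~ Uniform[0,1]^D, plus N(0, eps^2 I) noise on all points."""
    rng = np.random.default_rng(seed)
    U = np.linalg.qr(rng.normal(size=(D, d)))[0]    # orthonormal basis of L
    inliers = rng.normal(size=(N_in, d)) @ U.T      # covariance U U^T = Pi_L
    outliers = rng.uniform(size=(N_out, D))
    X = np.vstack([inliers, outliers])
    return X + eps * rng.normal(size=X.shape)       # additive Gaussian noise

X = make_data(D=50, d=10, N_in=50, N_out=200)       # one configuration of Table 1
```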

The parameters were set the same as in [12]: $\varepsilon=10^{-15}$ (the convergence tolerance), a noise level of $0.01$, and $\delta=10^{-10}$; we choose $\eta$ from $[0,1]$, and we set $\eta=\frac{1}{2}$ in general. We record the number of iterations $n$ of the two algorithms on the data sets generated with the given parameters. The results are shown in Table 1.

Table 1. The number of iterations of CIRLS and IRLS for different dimensions.

  D  | Method | d  | N_in | N_out | n (iterations)
-----|--------|----|------|-------|---------------
  10 | IRLS   |  2 |  50  |  200  | 22
  10 | CIRLS  |  2 |  50  |  200  | 13
  50 | IRLS   | 10 |  50  |  200  | 51
  50 | CIRLS  | 10 |  50  |  200  | 30
 100 | IRLS   | 10 |  50  |  200  | 51
 100 | CIRLS  | 10 |  50  |  200  | 31
 150 | IRLS   | 10 |  50  |  200  | 53
 150 | CIRLS  | 10 |  50  |  200  | 33
 200 | IRLS   | 10 |  50  |  200  | 53
 200 | CIRLS  | 10 |  50  |  200  | 35
 250 | IRLS   | 10 |  50  |  200  | 51
 250 | CIRLS  | 10 |  50  |  200  | 30
 300 | IRLS   | 10 |  70  |  280  | 49
 300 | CIRLS  | 10 |  70  |  280  | 30

We focus on the convergence speed of the two algorithms. Table 1 reports the numerical results of the two algorithms for different space dimensions. From the results, we can see that in every dimension the CIRLS algorithm performs better than the IRLS algorithm in convergence efficiency.

5. Conclusion

In this paper, we propose an efficient continuous iteratively reweighted least squares algorithm for solving the REAPER problem, and we prove the convergence of the algorithm. In addition, we present a bound between the convergent limit and the optimal point of the REAPER problem. Moreover, in the experimental part, we compare the algorithm with the IRLS algorithm to show that our algorithm is convergent and performs better than the IRLS algorithm in the rate of convergence.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Torre, F.D. and Black, M.J. (2001) Robust Principal Component Analysis for Computer Vision. Proceedings of the Eighth IEEE International Conference on Computer Vision, Vancouver, 7-14 July 2001, 362-369.

[2] Wagner, A., Wright, J. and Ganesh, A. (2009) Towards a Practical Face Recognition System: Robust Registration and Illumination by Sparse Representation. 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, 20-25 June 2009, 597-604. https://doi.org/10.1109/CVPR.2009.5206654

[3] Ho, J., Yang, M., Lim, J., Lee, K. and Kriegman, D. (2003) Clustering Appearances of Objects under Varying Illumination Conditions. 2003 IEEE Conference on Computer Vision and Pattern Recognition, Madison, WI, 18-20 June 2003, 11-18.

[4] Jolliffe, I.T. (2002) Principal Component Analysis. 2nd Edition, Springer-Verlag, New York.

[5] Watson, G.A. (2002) On the Gauss-Newton Method for l1 Orthogonal Distance Regression. IMA Journal of Numerical Analysis, 22, 345-357. https://doi.org/10.1093/imanum/22.3.345

[6] Ding, C., Zhou, D., He, X. and Zha, H. (2006) R1-PCA: Rotational Invariant L1-Norm Principal Component Analysis for Robust Subspace Factorization. Proceedings of the 23rd International Conference on Machine Learning, 281-288. https://doi.org/10.1145/1143844.1143880

[7] Kwak, N. (2008) Principal Component Analysis Based on L1-Norm Maximization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30, 1672-1680. https://doi.org/10.1109/TPAMI.2008.114

[8] Maronna, R.A., Martin, D.R. and Yohai, V.J. (2006) Robust Statistics. Wiley Series in Probability and Statistics. Wiley, Chichester. https://doi.org/10.1002/0470010940

[9] Zhang, T., Szlam, A. and Lerman, G. (2009) Median K-Flats for Hybrid Linear Modeling with Many Outliers. 2009 IEEE 12th International Conference on Computer Vision Workshops, 234-241.

[10] Wright, J., Peng, Y.G. and Ma, Y. (2009) Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Matrices by Convex Optimization. 23rd Annual Conference on Neural Information Processing Systems, Vancouver, 7-10 December 2009, 2080-2088.

[11] Candes, E.J., Li, X., Ma, Y. and Wright, J. (2011) Robust Principal Component Analysis? Journal of the ACM, 58, Article No. 11. https://doi.org/10.1145/1970392.1970395

[12] Lerman, G., McCoy, M.B., Tropp, J.A. and Zhang, T. (2015) Robust Computation of Linear Models by Convex Relaxation. Foundations of Computational Mathematics, 15, 363-410. https://doi.org/10.1007/s10208-014-9221-0
