A condition number for the tensor rank decomposition

Academic year: 2021

(1)

A condition number for

the tensor rank decomposition

Nick Vannieuwenhoven

FWO / KU Leuven

(2)

Overview

1 Introduction

2 Conditioning

3 Deriving the condition number

4 Norm-balanced condition number

(3)

Tensor rank decomposition

Hitchcock (1927) introduced the tensor rank decomposition:

$$T = \sum_{i=1}^{r} a_i^1 \otimes \cdots \otimes a_i^d.$$

The rank of a tensor is the minimum number of rank-1 tensors of which it is a linear combination.
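As a concrete illustration (a NumPy sketch, not part of the original slides; the helper name `cpd_reconstruct` is mine), the decomposition maps factor matrices, whose i-th columns hold the vectors of the i-th rank-1 term, to the tensor they represent — the analogue of Tensorlab's `cpdgen`:

```python
import numpy as np

def cpd_reconstruct(factors):
    # Sum of r rank-1 terms a_i^1 ⊗ ... ⊗ a_i^d, where column i of the
    # k-th factor matrix holds the vector a_i^k.
    r = factors[0].shape[1]
    T = np.zeros([A.shape[0] for A in factors])
    for i in range(r):
        term = factors[0][:, i]
        for A in factors[1:]:
            term = np.multiply.outer(term, A[:, i])
        T += term
    return T
```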

(4)

Identifiability

A rank-1 tensor is uniquely determined up to scaling:

$$a \otimes b \otimes c = (\alpha a) \otimes (\beta b) \otimes (\alpha^{-1}\beta^{-1} c).$$

Kruskal (1977) proved that the rank-1 terms appearing in

$$T = \sum_{i=1}^{r} a_i^1 \otimes a_i^2 \otimes \cdots \otimes a_i^d$$

are uniquely determined, up to the scaling and permutation indeterminacies, whenever the Kruskal ranks of the factor matrices are sufficiently large (for $d = 3$: $k_1 + k_2 + k_3 \ge 2r + 2$).

(5)

Generic identifiability

It is expected² [BCO13, COV14] that a random real or complex tensor rank decomposition

$$T = \sum_{i=1}^{r} a_i^1 \otimes a_i^2 \otimes \cdots \otimes a_i^d,$$

of strictly subgeneric rank, i.e.,

$$r < \frac{n_1 n_2 \cdots n_d}{n_1 + \cdots + n_d - d + 1},$$

is identifiable with probability 1, provided that it is not one of the exceptional cases where $(n_1, n_2, \ldots, n_d)$ is

(n₁, n₂), or

(4, 4, 3), (4, 4, 4), (6, 6, 3), (n, n, 2, 2), (2, 2, 2, 2, 2), or

$n_1 > \prod_{i=2}^{d} n_i - \sum_{i=2}^{d} (n_i - 1)$ (unbalanced).

² [COV14] proved the conjecture when $n_1 n_2 \cdots n_d \le 17500$.
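The two numeric conditions on the slide are mechanical to check; a small sketch (function names and structure are mine):

```python
from math import prod

def strictly_subgeneric(dims, r):
    # r < n1*...*nd / (n1 + ... + nd - d + 1), the bound from the slide
    d = len(dims)
    return r < prod(dims) / (sum(dims) - d + 1)

def unbalanced(dims):
    # n1 > prod(n_i, i>=2) - sum(n_i - 1, i>=2), with n1 the largest dimension
    ns = sorted(dims, reverse=True)
    return ns[0] > prod(ns[1:]) - sum(n - 1 for n in ns[1:])
```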

(6)

Perturbations and conditioning

Uniqueness is of central importance in applications, e.g., fluorescence spectroscopy, blind source separation, and parameter identification in latent variable models.

It is uncommon to work with the "true" tensor T. Usually we only have some approximation T̂. This discrepancy can originate from many sources: measurement errors, model errors, and numerical errors.

(7)

Perturbations and conditioning

A true decomposition

$$T = \sum_{i=1}^{r} a_i^1 \otimes a_i^2 \otimes \cdots \otimes a_i^d$$

is nice, but I only know T̂. I can compute an approximation

$$\widehat{T} \approx \sum_{i=1}^{r} \widehat{a}_i^1 \otimes \widehat{a}_i^2 \otimes \cdots \otimes \widehat{a}_i^d \approx T,$$

but what does it tell me about T? Is T's decomposition unique? Are the terms in T̂'s decomposition related to those of T?

(8)

Condition number

Definition

The relative condition number of a function f : X → Y at x ∈ X is

$$\kappa = \lim_{\epsilon \to 0} \; \max_{\|\Delta x\|_\beta \le \epsilon} \; \frac{\|f(x) - f(x + \Delta x)\|_\alpha / \|f(x)\|_\alpha}{\|\Delta x\|_\beta / \|x\|_\beta},$$

for some norms ‖·‖_α and ‖·‖_β.

[Figure: nearby inputs x and y are mapped to f(x) and f(y); κ bounds the amplification of their distance.]
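A tiny numerical check of this definition (my own illustration, not from the slides): for $f(x) = \sqrt{x}$ the relative condition number is $|x f'(x)/f(x)| = 1/2$, and a worst-case perturbation estimate recovers it:

```python
import numpy as np

f = np.sqrt
x = 4.0
eps = 1e-7  # perturbation budget |Δx| ≤ eps

# worst case over the (here two-point) perturbation set {±eps}
kappa_est = max(
    (abs(f(x) - f(x + dx)) / abs(f(x))) / (abs(dx) / abs(x))
    for dx in (eps, -eps)
)
```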

(9)

Example

Let A = [a₁ a₂ a₃ a₄], B = [b₁ b₂ b₃ b₄], and C_ε = [c + εc₁, c + εc₂, c + εc₃, c + εc₄] be 7 × 4 matrices with a_i, b_i, and c_i random vectors.

Consider a sequence of tensors

$$T_\epsilon = \sum_{i=1}^{4} a_i \otimes b_i \otimes (c + \epsilon c_i) \;\xrightarrow{\;\epsilon \to 0\;}\; \Bigl(\sum_{i=1}^{4} a_i \otimes b_i\Bigr) \otimes c.$$

Then, T_ε is 4-identifiable if ε ≠ 0, while the limit tensor is not.

(10)

Example

Let us compute the unique decomposition of T_ε in Tensorlab using an algebraic algorithm [dL06]:

T_eps = cpdgen({A, B, C_eps});
[U, out] = cpd_gevd(T_eps, 4);

Performance measures:

Relative backward error: ‖T̂ − T‖_F / ‖T‖_F.

Squared relative forward error:

$$\frac{\|\widehat{A} - A\|_F^2 + \|\widehat{B} - B\|_F^2 + \|\widehat{C} - C\|_F^2}{\|A\|_F^2 + \|B\|_F^2 + \|C\|_F^2},$$

after "fixing" the scaling and permutation indeterminacies.
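The indeterminacy-fixing step can be sketched in NumPy as follows (all names are mine; matching terms by the product of |cosines| across modes, and the per-column least-squares rescaling, are one convenient, slightly generous, way to remove the indeterminacies — not Tensorlab's own procedure):

```python
import numpy as np

def squared_relative_forward_error(true_factors, est_factors):
    # Match rank-1 terms, rescale matched columns, then apply the
    # error formula from the slide.
    r = true_factors[0].shape[1]
    score = np.ones((r, r))
    for A, B in zip(true_factors, est_factors):
        An = A / np.linalg.norm(A, axis=0)
        Bn = B / np.linalg.norm(B, axis=0)
        score *= np.abs(An.T @ Bn)   # |cosine| products across modes
    perm, used = [], set()
    for i in range(r):               # greedy matching of term i
        j = max((j for j in range(r) if j not in used), key=lambda j: score[i, j])
        perm.append(j); used.add(j)
    num = den = 0.0
    for A, B in zip(true_factors, est_factors):
        Bp = B[:, perm]
        # per-column least-squares scale removes the scaling indeterminacy
        s = np.sum(A * Bp, axis=0) / np.sum(Bp * Bp, axis=0)
        num += np.linalg.norm(A - Bp * s) ** 2
        den += np.linalg.norm(A) ** 2
    return num / den
```

For exactly recovered factors that are merely permuted and rescaled, the error is numerically zero, as the indeterminacies demand.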

(11)
(12)

Rough derivation: Linear approximation

Let f be the usual (overparameterized) tensor computation function:

$$f : (\mathbb{F}^{n_1} \times \cdots \times \mathbb{F}^{n_d})^{\times r} \to \mathbb{F}^{n_1 \cdots n_d},$$
$$\bigl((a_1^1, \ldots, a_1^d), \ldots, (a_r^1, \ldots, a_r^d)\bigr) \mapsto \sum_{i=1}^{r} a_i^1 \otimes \cdots \otimes a_i^d.$$

By definition of differentiability, we can write

$$f(x + \Delta) = f(x) + J\Delta + O(\|\Delta\|\,\|r(\Delta)\|), \quad \text{with } \lim_{\Delta \to 0} \|r(\Delta)\| = 0.$$

(13)

Rough derivation: Terracini’s Jacobian

For every rank-1 tensor $a_i^1 \otimes a_i^2 \otimes \cdots \otimes a_i^d \in \mathbb{F}^{n_1 n_2 \cdots n_d}$, we define the matrix

$$T_i = \begin{bmatrix} I_{n_1} \otimes a_i^2 \otimes \cdots \otimes a_i^d & \cdots & a_i^1 \otimes \cdots \otimes a_i^{d-1} \otimes I_{n_d} \end{bmatrix}.$$

Then, the Jacobian of f at x is given by

$$J = \begin{bmatrix} T_1 & T_2 & \cdots & T_r \end{bmatrix};$$

I call it Terracini's matrix.
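For d = 3, Terracini's matrix can be assembled directly from Kronecker products (a NumPy sketch under the vec-convention vec(a ⊗ b ⊗ c) = kron(a, kron(b, c)); the function name is mine):

```python
import numpy as np

def terracini_matrix(factors):
    # factors: list of r triples (a, b, c); block T_i holds the partial
    # derivatives of vec(a ⊗ b ⊗ c) with respect to a, b and c.
    blocks = []
    for a, b, c in factors:
        Ba = np.kron(np.eye(len(a)), np.kron(b, c)[:, None])           # d/da
        Bb = np.kron(a[:, None], np.kron(np.eye(len(b)), c[:, None]))  # d/db
        Bc = np.kron(np.kron(a, b)[:, None], np.eye(len(c)))           # d/dc
        blocks.append(np.hstack([Ba, Bb, Bc]))
    return np.hstack(blocks)
```

A finite-difference check confirms that this matrix is the Jacobian of f, and for random factors its rank equals r(n₁ + n₂ + n₃ − 2), in line with the bound on the next slides.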

(14)

Rough derivation: Bounding the condition number

Continuing from

$$f(x + \Delta) - f(x) = J\Delta + O(\|\Delta\|\,\|r(\Delta)\|),$$
$$J^+ \bigl(f(x + \Delta) - f(x)\bigr) = \Delta + O(\|\Delta\|\,\|r(\Delta)\|),$$

we find

$$\|J^+\|_2 \;\ge\; \frac{\|\Delta\| \bigl(1 + O(\|r(\Delta)\|)\bigr)}{\|f(x + \Delta) - f(x)\|},$$

where J⁺ is a left inverse of J. Hence,

$$\|J^+\|_2 \;\ge\; \kappa = \lim_{\epsilon \to 0} \; \max_{\Delta T \in G,\, \|\Delta T\| \le \epsilon} \frac{\|\Delta\|}{\|\Delta T\|}.$$

(15)

Rough derivation: Terracini’s matrix is not of full rank

The image of Terracini’s matrix is contained in the tangent space to the smallest (semi-)algebraic set enclosing the tensors of (real) complex rank equal to r . At smooth points they coincide.

The rank of the n1· · · nd × r (n1+ · · · + nd) Jacobian matrix J is

at most r (n1+ n2+ · · · + nd− d + 1).

(16)

Rough derivation: A bumpy road

Some issues:

1 The singular locus of the r-secant semialgebraic set obstructs a simple interpretation.

↪ Put an assumption of robust r-identifiability.

2 The quotient of the parameter space $P = (\mathbb{F}^{n_1} \times \cdots \times \mathbb{F}^{n_d})^{\times r}$ by the equivalence relation ∼ is not a metric space, because the orbits of ∼ are not closed. The natural manifold-based framework of [BC13] is eliminated.

↪ Measure distances by a premetric (no symmetry and no triangle inequality).

↪ Prove continuity of the inverse of f in this premetric.

↪ Bound the forward error by the operator norm of J†.

(17)

The norm-balanced condition number

Theorem (—, 2016)

Let J be Terracini's matrix associated with the rank-1 tensors $a_i^1 \otimes \cdots \otimes a_i^d \in \mathbb{F}^{n_1 \cdots n_d}$. Let $N = r(n_1 + \cdots + n_d - d + 1)$. If rank(J) = N, then

$$\kappa_A = \|J^\dagger\|_2$$

is an absolute condition number of the rank decomposition problem at $T = \sum_{i=1}^{r} a_i^1 \otimes \cdots \otimes a_i^d$.
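When rank(J) = N, ‖J†‖₂ is simply the reciprocal of the N-th largest singular value of J, which is how the condition number is computed in practice. A minimal check (my own sketch; a random matrix of rank N stands in for Terracini's matrix, and the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5                                # stands in for r(n1 + ... + nd - d + 1)
J = rng.standard_normal((12, N)) @ rng.standard_normal((N, 8))  # rank N
s = np.linalg.svd(J, compute_uv=False)
kappa_abs = 1.0 / s[N - 1]           # = ||J†||_2 when rank(J) = N
```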

(18)

Distance measure

(19)

Elementary properties

The relative condition number is scale-invariant: κ(T) = κ(αT).

The condition number is orthogonally invariant.

(20)

The case of weak 3-orthogonal tensors (—, 2016)

Let $\alpha_i \in \mathbb{R}_+$ be sorted as $\alpha_1 \ge \alpha_2 \ge \cdots \ge \alpha_r > 0$, and let

$$T = \sum_{i=1}^{r} \alpha_i\, v_i^1 \otimes \cdots \otimes v_i^d, \quad \text{with } \|v_i^k\| = 1,$$

be a robustly r-identifiable weak 3-orthogonal tensor:

$$\forall i < j : \exists\, 1 \le k_1 < k_2 < k_3 \le d : \langle v_i^{k_1}, v_j^{k_1} \rangle = \langle v_i^{k_2}, v_j^{k_2} \rangle = \langle v_i^{k_3}, v_j^{k_3} \rangle = 0,$$

where ⟨·,·⟩ is the Euclidean inner product. Then,

$$\kappa = \alpha_r^{-1 + 1/d} \, \sqrt{\sum_{i=1}^{r} \alpha_i^2} \Bigg/ \sqrt{\sum_{i=1}^{r} d\, \alpha_i^{2/d}}.$$

If $\alpha_1 = \cdots = \alpha_r$, then $\kappa = d^{-1/2}$.

(21)

Ill-posedness and ill-conditioning

The classic example from [dSL08] is the rank-3 tensor

$$a \otimes b \otimes z + a \otimes y \otimes c + x \otimes b \otimes c,$$

which is a limit of identifiable rank-2 tensors:

$$\lim_{\epsilon \to 0} \Bigl( \frac{1}{\epsilon} (a + \epsilon x) \otimes (b + \epsilon y) \otimes (c + \epsilon z) - \frac{1}{\epsilon}\, a \otimes b \otimes c \Bigr).$$

Experiments suggest that as you move towards an open part of the boundary of the r-secant variety of a Segre variety, the relative condition number becomes unbounded.
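The blow-up can be observed numerically. A self-contained sketch for d = 3 (helper names are mine; κ_A is computed as 1/σ_N of Terracini's matrix, built from Kronecker products, with the 1/ε factor absorbed into the first mode of each term):

```python
import numpy as np

def terracini(factors):
    # Terracini's matrix for a rank-r decomposition of a third-order tensor
    blocks = []
    for a, b, c in factors:
        Ba = np.kron(np.eye(len(a)), np.kron(b, c)[:, None])
        Bb = np.kron(a[:, None], np.kron(np.eye(len(b)), c[:, None]))
        Bc = np.kron(np.kron(a, b)[:, None], np.eye(len(c)))
        blocks.append(np.hstack([Ba, Bb, Bc]))
    return np.hstack(blocks)

rng = np.random.default_rng(0)
a, b, c, x, y, z = rng.standard_normal((6, 3))

def kappa_abs(eps):
    # the two rank-1 terms of the family above
    factors = [((a + eps * x) / eps, b + eps * y, c + eps * z),
               (-a / eps, b, c)]
    s = np.linalg.svd(terracini(factors), compute_uv=False)
    return 1.0 / s[2 * (3 + 3 + 3 - 2) - 1]   # N = 14
```

As ε decreases, κ_A grows without bound, matching the experiments described on the slide.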

(22)
(23)

Conclusions

Take-away messages:

Tensors of strictly subgeneric rank are conjectured to be identifiable.

Forward errors matter.

The condition number multiplied by the backward error bounds the forward error to first order.

The condition number of a decomposition can be computed in practice.

(24)
(25)

References

Main reference:

Vannieuwenhoven, A condition number for the tensor rank decomposition, arXiv:1604.00052, 2016.

Software and algorithms:

[dL06] De Lathauwer, A link between the canonical decomposition in multilinear algebra and simultaneous matrix diagonalization, SIAM J. Matrix Anal., 2006.

Sorber, Van Barel and De Lathauwer, Tensorlab v3.0, www.tensorlab.net.

(26)

References

Introduction

[H1927] Hitchcock, The expression of a tensor or a polyadic as a sum of products, J. Math. Phys., 1927.

[K1977] Kruskal, Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics, Lin. Alg. Appl., 1977.

Conditioning

[BC13] Bürgisser and Cucker, Condition: The Geometry of Numerical Algorithms, Springer, 2013.

[dSL08] de Silva and Lim, Tensor rank and the ill-posedness of the best low-rank approximation problem, SIAM J. Matrix Anal. Appl., 2008.

[V16] Vannieuwenhoven, A condition number for the tensor rank decomposition, arXiv:1604.00052.

(27)

References

Generic identifiability

[BCO13] Bocci, Chiantini, and Ottaviani, Refined methods for the identifiability of tensors, Ann. Mat. Pura Appl., 2013.

[CO12] Chiantini and Ottaviani, On generic identifiability of 3-tensors of small rank, SIAM J. Matrix Anal. Appl., 2012.

[COV14] Chiantini, Ottaviani, and Vannieuwenhoven, An algorithm for generic and low-rank specific identifiability of complex tensors, SIAM J. Matrix Anal. Appl., 2014.

[DdL15] Domanov and De Lathauwer, Generic uniqueness

conditions for the canonical polyadic decomposition and INDSCAL, SIAM J. Matrix Anal. Appl., 2015.

[HOOS15] Hauenstein, Oeding, Ottaviani, and Sommese, Homotopy techniques for tensor decomposition and perfect identifiability, arXiv, 2015.

(28)

References

Generic identifiability for symmetric tensors

[COV16] Chiantini, Ottaviani, and Vannieuwenhoven, On generic identifiability of symmetric tensors of subgeneric rank, Trans. Amer. Math. Soc., 2016. (Accepted)

[GM16] Galuppi and Mella, Identifiability of homogeneous polynomials and Cremona Transformations, arXiv, 2016.
