## Fundamental Conditions for Low-CP-Rank Tensor Completion

Morteza Ashraphijuo ASHRAPHIJUO@EE.COLUMBIA.EDU

Columbia University, New York, NY 10027, USA

Xiaodong Wang WANGX@EE.COLUMBIA.EDU

Columbia University, New York, NY 10027, USA

Editor: Animashree Anandkumar

Abstract

We consider the problem of low canonical polyadic (CP) rank tensor completion. A completion is a tensor whose entries agree with the observed entries and whose rank matches the given CP rank. We analyze the manifold structure corresponding to the tensors with the given rank and define a set of polynomials based on the sampling pattern and the CP decomposition. Then, we show that finite completability of the sampled tensor is equivalent to having a certain number of algebraically independent polynomials among the defined polynomials. Our proposed approach results in characterizing the maximum number of algebraically independent polynomials in terms of a simple geometric structure of the sampling pattern, and therefore we obtain the deterministic necessary and sufficient condition on the sampling pattern for finite completability of the sampled tensor. Moreover, assuming that the entries of the tensor are sampled independently with probability $p$ and using the mentioned deterministic analysis, we propose a combinatorial method to derive a lower bound on the sampling probability $p$, or equivalently, the number of sampled entries, that guarantees finite completability with high probability. We also show that the existing result for the matrix completion problem can be used to obtain a loose lower bound on the sampling probability $p$. In addition, we obtain deterministic and probabilistic conditions for unique completability. It is seen that the number of samples required for finite or unique completability obtained by the proposed analysis on the CP manifold is orders-of-magnitude lower than that obtained by the existing analysis on the Grassmannian manifold.

Keywords: Low-rank tensor completion, canonical polyadic (CP) decomposition, finite completability, unique completability, algebraic geometry, Bernstein's theorem.

1. Introduction

The fast progress in data science and technology has given rise to extensive applications of multi-way datasets, which allow us to take advantage of the inherent correlations across different attributes. Classical matrix analysis is limited in its ability to exploit the correlations across different features from a multi-way perspective. In contrast, analysis of multi-way data (tensors), which was originally proposed in the field of psychometrics and has recently found applications in machine learning and signal processing, is capable of taking full advantage of these correlations (De Lathauwer, 2009; Kolda and Bader, 2009; Abraham et al., 2012; Grasedyck et al., 2013; Muti and Bourennane, 2007; Signoretto et al., 2011). The problem of low-rank tensor completion, i.e., reconstructing a tensor from a subset of its entries given the rank, which is in general NP-hard (Hillar and Lim, 2013), arises in compressed sensing (Lim and Comon, 2010; Sidiropoulos and Kyrillidis, 2012; Gandy et al., 2011), visual data reconstruction (Liu et al., 2013, 2016), seismic data processing (Kreimer et al., 2013; Ely et al., 2013), etc. Existing approaches to low-rank data completion mainly focus on convex relaxation of matrix rank (Candès and Recht, 2009; Candès and Tao, 2010; Cai et al., 2010; Ashraphijuo et al., 2016b; Candès et al., 2013; Ashraphijuo et al., 2015) or different convex relaxations of tensor ranks (Gandy et al., 2011; Tomioka et al., 2010; Signoretto et al., 2014; Romera-Paredes and Pontil, 2013). In addition, there are some other works which provide a completion of the given rank using alternating minimization, e.g., for low-CP-rank tensors (Jain and Oh, 2014) and low-TT-rank tensors (Wang et al., 2016).

Tensors consisting of real-world datasets usually have a low-rank structure. The manifold of low-rank tensors has recently been investigated in several works (Grasedyck et al., 2013; Ashraphijuo et al., 2016a; Ashraphijuo and Wang, 2017). In this paper, we focus on the canonical polyadic (CP) decomposition (Harshman, 1970; ten Berge and Sidiropoulos, 2002; Carroll and Chang, 1970; Stegeman and Sidiropoulos, 2007) and the corresponding CP rank, but in general there are other well-known tensor decompositions, including the Tucker decomposition (Kolda, 2001; Grasedyck, 2010; Kressner et al., 2014), the tensor-train (TT) decomposition (Oseledets, 2011; Holtz et al., 2012), the tubal rank decomposition (Kilmer et al., 2013) and several other methods (De Lathauwer et al., 2000; Papalexakis et al., 2012). Note that most of the existing literature on tensor completion based on various optimization formulations uses CP rank (Gandy et al., 2011; Krishnamurthy and Singh, 2013).

Deterministic conditions on the locations of the sampled entries (the sampling pattern) have been obtained through algebraic geometry analyses on the Grassmannian manifold that lead to finite/unique solutions to the matrix completion problem (Pimentel-Alarcón et al., 2016d). Also, deterministic conditions on the sampling pattern have been studied for subspace clustering in (Pimentel-Alarcón et al., 2016c, 2015, 2016a,b). In this paper, we study the fundamental conditions on the sampling pattern that ensure a finite or unique number of completions, where these fundamental conditions are independent of the correlations of the entries of the tensor, in contrast to common assumptions adopted in the literature such as incoherence. Given the rank of a matrix, Pimentel-Alarcón et al. (Pimentel-Alarcón et al., 2016d) obtain such fundamental conditions on the sampling pattern for finite completability of the matrix. Previously, we treated the same problem for multi-view matrices (Ashraphijuo et al., 2017c,b), tensors given their Tucker rank (Ashraphijuo et al., 2016a, 2017a), and tensors given their TT rank (Ashraphijuo and Wang, 2017). In this paper, the structure of the CP decomposition and the geometry of the corresponding manifold are investigated to obtain the fundamental conditions for finite completability of a tensor given its CP rank. These recently developed algebraic geometry analyses can also be used to approximate the rank of partially sampled data (Ashraphijuo et al., 2017d).

To emphasize the contribution of this work, we highlight the differences and challenges in comparison with the Tucker and TT tensor models. In the CP decomposition, the notion of tensor multiplication is different from those for Tucker and TT, and therefore the geometry of the manifold and the algebraic variety are completely different. Moreover, in the CP decomposition we are dealing with a sum of several tensor products, which is not the case in the Tucker and TT decompositions, and therefore the equivalence classes or geometric patterns that are needed to study the algebraic variety are different. Furthermore, the CP rank is a scalar, and the ranks of the matricizations and unfoldings are not given, in contrast with the Tucker and TT models.

Let $\mathcal{U}$ denote the sampled tensor and $\Omega$ denote the binary sampling pattern tensor that is of the same dimension and size as $\mathcal{U}$. The entries of $\Omega$ that correspond to the observed entries of $\mathcal{U}$ are equal to one and the rest of the entries are equal to zero. We assume that each entry of $\mathcal{U}$ is observed independently with probability $p$. This paper is mainly concerned with treating the following three problems.

Problem (i): Given the CP rank, characterize the necessary and sufficient conditions on the sampling pattern $\Omega$ under which there exist only finitely many completions of $\mathcal{U}$.

We consider the CP decomposition of the sampled tensor, where all rank-$1$ tensors in this decomposition are unknown and we only have some entries of $\mathcal{U}$. Then, each sampled entry results in a polynomial such that the variables of the polynomial are the entries of the rank-$1$ tensors in the CP decomposition. We propose a novel analysis on the CP manifold to obtain the maximum number of algebraically independent polynomials, among all polynomials corresponding to the sampled entries, in terms of the geometric structure of the sampling pattern $\Omega$. We show that if the maximum number of algebraically independent polynomials is a given number, then the sampled tensor $\mathcal{U}$ is finitely completable. Due to the fundamental differences between the CP decomposition and the Tucker or TT decompositions, this analysis is completely different from our previous works (Ashraphijuo et al., 2016a; Ashraphijuo and Wang, 2017). Moreover, note that our proposed algebraic geometry analysis on the CP manifold is not a simple generalization of the existing analysis on the Grassmannian manifold (Pimentel-Alarcón et al., 2016d), even though the CP decomposition is a generalization of the rank factorization of a matrix, as almost every step needs to be developed anew.

Problem (ii): Characterize conditions on the sampling pattern to ensure that there is exactly one completion for the given CP rank.

Similar to Problem (i), our approach is to study the algebraic independence of the polynomials corresponding to the samples. We exploit the properties of a set of minimally algebraically dependent polynomials to add constraints on the sampling pattern such that each of the rank-$1$ tensors in the CP decomposition can be determined uniquely.

As we will observe later, the deterministic conditions for finite or unique completability are combinatorial in nature, and it is hard to verify whether they hold in practice. Therefore, we also provide a probabilistic analysis to verify the validity of such conditions based on the number of samples or the sampling probability. However, we can no longer guarantee the validity of the constraints with probability one (i.e., not deterministically); instead, we show that the combinatorial conditions hold true with high probability if the sampling probability is above a certain threshold.

Problem (iii): Provide a lower bound on the total number of sampled entries or the sampling probability p such that the proposed conditions on the sampling pattern Ω for finite and unique completability are satisfied with high probability.

We develop several combinatorial tools together with our previous graph theory results in (Ashraphijuo et al., 2016a) to obtain lower bounds on the total number of sampled entries, i.e., lower bounds on the sampling probability $p$, such that the deterministic conditions for Problems (i) and (ii) are met with high probability. In particular, it is shown in (Krishnamurthy and Singh, 2013) that $\mathcal{O}(n r^{\frac{d-1}{2}} d^2 \log(r))$ samples are required to recover the tensor $\mathcal{U} \in \mathbb{R}^{\overbrace{n \times \cdots \times n}^{d}}$ of rank $r$. Recall that in this paper, we obtain the number of samples that ensures finite/unique completability, which is independent of the completion algorithm. As we show later, using the existing analysis on the Grassmannian manifold results in $\mathcal{O}(n^{\frac{d+1}{2}} \max\{d\log(n) + \log(r), r\})$ samples to ensure finite/unique completability. However, the number of samples required by the proposed analysis on the CP manifold is substantially lower than that in (Krishnamurthy and Singh, 2013). Hence, the fundamental conditions for tensor completion motivate new optimization formulations to close the gap in the number of required samples.

The Tucker decomposition consists of a core tensor multiplied by a matrix along each dimension. The TT decomposition of a $d$-way tensor consists of the train-wise multiplication of a matrix, $d-2$ three-way tensors, and finally another matrix. The Tucker and TT ranks are defined as the vectors consisting of the ranks of the matricizations and the ranks of the unfoldings, respectively. The structure of the manifold of tensors with a fixed Tucker (or TT) rank vector is fundamentally different from that of the CP manifold.

Note that the general framework of using rank factorization to study the algebraic independence of the polynomials through counting the number of involved variables in the polynomials is indeed similar to that in (Pimentel-Alarcón et al., 2016d; Ashraphijuo et al., 2016a; Ashraphijuo and Wang, 2017) to some extent. However, since the manifold considered in this paper (the CP manifold) is fundamentally different from the Grassmannian, Tucker, or TT manifolds, all results in this paper are original. In particular, we mention some of the main differences: (i) the geometry of the manifold, (ii) the equivalence class for the basis and consequently (iii) the canonical basis, and (iv) the structure of the polynomials, etc., are fundamentally different from those in the literature. Hence, the deterministic analyses are fundamentally different (excluding Lemma 7). As a consequence, the probabilistic analysis is also basically different, as the combinatorial condition (6) is obtained based on the defined geometry and equivalence class for the CP decomposition. However, some lemmas, including the generalization of Hall's theorem for bipartite graphs (Lemma 18) and Lemma 17, which applies the pigeonhole principle, are taken from our previous papers (Ashraphijuo et al., 2016a; Ashraphijuo and Wang, 2017) in the probabilistic analysis.

The remainder of this paper is organized as follows. In Section 2, the preliminaries and problem statement are presented. In Section 3, we develop necessary and sufficient deterministic conditions for finite completability. In Section 4, we develop probabilistic conditions for finite completability. In Section 5, we consider unique completability and obtain both deterministic and probabilistic conditions. Some numerical results are provided in Section 6. Finally, Section 7 concludes the paper.

2. Background

2.1 Preliminaries and Notations

In this paper, it is assumed that a $d$-way tensor $\mathcal{U} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ is sampled. Throughout this paper, we use the CP rank as the rank of a tensor, which is defined as the minimum number $r$ such that there exist $\mathbf{a}_i^l \in \mathbb{R}^{n_i}$ for $1 \leq i \leq d$ and $1 \leq l \leq r$ and

$$\mathcal{U} = \sum_{l=1}^{r} \mathbf{a}_1^l \otimes \mathbf{a}_2^l \otimes \cdots \otimes \mathbf{a}_d^l, \qquad (1)$$

or equivalently,

$$\mathcal{U}(x_1, x_2, \ldots, x_d) = \sum_{l=1}^{r} \mathbf{a}_1^l(x_1)\, \mathbf{a}_2^l(x_2) \cdots \mathbf{a}_d^l(x_d), \qquad (2)$$

where $\otimes$ denotes the tensor (outer) product, $\mathcal{U}(x_1, x_2, \ldots, x_d)$ denotes the entry of the sampled tensor with coordinates $\vec{x} = (x_1, x_2, \ldots, x_d)$, and $\mathbf{a}_i^l(x_i)$ denotes the $x_i$-th entry of the vector $\mathbf{a}_i^l$. Note that $\mathbf{a}_1^l \otimes \mathbf{a}_2^l \otimes \cdots \otimes \mathbf{a}_d^l \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ is a rank-$1$ tensor, $l = 1, 2, \ldots, r$.
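To make (1) and (2) concrete, the following minimal sketch (a hypothetical NumPy example; the sizes, rank, and random factors are illustrative assumptions, not from the paper) builds a small CP tensor as a sum of outer products per (1) and checks that the entry-wise formula (2) agrees with it.

```python
import numpy as np

# Hypothetical 3-way example (d = 3, r = 2): build U from the CP
# decomposition (1) and check that the entry formula (2) agrees with it.
rng = np.random.default_rng(0)
n1, n2, n3, r = 4, 3, 5, 2
A1 = rng.standard_normal((n1, r))  # columns are a_1^1, ..., a_1^r
A2 = rng.standard_normal((n2, r))
A3 = rng.standard_normal((n3, r))

# Equation (1): U = sum_l a_1^l (outer) a_2^l (outer) a_3^l
U = sum(np.einsum('i,j,k->ijk', A1[:, l], A2[:, l], A3[:, l]) for l in range(r))

# Equation (2): U(x1, x2, x3) = sum_l a_1^l(x1) a_2^l(x2) a_3^l(x3)
x = (2, 1, 4)  # 0-based coordinates
entry = sum(A1[x[0], l] * A2[x[1], l] * A3[x[2], l] for l in range(r))
assert np.isclose(U[x], entry)
```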

Denote by $\Omega$ the binary sampling pattern tensor that is of the same size as $\mathcal{U}$, with $\Omega(\vec{x}) = 1$ if $\mathcal{U}(\vec{x})$ is observed and $\Omega(\vec{x}) = 0$ otherwise. Also, define $x^+ \triangleq \max\{0, x\}$.

For a nonempty $I \subset \{1, \ldots, d\}$, define $N_I \triangleq \prod_{i \in I} n_i$ and also denote $\bar{I} \triangleq \{1, \ldots, d\} \backslash I$. Let $\widetilde{\mathbf{U}}_{(I)} \in \mathbb{R}^{N_I \times N_{\bar{I}}}$ be the unfolding of the tensor $\mathcal{U}$ corresponding to the set $I$ such that $\mathcal{U}(\vec{x}) = \widetilde{\mathbf{U}}_{(I)}\big(\widetilde{M}_I(x_{i_1}, \ldots, x_{i_{|I|}}), \widetilde{M}_{\bar{I}}(x_{i_{|I|+1}}, \ldots, x_{i_d})\big)$, where $I = \{i_1, \ldots, i_{|I|}\}$, $\bar{I} = \{i_{|I|+1}, \ldots, i_d\}$, and $\widetilde{M}_I : (x_{i_1}, \ldots, x_{i_{|I|}}) \to \{1, 2, \ldots, N_I\}$ and $\widetilde{M}_{\bar{I}} : (x_{i_{|I|+1}}, \ldots, x_{i_d}) \to \{1, 2, \ldots, N_{\bar{I}}\}$ are two bijective mappings. For $i \in \{1, \ldots, d\}$ and $I = \{i\}$, we denote the unfolding corresponding to $I$ by $\mathbf{U}_{(i)}$ and call it the $i$-th matricization of the tensor $\mathcal{U}$.
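As an illustration, the $i$-th matricization can be realized in NumPy by moving axis $i$ to the front and flattening the remaining axes; the C-order flattening below is merely one admissible choice for the bijective mapping $\widetilde{M}_{\bar{I}}$ (this sketch and the helper name are ours, not the paper's).

```python
import numpy as np

# Mode-i matricization U_(i): rows are indexed by dimension i; columns by one
# fixed bijection (here, C-order flattening) of the remaining dimensions.
def matricize(U, i):
    return np.moveaxis(U, i, 0).reshape(U.shape[i], -1)

U = np.arange(24).reshape(2, 3, 4)
for i in range(3):
    Ui = matricize(U, i)
    assert Ui.shape == (U.shape[i], U.size // U.shape[i])
    # Each entry of U appears exactly once in the unfolding.
    assert sorted(Ui.ravel()) == sorted(U.ravel())
```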

2.2 Problem Statement and A Motivating Example

We are interested in finding necessary and sufficient deterministic conditions on the sampling pattern tensor $\Omega$ under which there are infinitely many, finitely many, or unique completions of the sampled tensor $\mathcal{U}$ that satisfy the given CP rank constraint. Furthermore, we are interested in finding probabilistic conditions on the number of samples or the sampling probability that ensure the obtained deterministic conditions for finite and unique completability hold with high probability.

To motivate our proposed analysis on the CP manifold, we compare the following two approaches using a simple example to emphasize the necessity of our proposed analysis: (i) analyzing each of the unfoldings individually, and (ii) analyzing based on the CP decomposition.

Consider a three-way tensor $\mathcal{U} \in \mathbb{R}^{2 \times 2 \times 2}$ with CP rank $1$. Assume that the entries $(1,1,1)$, $(2,1,1)$, $(1,2,1)$ and $(1,1,2)$ are observed. As a result of Lemma 11 in this paper, all unfoldings of this tensor are rank-$1$ matrices. It is shown in Section II of (Ashraphijuo et al., 2016a) that given any $4$ entries of a rank-$1$ $2 \times 4$ matrix, there are infinitely many completions of it. As a result, any unfolding of $\mathcal{U}$ has infinitely many completions given only the corresponding rank constraint. Next, using the CP decomposition (1), we show that there are only finitely many completions of the sampled tensor of CP rank $1$.

Define $\mathbf{a}_1^1 = [x \;\; x']^\top \in \mathbb{R}^2$, $\mathbf{a}_2^1 = [y \;\; y']^\top \in \mathbb{R}^2$ and $\mathbf{a}_3^1 = [z \;\; z']^\top \in \mathbb{R}^2$. Then, according to (1), we have

$$\begin{array}{ll} \mathcal{U}(1,1,1) = xyz, & \mathcal{U}(2,2,1) = x'y'z, \\ \mathcal{U}(2,1,1) = x'yz, & \mathcal{U}(2,1,2) = x'yz', \\ \mathcal{U}(1,2,1) = xy'z, & \mathcal{U}(1,2,2) = xy'z'. \end{array} \qquad (3)$$

Recall that $(1,1,1)$, $(2,1,1)$, $(1,2,1)$, and $(1,1,2)$ are the observed entries. Hence, the unknown entries can be determined uniquely in terms of the $4$ observed entries as

$$\begin{aligned} \mathcal{U}(2,2,1) &= x'y'z = \frac{\mathcal{U}(2,1,1)\,\mathcal{U}(1,2,1)}{\mathcal{U}(1,1,1)}, \\ \mathcal{U}(2,1,2) &= x'yz' = \frac{\mathcal{U}(2,1,1)\,\mathcal{U}(1,1,2)}{\mathcal{U}(1,1,1)}, \\ \mathcal{U}(1,2,2) &= xy'z' = \frac{\mathcal{U}(1,2,1)\,\mathcal{U}(1,1,2)}{\mathcal{U}(1,1,1)}, \\ \mathcal{U}(2,2,2) &= x'y'z' = \frac{\mathcal{U}(2,1,1)\,\mathcal{U}(1,2,1)\,\mathcal{U}(1,1,2)}{\mathcal{U}(1,1,1)\,\mathcal{U}(1,1,1)}. \end{aligned} \qquad (4)$$

Therefore, based on the CP decomposition, the sampled tensor $\mathcal{U}$ is finitely (in fact, uniquely) completable. Hence, this example illustrates that collapsing a tensor into a matrix results in a loss of information, which motivates the investigation of the CP manifold of tensors.
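The closed-form expressions in (4) can be checked numerically. The sketch below draws generic nonzero factor entries at random (an illustrative assumption) and confirms that the four unobserved entries of the rank-$1$ tensor are recovered from the four observed ones.

```python
import numpy as np

# Verify the completion formulas (4) for the 2x2x2 rank-1 example, with
# generic nonzero factor entries x, x', y, y', z, z'.
rng = np.random.default_rng(1)
a1, a2, a3 = rng.uniform(0.5, 2.0, (3, 2))  # a1 = [x, x'], a2 = [y, y'], a3 = [z, z']
U = np.einsum('i,j,k->ijk', a1, a2, a3)      # rank-1 tensor, eq. (1)

# Observed entries: (1,1,1), (2,1,1), (1,2,1), (1,1,2) -> 0-based indices.
o111, o211, o121, o112 = U[0,0,0], U[1,0,0], U[0,1,0], U[0,0,1]
assert np.isclose(U[1,1,0], o211 * o121 / o111)            # U(2,2,1)
assert np.isclose(U[1,0,1], o211 * o112 / o111)            # U(2,1,2)
assert np.isclose(U[0,1,1], o121 * o112 / o111)            # U(1,2,2)
assert np.isclose(U[1,1,1], o211 * o121 * o112 / o111**2)  # U(2,2,2)
```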

3. Deterministic Conditions for Finite Completability

In this section, we characterize the necessary and sufficient condition on the sampling pattern for finite completability of the sampled tensor given its CP rank. In Section 3.1, we define a polynomial based on each observed entry, and through studying the geometry of the manifold of the corresponding CP rank, we transform the problem of finite completability of $\mathcal{U}$ into the problem of including a sufficient number of algebraically independent polynomials among the defined polynomials for the observed entries. In Section 3.2, a binary tensor is constructed based on the sampling pattern $\Omega$, which allows us to study the algebraic independence of a subset of polynomials among all polynomials defined based on the samples. In Section 3.3, we characterize the connection between the maximum number of algebraically independent polynomials among all the defined polynomials and finite completability of the sampled tensor.

3.1 Geometry

Suppose that the sampled tensor $\mathcal{U}$ is chosen generically from the manifold of tensors in $\mathbb{R}^{n_1 \times \cdots \times n_d}$ of rank $r$. Assume that the vectors $\mathbf{a}_i^l$ are unknown for $1 \leq i \leq d$ and $1 \leq l \leq r$. For notational simplicity, define the tuples $A_l = (\mathbf{a}_1^l, \mathbf{a}_2^l, \ldots, \mathbf{a}_d^l)$ for $l = 1, \ldots, r$ and $A = (A_1, \ldots, A_r)$. Moreover, define $\mathbf{A}_i = [\mathbf{a}_i^1 | \mathbf{a}_i^2 | \cdots | \mathbf{a}_i^r] \in \mathbb{R}^{n_i \times r}$. Note that each of the sampled entries results in a polynomial in terms of the entries of $A$ as in (2).

Here, we briefly mention the following two facts to highlight the fundamentals of our proposed analysis.

• Fact 1: As can be seen from (2), any observed entry $\mathcal{U}(\vec{x})$ results in an equation that involves one entry of $\mathbf{a}_i^l$, $i = 1, \ldots, d$ and $l = 1, \ldots, r$. Considering the entries of $A$ as variables (right-hand side of (2)), each observed entry results in a polynomial in terms of these variables. Moreover, for any observed entry $\mathcal{U}(\vec{x})$, the value of $x_i$ specifies the location of the entry of $\mathbf{a}_i^l$ that is involved in the corresponding polynomial, $i = 1, \ldots, d$ and $l = 1, \ldots, r$.

• Fact 2: As the sampled tensor is chosen generically from the corresponding manifold, the coefficients of the polynomials obtained from the observed entries are generic; hence, a subset of these polynomials whose size does not exceed the number of variables involved in it is algebraically independent with probability one, and therefore there exist only finitely many solutions.

The following assumption will be used frequently in this paper.

Assumption 1: Each row of the $d$-th matricization of the sampled tensor, i.e., $\mathbf{U}_{(d)}$, includes at least $r$ observed entries.

Observe that each row of the $d$-th matricization of the sampled tensor includes $n_1 \times n_2 \times \cdots \times n_{d-1}$ entries. To show that Assumption 1 is a very mild assumption, assume as an example that $n_1 = n_2 = \cdots = n_{d-1} = n$. Then, each row of $\mathbf{U}_{(d)}$ has $n^{d-1}$ entries, and Assumption 1 requires $r$ sampled entries among them. As we consider the low-rank scenario, we show in Section 4 how realistic this assumption is in terms of the sampling probability.

Lemma 1 Given $\mathbf{A}_i$ for $i = 1, \ldots, d-1$ and Assumption 1, $\mathbf{A}_d$ can be determined uniquely.

Proof Each row of $\mathbf{A}_d$ has $r$ entries, and as can be seen from (2), each observed entry in the $i$-th row of $\mathbf{U}_{(d)}$ results in a degree-$1$ polynomial in terms of the $r$ entries of the $i$-th row of $\mathbf{A}_d$. Since Assumption 1 holds, for each row of $\mathbf{A}_d$, which has $r$ variables, we have at least $r$ degree-$1$ polynomials. Genericity of the coefficients of these polynomials implies that each row of $\mathbf{A}_d$ can be determined uniquely.
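A minimal numerical sketch of Lemma 1 for $d = 3$ (the sizes and sample locations below are illustrative assumptions): given $\mathbf{A}_1$ and $\mathbf{A}_2$, the $r$ observed entries in a row of $\mathbf{U}_{(3)}$ yield $r$ degree-$1$ equations whose generic $r \times r$ coefficient matrix is invertible, so the corresponding row of $\mathbf{A}_3$ is determined uniquely.

```python
import numpy as np

# Lemma 1 sketch for d = 3: given A1, A2 and r observed entries in row i of
# the 3rd matricization, row i of A3 solves an r x r generic linear system.
rng = np.random.default_rng(2)
n1, n2, n3, r = 4, 4, 3, 2
A1, A2, A3 = (rng.standard_normal((n, r)) for n in (n1, n2, n3))
U = np.einsum('il,jl,kl->ijk', A1, A2, A3)   # CP tensor, eq. (1)

i = 1                                   # a row of U_(3)
samples = [(0, 0), (2, 3)]              # r observed entries (x1, x2) in that row
C = np.array([A1[x1] * A2[x2] for x1, x2 in samples])  # coefficient rows
b = np.array([U[x1, x2, i] for x1, x2 in samples])     # observed values
row = np.linalg.solve(C, b)             # unique for generic coefficients
assert np.allclose(row, A3[i])
```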

As a result of Lemma 1, we can obtain $\mathbf{A}_d$ in terms of the entries of $\mathbf{A}_i$ for $i = 1, \ldots, d-1$. As mentioned earlier, each observed entry is equivalent to a polynomial of the form (2). Consider all such polynomials excluding those that have been used to obtain $\mathbf{A}_d$ ($r$ samples in each row of $\mathbf{U}_{(d)}$), and denote this set of polynomials in terms of the entries of $\mathbf{A}_i$ for $i = 1, \ldots, d-1$ by $\mathcal{P}(\Omega)$.

We are interested in defining an equivalence class such that each class includes exactly one of the decompositions among all rank-$r$ decompositions of a particular tensor; the pattern in Lemma 3 characterizes such an equivalence class. Lemma 2 below is a re-statement of Lemma 1 in (Ashraphijuo and Wang, 2017), which characterizes such an equivalence class, or equivalently a geometric pattern, for a matrix instead of a tensor. This lemma will be used to show Lemma 3 later.

Lemma 2 Let $\mathbf{X}$ denote a generically chosen matrix from the manifold of $n_1 \times n_2$ matrices of rank $r$, and let $\mathbf{Q} \in \mathbb{R}^{r \times r}$ be a given full rank matrix. Then, there exists a unique decomposition $\mathbf{X} = \mathbf{Y}\mathbf{Z}$ such that $\mathbf{Y} \in \mathbb{R}^{n_1 \times r}$, $\mathbf{Z} \in \mathbb{R}^{r \times n_2}$ and $\mathbf{P} = \mathbf{Q}$, where $\mathbf{P} \in \mathbb{R}^{r \times r}$ represents a submatrix¹ of $\mathbf{Y}$.

In Lemma 3, we generalize Lemma 2 and characterize a similar pattern for a multi-way tensor. Assuming that $\mathbf{P}$ represents the submatrix of $\mathbf{Y}$ consisting of the first $r$ columns and the first $r$ rows of $\mathbf{Y}$, and that $\mathbf{Q}$ is equal to the $r \times r$ identity matrix, this pattern is called the canonical decomposition of $\mathbf{X}$. The canonical decomposition is shown for a rank-$2$ matrix as follows:

$$\begin{bmatrix} 1 & -1 & 0 & -1 \\ 2 & 2 & 4 & 6 \\ -1 & 3 & 2 & 5 \\ 1 & 2 & 3 & 5 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ y_1 & y_2 \\ y_3 & y_4 \end{bmatrix} \times \begin{bmatrix} x_1 & x_2 & x_3 & x_4 \\ x_5 & x_6 & x_7 & x_8 \end{bmatrix},$$

where the $x_i$'s and $y_i$'s can be determined uniquely as

$$\begin{bmatrix} y_1 & y_2 \\ y_3 & y_4 \end{bmatrix} = \begin{bmatrix} -2 & \frac{1}{2} \\ -\frac{1}{2} & \frac{3}{4} \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} x_1 & x_2 & x_3 & x_4 \\ x_5 & x_6 & x_7 & x_8 \end{bmatrix} = \begin{bmatrix} 1 & -1 & 0 & -1 \\ 2 & 2 & 4 & 6 \end{bmatrix}.$$

Also, the above canonical decomposition can be written as follows:

$$\begin{bmatrix} 1 & -1 & 0 & -1 \\ 2 & 2 & 4 & 6 \\ -1 & 3 & 2 & 5 \\ 1 & 2 & 3 & 5 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ y_1 \\ y_3 \end{bmatrix} \times \begin{bmatrix} x_1 & x_2 & x_3 & x_4 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \\ y_2 \\ y_4 \end{bmatrix} \times \begin{bmatrix} x_5 & x_6 & x_7 & x_8 \end{bmatrix}.$$
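The values of the $y_i$'s above can be verified numerically. The sketch below recovers the canonical decomposition of the rank-$2$ example by fixing the top $r \times r$ block of $\mathbf{Y}$ to the identity (so $\mathbf{Z}$ equals the first $r$ rows of $\mathbf{X}$) and solving small linear systems for the remaining rows of $\mathbf{Y}$.

```python
import numpy as np

# Canonical decomposition X = Y Z of the rank-2 example: the top r x r block
# of Y is the identity, so Z is the first r rows of X, and each remaining
# row of Y solves a small (consistent) linear system.
X = np.array([[ 1., -1., 0., -1.],
              [ 2.,  2., 4.,  6.],
              [-1.,  3., 2.,  5.],
              [ 1.,  2., 3.,  5.]])
r = 2
Z = X[:r]                                    # rows [x1..x4] and [x5..x8]
Y = np.vstack([np.eye(r)] + [np.linalg.lstsq(Z.T, X[k], rcond=None)[0]
                             for k in range(r, X.shape[0])])
assert np.allclose(Y @ Z, X)
assert np.allclose(Y[2], [-2.0, 0.5])        # (y1, y2)
assert np.allclose(Y[3], [-0.5, 0.75])       # (y3, y4)
```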

The generalization of the canonical decomposition to a multi-way tensor is as follows:

$$\mathbf{a}_1^1 = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \\ \mathbf{a}_1^1(r+1) \\ \vdots \\ \mathbf{a}_1^1(n_1) \end{bmatrix}, \quad \ldots, \quad \mathbf{a}_1^r = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \\ \mathbf{a}_1^r(r+1) \\ \vdots \\ \mathbf{a}_1^r(n_1) \end{bmatrix},$$

and for $i \in \{2, \ldots, d-1\}$,

$$\mathbf{a}_i^1 = \begin{bmatrix} 1 \\ \mathbf{a}_i^1(2) \\ \vdots \\ \mathbf{a}_i^1(n_i) \end{bmatrix}, \quad \ldots, \quad \mathbf{a}_i^r = \begin{bmatrix} 1 \\ \mathbf{a}_i^r(2) \\ \vdots \\ \mathbf{a}_i^r(n_i) \end{bmatrix}.$$

Lemma 3 Let $j \in \{1, \ldots, d-1\}$ be a fixed number and define $J = \{1, \ldots, d-1\} \backslash \{j\}$. Assume that the full rank matrix $\mathbf{Q}_j \in \mathbb{R}^{r \times r}$ and matrices $\mathbf{Q}_i \in \mathbb{R}^{1 \times r}$ with nonzero entries for $i \in J$ are given. Also, let $\mathbf{P}_i$ denote an arbitrary submatrix of $\mathbf{A}_i$, $i = 1, 2, \ldots, d-1$, where $\mathbf{P}_j \in \mathbb{R}^{r \times r}$ and $\mathbf{P}_i \in \mathbb{R}^{1 \times r}$ for $i \in J$. Then, with probability one, there exists exactly one rank-$r$ decomposition of $\mathcal{U}$ such that $\mathbf{P}_i = \mathbf{Q}_i$, $i = 1, \ldots, d-1$.

Proof First, we claim that there exists at most one rank-$r$ decomposition of $\mathcal{U}$ such that $\mathbf{P}_i = \mathbf{Q}_i$, $i = 1, \ldots, d-1$. We assume that $\mathbf{P}_i = \mathbf{Q}_i$, $i = 1, \ldots, d-1$, and that $\mathcal{U}$ is given. Then, it suffices to show that the rest of the entries of $A$ can be determined in at most one way (no more than one solution) in terms of the given parameters such that (1) holds. Note that if a variable can be determined through two different ways (two sets of samples or equations), then in general either it can be determined uniquely, if both ways result in the same value, or it does not have any solution otherwise. Let $y_i$ denote the row number of the submatrix $\mathbf{P}_i \in \mathbb{R}^{1 \times r}$ for $i \in J$, and let $Y_j = \{y_j^1, \ldots, y_j^r\}$ denote the row numbers of the submatrix $\mathbf{P}_j \in \mathbb{R}^{r \times r}$.

As the first step of proving our claim, we show that $\mathbf{A}_d$ can be determined uniquely. Consider the subtensor $\mathcal{U}' = \mathcal{U}(y_1, \ldots, y_{j-1}, Y_j, y_{j+1}, \ldots, y_{d-1}, :) \in \mathbb{R}^{\overbrace{1 \times \cdots \times 1}^{j-1} \times r \times \overbrace{1 \times \cdots \times 1}^{d-j-1} \times n_d}$, which includes $r n_d$ entries. Given the CP decomposition (2), each entry of $\mathcal{U}'$ results in one degree-$1$ polynomial in terms of the entries of $\mathbf{A}_d$, with coefficients in terms of the entries of the $\mathbf{Q}_i$'s. Let the matrix $\mathbf{U}' \in \mathbb{R}^{r \times n_d}$ represent the $r n_d$ entries of $\mathcal{U}'$. Moreover, define $\mathbf{C} = [\mathbf{c}_1 | \ldots | \mathbf{c}_r] \in \mathbb{R}^{r \times r}$, where $\mathbf{c}_l = \left(\prod_{i \in J} \mathbf{P}_i(1, l)\right) \mathbf{q}_l^j \in \mathbb{R}^{r \times 1}$ for $l = 1, \ldots, r$, and $\mathbf{q}_l^j \in \mathbb{R}^{r \times 1}$ is the $l$-th column of $\mathbf{Q}_j$.

Observe that the CP decomposition (2) for the entries of $\mathcal{U}'$ can be written as $\mathbf{U}' = \mathbf{C} \mathbf{A}_d^\top$. Recall that $\mathbf{Q}_j$ is full rank, and therefore the $\mathbf{q}_l^j$'s are linearly independent, $l = 1, \ldots, r$. Also, a system of equations with at least $m$ linearly independent degree-$1$ polynomials in $m$ variables does not have more than one solution. Hence, the $\mathbf{c}_l$'s are also linearly independent for $l = 1, \ldots, r$, and therefore $\mathbf{C}$ is full rank. As a result, $\mathbf{A}_d$ can be determined uniquely. In the second step, similar to the first step, we can show that the rest of the $\mathbf{A}_i$'s have at most one solution given one entry of $\mathbf{A}_d$, which has already been obtained.

Finally, we also claim that there exists at least one rank-$r$ decomposition of $\mathcal{U}$ such that $\mathbf{P}_i = \mathbf{Q}_i$, $i = 1, \ldots, d-1$. We show this by induction on $d$. For $d = 2$, this is a result of Lemma 2. The induction hypothesis states that the claim holds for $d = k-1$, and we need to show that it also holds for $d = k$. By merging dimensions $k-1$ and $k$ for each of the rank-$1$ tensors of the corresponding CP decomposition and using the induction hypothesis, this step reduces to showing that a rank-$1$ matrix can be decomposed into two vectors such that one component of one of them is given, which is again a special case of Lemma 2 for the rank-$1$ scenario.

Assume that $S$ denotes the set of all possible $\mathbf{A}_i$'s for $i = 1, \ldots, d-1$ given $\mathbf{A}_d$, without any polynomial constraint. Lemma 3 results in a pattern that characterizes exactly one rank-$r$ decomposition among all rank-$r$ decompositions, and therefore the dimension of $S$ is equal to the number of unknowns, i.e., the number of entries of the $\mathbf{A}_i$'s for $i = 1, \ldots, d-1$ excluding those that are involved in the patterns $\mathbf{P}_i$ in Lemma 3, which is $r\left(\sum_{i=1}^{d-1} n_i\right) - r^2 - r(d-2)$.

Lemma 4 For almost every $\mathcal{U}$, the sampled tensor is finitely completable if and only if the maximum number of algebraically independent polynomials in $\mathcal{P}(\Omega)$ is equal to $r\left(\sum_{i=1}^{d-1} n_i\right) - r^2 - r(d-2)$.

Proof The proof is omitted due to its similarity to the proof of Lemma 2 in (Ashraphijuo et al., 2016a), with the only difference that here the dimension is $r\left(\sum_{i=1}^{d-1} n_i\right) - r^2 - r(d-2)$ instead of $\left(\prod_{i=1}^{j} n_i \prod_{i=j+1}^{d} r_i\right) - \sum_{i=j+1}^{d} r_i^2$, which is the dimension of the core in the Tucker decomposition.

3.2 Constraint Tensor

In this section, we provide a procedure to construct a binary tensor $\breve{\Omega}$ based on $\Omega$ such that $\mathcal{P}(\breve{\Omega}) = \mathcal{P}(\Omega)$ and each polynomial can be represented by one $d$-way subtensor of $\breve{\Omega}$ that belongs to $\mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times 1}$. Using $\breve{\Omega}$, we are able to recognize the observed entries that have been used to obtain $\mathbf{A}_d$ in terms of the entries of $\mathbf{A}_1, \ldots, \mathbf{A}_{d-1}$, and we can study the algebraic independence of the polynomials in $\mathcal{P}(\Omega)$, which is directly related to finite completability through Lemma 4.
For each subtensor $\mathcal{Y}$ of the sampled tensor $\mathcal{U}$, let $N_{\Omega}(\mathcal{Y})$ denote the number of sampled entries in $\mathcal{Y}$. Specifically, consider any subtensor $\mathcal{Y} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times 1}$ of the tensor $\mathcal{U}$. Then, since $r$ of the sampled entries of $\mathcal{Y}$ are used to obtain $\mathbf{A}_d$ (Lemma 1), the subtensor $\mathcal{Y}$ contributes $N_{\Omega}(\mathcal{Y}) - r$ polynomials to $\mathcal{P}(\Omega)$.
The sampled tensor $\mathcal{U}$ includes $n_d$ subtensors that belong to $\mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times 1}$, and let $\mathcal{Y}_i$ for $1 \leq i \leq n_d$ denote these $n_d$ subtensors. Define a binary valued tensor $\breve{\mathcal{Y}}_i \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times k_i}$, where $k_i = N_{\Omega}(\mathcal{Y}_i) - r$, with its entries described as follows. We can view $\breve{\mathcal{Y}}_i$ as $k_i$ tensors, each belonging to $\mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times 1}$. For each of the mentioned $k_i$ tensors in $\breve{\mathcal{Y}}_i$, we set the entries corresponding to the $r$ observed entries that are used to obtain $\mathbf{A}_d$ equal to $1$. For each of the other $k_i$ observed entries that have not been used to obtain $\mathbf{A}_d$, we pick one of the $k_i$ tensors of $\breve{\mathcal{Y}}_i$ and set its corresponding entry (the same location as that specific observed entry) equal to $1$, and set the rest of the entries equal to $0$. In the case that $k_i = 0$, we simply ignore $\breve{\mathcal{Y}}_i$, i.e., $\breve{\mathcal{Y}}_i = \emptyset$.

By putting together all $n_d$ tensors in dimension $d$, we construct a binary valued tensor $\breve{\Omega} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times K}$, where $K = \sum_{i=1}^{n_d} k_i = N_{\Omega}(\mathcal{U}) - r n_d$, and call it the constraint tensor. Observe that each subtensor of $\breve{\Omega}$ which belongs to $\mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times 1}$ includes exactly $r + 1$ nonzero entries. In the following, we show this procedure for an example.

Example 1 Consider an example in which $d = 3$, $r = 2$ and $\mathcal{U} \in \mathbb{R}^{3 \times 3 \times 3}$. Assume that $\Omega(x, y, z) = 1$ if $(x, y, z) \in \mathcal{S}$ and $\Omega(x, y, z) = 0$ otherwise, where

$$\mathcal{S} = \{(1,1,1), (1,2,1), (2,3,1), (3,3,1), (1,1,2), (2,1,2), (3,2,2), (1,3,3), (3,2,3)\}$$

represents the set of observed entries. Hence, the observed entries $(1,1,1)$, $(1,2,1)$, $(2,3,1)$, $(3,3,1)$ belong to $\mathcal{Y}_1$, the observed entries $(1,1,2)$, $(2,1,2)$, $(3,2,2)$ belong to $\mathcal{Y}_2$, and the observed entries $(1,3,3)$, $(3,2,3)$ belong to $\mathcal{Y}_3$. As a result, $k_1 = 4 - 2 = 2$, $k_2 = 3 - 2 = 1$, and $k_3 = 2 - 2 = 0$. Hence, $\breve{\mathcal{Y}}_1 \in \mathbb{R}^{3 \times 3 \times 2}$, $\breve{\mathcal{Y}}_2 \in \mathbb{R}^{3 \times 3 \times 1}$, and $\breve{\mathcal{Y}}_3 = \emptyset$, and therefore the constraint tensor $\breve{\Omega}$ belongs to $\mathbb{R}^{3 \times 3 \times 3}$.

Also, assume that the entries that we use to obtain $\mathbf{A}_3$ in terms of the entries of $\mathbf{A}_1$ and $\mathbf{A}_2$ are $(2,3,1)$, $(3,3,1)$, $(1,1,2)$, $(2,1,2)$, $(1,3,3)$ and $(3,2,3)$. Note that $\breve{\mathcal{Y}}_1(2,3,1) = \breve{\mathcal{Y}}_1(2,3,2) = \breve{\mathcal{Y}}_1(3,3,1) = \breve{\mathcal{Y}}_1(3,3,2) = 1$ (corresponding to the entries of $\mathcal{Y}_1$ that have been used to obtain $\mathbf{A}_3$), and for the two other observed entries we have $\breve{\mathcal{Y}}_1(1,1,1) = 1$ (corresponding to $\mathcal{U}(1,1,1)$) and $\breve{\mathcal{Y}}_1(1,2,2) = 1$ (corresponding to $\mathcal{U}(1,2,1)$), while the rest of the entries of $\breve{\mathcal{Y}}_1$ are equal to zero. Similarly, $\breve{\mathcal{Y}}_2(1,1,1) = \breve{\mathcal{Y}}_2(2,1,1) = \breve{\mathcal{Y}}_2(3,2,1) = 1$ and the rest of the entries of $\breve{\mathcal{Y}}_2$ are equal to zero.

Then, $\breve{\Omega}(x, y, z) = 1$ if $(x, y, z) \in \breve{\mathcal{S}}$ and $\breve{\Omega}(x, y, z) = 0$ otherwise, where

$$\breve{\mathcal{S}} = \{(1,1,1), (1,2,2), (2,3,1), (2,3,2), (3,3,1), (3,3,2), (1,1,3), (2,1,3), (3,2,3)\}.$$
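The construction in Example 1 can be reproduced programmatically. The sketch below is our own code (the choice of which samples are "used" for $\mathbf{A}_3$ follows the example); it builds the constraint tensor and recovers exactly the support $\breve{\mathcal{S}}$ listed above.

```python
import numpy as np

# Rebuild the constraint tensor of Example 1 (d = 3, r = 2, U in R^{3x3x3}).
S = {(1,1,1),(1,2,1),(2,3,1),(3,3,1),(1,1,2),(2,1,2),(3,2,2),(1,3,3),(3,2,3)}
used = {(2,3,1),(3,3,1),(1,1,2),(2,1,2),(1,3,3),(3,2,3)}  # spent on A_3

slices = []
for z in (1, 2, 3):                              # the n_d subtensors Y_1..Y_3
    obs = sorted(s for s in S if s[2] == z)
    free = [s for s in obs if s not in used]     # the k_i "extra" samples
    for (x, y, _) in free:                       # one slice per extra sample
        slab = np.zeros((3, 3), dtype=int)
        for (u, v, w) in obs:
            if (u, v, w) in used:                # the r entries used for A_3
                slab[u - 1, v - 1] = 1
        slab[x - 1, y - 1] = 1                   # plus this one extra sample
        slices.append(slab)

Omega_breve = np.stack(slices, axis=-1)          # shape (3, 3, K) with K = 3
S_breve = {(x + 1, y + 1, z + 1) for x, y, z in zip(*np.nonzero(Omega_breve))}
assert S_breve == {(1,1,1),(1,2,2),(2,3,1),(2,3,2),(3,3,1),(3,3,2),
                   (1,1,3),(2,1,3),(3,2,3)}
assert all(s.sum() == 3 for s in slices)         # each slice has r + 1 ones
```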

Note that each subtensor of $\breve{\Omega}$ that belongs to $\mathbb{R}^{n_1 \times \cdots \times n_{d-1} \times 1}$ represents one of the polynomials in $\mathcal{P}(\Omega)$, besides showing the polynomials that have been used to obtain $\mathbf{A}_d$. More specifically, consider a subtensor of $\breve{\Omega}$ that belongs to $\mathbb{R}^{n_1 \times \cdots \times n_{d-1} \times 1}$ with $r + 1$ nonzero entries. Observe that exactly $r$ of them correspond to the observed entries that have been used to obtain $\mathbf{A}_d$. Hence, this subtensor represents a polynomial after replacing the entries of $\mathbf{A}_d$ by the expressions in terms of the entries of $\mathbf{A}_1, \ldots, \mathbf{A}_{d-1}$, i.e., a polynomial in $\mathcal{P}(\Omega)$.

3.3 Algebraic Independence

In this section, we obtain the maximum number of algebraically independent polynomials in $\mathcal{P}(\breve{\Omega})$. According to Lemma 3, as we consider one particular equivalence class, some of the entries of the $\mathbf{A}_i$'s are known, i.e., $\mathbf{P}_1, \ldots, \mathbf{P}_{d-1}$ in the statement of the lemma. Therefore, in order to find the number of variables (unknown entries of the $\mathbf{A}_i$'s) in a set of polynomials, we should subtract the number of known entries in the corresponding pattern from the total number of involved entries. Also, recall that the sampled tensor is chosen generically from the corresponding manifold, and therefore, according to Fact 2, the independence of the polynomials can be studied through the number of variables involved in each subset of them.

Definition 5 Let $\breve{\Omega}' \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t}$ be a subtensor of the constraint tensor $\breve{\Omega}$. Let $m_i(\breve{\Omega}')$ denote the number of nonzero rows of $\breve{\Omega}'_{(i)}$. Also, let $\mathcal{P}(\breve{\Omega}')$ denote the set of polynomials that correspond to the nonzero entries of $\breve{\Omega}'$.

The following lemma gives an upper bound on the maximum number of algebraically independent polynomials in the set $\mathcal{P}(\breve{\Omega}')$ for an arbitrary subtensor $\breve{\Omega}' \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t}$ of the constraint tensor. Note that $\mathcal{P}(\breve{\Omega}')$ includes exactly $t$ polynomials, as each subtensor belonging to $\mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times 1}$ represents one polynomial.

Lemma 6 Suppose that Assumption 1 holds. Consider an arbitrary subtensor $\breve{\Omega}' \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t}$ of the constraint tensor $\breve{\Omega}$. The maximum number of algebraically independent polynomials in $\mathcal{P}(\breve{\Omega}')$ is at most

$$r \left( \sum_{i=1}^{d-1} m_i(\breve{\Omega}') - \min\left\{ \max\left\{ m_1(\breve{\Omega}'), \ldots, m_{d-1}(\breve{\Omega}') \right\}, r \right\} - (d-2) \right). \qquad (5)$$
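For concreteness, bound (5) can be evaluated directly. In the sketch below (our own helper, applied to the full constraint tensor of Example 1, where every row of the first two matricizations is nonzero so $m_1 = m_2 = 3$), the bound evaluates to $2(3 + 3 - 2 - 1) = 6$.

```python
# Upper bound (5) on the number of algebraically independent polynomials in
# P(Omega'), given the nonzero-row counts m_i of its first d-1 matricizations.
def bound5(ms, r):
    d = len(ms) + 1
    return r * (sum(ms) - min(max(ms), r) - (d - 2))

# Full constraint tensor of Example 1: d = 3, r = 2, m_1 = m_2 = 3.
assert bound5([3, 3], r=2) == 6
```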

Proof As a consequence of Fact 2, the maximum number of algebraically independent polynomials in a subset of polynomials of $P(\breve{\Omega}')$ is at most equal to the total number of variables that are involved in the corresponding polynomials. Note that by observing the structure of (2) and Fact 1, the number of entries of $\mathbf{A}_i$ that are involved in the polynomials $P(\breve{\Omega}')$ is equal to $r\,m_i(\breve{\Omega}')$, $i = 1, \dots, d-1$. Therefore, the total number of entries of $\mathbf{A}_1, \dots, \mathbf{A}_{d-1}$ that are involved in the polynomials $P(\breve{\Omega}')$ is equal to $r \sum_{i=1}^{d-1} m_i(\breve{\Omega}')$. However, some of the entries of $\mathbf{A}_1, \dots, \mathbf{A}_{d-1}$ are known, and depending on the equivalence class, we should subtract them from the total number of involved entries.

For a fixed number $j$ in Lemma 3, it is easily verified that the total number of variables (unknown entries) of $\mathbf{A}_1, \dots, \mathbf{A}_{d-1}$ that are involved in the polynomials $P(\breve{\Omega}')$ is equal to $r \sum_{i \in J} \left( m_i(\breve{\Omega}') - 1 \right)^{+} + r \left( m_j(\breve{\Omega}') - r \right)^{+}$, with $J = \{1, \dots, d-1\} \setminus \{j\}$. Note that $\left( m_i(\breve{\Omega}') - 1 \right)^{+} = m_i(\breve{\Omega}') - 1$ (since $m_i(\breve{\Omega}') \geq 1$) and $\left( m_j(\breve{\Omega}') - r \right)^{+} = m_j(\breve{\Omega}') - \min\left\{ m_j(\breve{\Omega}'), r \right\}$. However, $j$ is not a fixed number in general. Therefore, the maximum number of known entries of $\mathbf{A}_1, \dots, \mathbf{A}_{d-1}$ that are involved in the polynomials $P(\breve{\Omega}')$ is equal to $r \left( \min\left\{ \max\left\{ m_1(\breve{\Omega}'), \dots, m_{d-1}(\breve{\Omega}') \right\}, r \right\} + (d-2) \right)$, and therefore the number of variables of $\mathbf{A}_1, \dots, \mathbf{A}_{d-1}$ that are involved in the polynomials $P(\breve{\Omega}')$ is equal to (5).

A set of polynomials is called minimally algebraically dependent if the polynomials in that set are algebraically dependent, but the polynomials in each of its proper subsets are algebraically independent. The following lemma provides an important property of a minimally algebraically dependent set $P(\breve{\Omega}')$. This lemma will be used later to derive the maximum number of algebraically independent polynomials in $P(\breve{\Omega}')$.

Lemma 7 Suppose that Assumption 1 holds. Consider an arbitrary subtensor $\breve{\Omega}' \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t}$ of the constraint tensor $\breve{\Omega}$. Assume that the polynomials in $P(\breve{\Omega}')$ are minimally algebraically dependent. Then, the number of variables (unknown entries) of $\mathbf{A}_1, \dots, \mathbf{A}_{d-1}$ that are involved in $P(\breve{\Omega}')$ is equal to $t-1$.

Proof Let the set of polynomials $\{p_1, p_2, \dots, p_t\}$ represent $P(\breve{\Omega}')$. Then, $P(\breve{\Omega}') \setminus p_i$ (all polynomials in $P(\breve{\Omega}')$ excluding $p_i$) is a set of algebraically independent polynomials, $i = 1, \dots, t$. The number of involved variables in $P(\breve{\Omega}') \setminus p_i$ is at least equal to the number of polynomials, i.e., $t-1$. Therefore, the number of involved variables in $P(\breve{\Omega}')$ is at least $t-1$.

By contradiction, assume that the number of variables that are involved in the set of polynomials $P(\breve{\Omega}')$ is at least $t$. Then, we claim that for any subset of the polynomials of $P(\breve{\Omega}')$, the number of involved variables is at least equal to the number of polynomials in that subset. If the subset is equal to $P(\breve{\Omega}')$, the claim is just the assumption of contradiction. If the subset is a proper subset of $P(\breve{\Omega}')$, then according to the assumption in the statement of the lemma, its polynomials are algebraically independent, and therefore the claim holds. Hence, the polynomials in $P(\breve{\Omega}')$ are algebraically independent, which contradicts the assumption.
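Algebraic independence of generic polynomials, as invoked via Fact 2, can be probed numerically through the standard Jacobian criterion: in characteristic zero, polynomials $p_1, \dots, p_t$ are algebraically independent if and only if their Jacobian matrix has rank $t$ at a generic point. The sketch below is a generic numerical illustration of ours (names and tolerances are assumptions), not part of the paper's method.

```python
import numpy as np

def jacobian_rank(polys, num_vars, h=1e-6):
    # Rank of the Jacobian of `polys` at a random (generic) point,
    # estimated with central finite differences.
    rng = np.random.default_rng(1)
    x = rng.standard_normal(num_vars)
    J = np.zeros((len(polys), num_vars))
    for i, p in enumerate(polys):
        for j in range(num_vars):
            e = np.zeros(num_vars); e[j] = h
            J[i, j] = (p(x + e) - p(x - e)) / (2 * h)
    return int(np.linalg.matrix_rank(J, tol=1e-4))

# p1 = x0*x1 and p2 = x0 + x1 are algebraically independent;
# p3 = p1 + p2 is algebraically dependent on them, so the rank stays 2.
p1 = lambda x: x[0] * x[1]
p2 = lambda x: x[0] + x[1]
p3 = lambda x: x[0] * x[1] + x[0] + x[1]
print(jacobian_rank([p1, p2], 2))      # 2 -> independent
print(jacobian_rank([p1, p2, p3], 2))  # 2 -> p3 is dependent
```

This mirrors the counting argument above: a minimally dependent set of $t$ polynomials has Jacobian rank (hence variable count) at most $t-1$.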

Given an arbitrary subtensor $\breve{\Omega}' \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t}$ of the constraint tensor $\breve{\Omega}$, we are interested in obtaining the maximum number of algebraically independent polynomials in $P(\breve{\Omega}')$ based on the structure of the nonzero entries of $\breve{\Omega}'$. The next lemma characterizes this number in terms of a simple geometric structure of the nonzero entries of $\breve{\Omega}'$.

Lemma 8 Suppose that Assumption 1 holds. Consider an arbitrary subtensor $\breve{\Omega}' \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t}$ of the constraint tensor $\breve{\Omega}$. The polynomials in $P(\breve{\Omega}')$ are algebraically independent if and only if for any $t' \in \{1, \dots, t\}$ and any subtensor $\breve{\Omega}'' \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t'}$ of $\breve{\Omega}'$ we have

$$r \left( \left( \sum_{i=1}^{d-1} m_i(\breve{\Omega}'') \right) - \min \left\{ \max \left\{ m_1(\breve{\Omega}''), \dots, m_{d-1}(\breve{\Omega}'') \right\}, r \right\} - (d-2) \right) \geq t'. \quad (6)$$

Proof First, assume that all polynomials in $P(\breve{\Omega}')$ are algebraically independent. Also, by contradiction, assume that there exists a subtensor $\breve{\Omega}'' \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t'}$ of $\breve{\Omega}'$ for which (6) does not hold. Note that $P(\breve{\Omega}'')$ includes $t'$ polynomials. On the other hand, according to Lemma 6, the maximum number of algebraically independent polynomials in $P(\breve{\Omega}'')$ is no greater than the LHS of (6), and therefore the polynomials in $P(\breve{\Omega}'')$ are not algebraically independent. Hence, the polynomials in $P(\breve{\Omega}')$ are not algebraically independent either.

In order to prove the other direction of the statement, assume that the polynomials in $P(\breve{\Omega}')$ are algebraically dependent. Hence, there exists a subset of the polynomials that is minimally algebraically dependent, and let us denote it by $P(\breve{\Omega}'')$, where $\breve{\Omega}'' \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t'}$ is a subtensor of $\breve{\Omega}'$. According to Lemma 7, the number of variables involved in $P(\breve{\Omega}'')$ is equal to $t'-1$, and this number is at least equal to the LHS of (6). Therefore, the LHS of (6) is less than or equal to $t'-1$, or equivalently,

$$r \left( \left( \sum_{i=1}^{d-1} m_i(\breve{\Omega}'') \right) - \min \left\{ \max \left\{ m_1(\breve{\Omega}''), \dots, m_{d-1}(\breve{\Omega}'') \right\}, r \right\} - (d-2) \right) < t'. \quad (7)$$

Finally, the following theorem characterizes the necessary and sufficient condition on $\breve{\Omega}$ for finite completability of the sampled tensor $\mathcal{U}$.

Theorem 9 Suppose that Assumption 1 holds. For almost every $\mathcal{U}$, the sampled tensor $\mathcal{U}$ is finitely completable if and only if $\breve{\Omega}$ contains a subtensor $\breve{\Omega}' \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t}$ such that (i) $t = r\left(\sum_{i=1}^{d-1} n_i\right) - r^2 - r(d-2)$ and (ii) for any $t' \in \{1, \dots, t\}$ and any subtensor $\breve{\Omega}'' \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t'}$ of $\breve{\Omega}'$, (6) holds.

Proof Lemma 4 states that for almost every $\mathcal{U}$, there exist finitely many completions of the sampled tensor if and only if $P(\breve{\Omega})$ includes $r\left(\sum_{i=1}^{d-1} n_i\right) - r^2 - r(d-2)$ algebraically independent polynomials. Moreover, according to Lemma 8, the polynomials corresponding to a subtensor $\breve{\Omega}' \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t}$ of the constraint tensor are algebraically independent if and only if condition (ii) of the statement of the theorem holds. Therefore, for almost every $\mathcal{U}$, conditions (i) and (ii) hold if and only if the sampled tensor $\mathcal{U}$ is finitely completable.
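As a quick numerical illustration of condition (i), the dimension count $t = r\left(\sum_{i=1}^{d-1} n_i\right) - r^2 - r(d-2)$ can be evaluated directly. The sketch below is ours, not from the paper; the function name `required_t` is an assumption.

```python
# Dimension count from condition (i) of Theorem 9: the number of
# algebraically independent polynomials needed for finite completability.
def required_t(ns, r):
    # ns = (n_1, ..., n_{d-1}): sizes of the first d-1 modes; the d-th
    # mode is absorbed in the construction of the constraint tensor.
    d = len(ns) + 1
    return r * sum(ns) - r**2 - r * (d - 2)

# Example: a 4-way tensor with n_1 = n_2 = n_3 = 50 and CP rank r = 5
print(required_t((50, 50, 50), 5))  # 5*150 - 5**2 - 5*2 = 715
```

Any sampling pattern certifying finite completability must therefore contain a subtensor with exactly this many columns, each subset of which satisfies (6).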

Note that the condition given in (6) is combinatorial in nature and hard to verify in practice. In the next section, we provide a lower bound on the number of samples so that both Assumption 1 and the combinatorial condition given in (6) hold true with high probability (rather than deterministically, i.e., with probability one).
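Since (6) must hold for every subtensor of $\breve{\Omega}'$, a direct check is exponential in $t$; for very small patterns it can still be done by brute force. The sketch below is an illustrative implementation of ours (function names are assumptions, not the paper's notation).

```python
from itertools import combinations

import numpy as np

def num_nonzero_rows(omega_sub, i):
    # m_i: number of nonzero rows of the i-th matricization
    moved = np.moveaxis(omega_sub, i, 0)
    return int(np.count_nonzero(moved.reshape(moved.shape[0], -1).any(axis=1)))

def condition_6_holds(omega_prime, r):
    """Brute-force check of condition (6): omega_prime is a binary array
    with d-1 spatial modes plus a last mode indexing the t polynomials.
    Exponential in t; for illustration on tiny patterns only."""
    d = omega_prime.ndim           # d-1 spatial modes + 1 column mode
    t = omega_prime.shape[-1]
    for t0 in range(1, t + 1):
        for cols in combinations(range(t), t0):
            sub = omega_prime[..., list(cols)]
            ms = [num_nonzero_rows(sub, i) for i in range(d - 1)]
            lhs = r * (sum(ms) - min(max(ms), r) - (d - 2))
            if lhs < t0:
                return False
    return True

# Two one-column patterns over a 2 x 2 x t constraint tensor with r = 1:
good = np.zeros((2, 2, 1)); good[0, 0, 0] = 1; good[1, 1, 0] = 1
bad = np.zeros((2, 2, 1)); bad[0, 0, 0] = 1
print(condition_6_holds(good, 1), condition_6_holds(bad, 1))  # True False
```

The probabilistic analysis of the next section avoids this exponential enumeration by bounding the number of samples that make (6) hold with high probability.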

4. Probabilistic Conditions for Finite Completability

Assume that the entries of the tensor are sampled independently with probability $p$. In this section, we are interested in obtaining a condition in terms of the number of samples, i.e., the sampling probability, that ensures finite completability of the sampled tensor with high probability. In Section 4.1, we apply the existing results on the Grassmannian manifold in (Pimentel-Alarcón et al., 2016d) to any of the unfoldings of the sampled tensor to derive the mentioned probabilistic condition. In Section 4.2, we obtain the conditions on the number of samples that ensure that conditions (i) and (ii) in the statement of Theorem 9 hold with high probability, or in other words, that ensure finite completability with high probability. For notational simplicity, in this section we assume that $n_1 = n_2 = \cdots = n_d$, i.e., $\mathcal{U} \in \mathbb{R}^{n \times n \times \cdots \times n}$.

4.1 Unfolding Approach

Theorem 10 Consider an $n \times N$ matrix with the given rank $k$ and let $0 < \epsilon < 1$ be given. Suppose $k \leq \frac{n}{6}$ and that each column of the sampled matrix is observed in at least $l$ entries, distributed uniformly at random and independently across entries, where

$$l > \max\left\{ 12 \log\left(\frac{n}{\epsilon}\right) + 12,\; 2k \right\}. \quad (8)$$

Also, assume that $k(n-k) \leq N$. Then, with probability at least $1-\epsilon$, the sampled matrix will be finitely completable.

Observe that in the case of $1 < k < n-1$, the assumption $k(n-k) \leq N$ results in $n < N$, which is important to check when we apply this theorem. In order to use Theorem 10, we need the following lemma to obtain an upper bound on the rank of the unfoldings of $\mathcal{U}$.

Lemma 11 Consider an arbitrary nonempty $I \subset \{1, \dots, d\}$ and recall that $r$ denotes the CP rank of $\mathcal{U}$. Then, $\operatorname{rank}\left(\widetilde{\mathbf{U}}_{(I)}\right) \leq r$.

Proof In order to show $\operatorname{rank}\left(\widetilde{\mathbf{U}}_{(I)}\right) \leq r$, we show the existence of a CP decomposition of $\widetilde{\mathbf{U}}_{(I)}$ with $r$ components, i.e., we show that there exist $\mathbf{a}_i^l \in \mathbb{R}^{m_i}$ for $1 \leq i \leq 2$, $1 \leq l \leq r$, $m_1 \triangleq N_I$ and $m_2 \triangleq N_{\bar{I}}$ such that

$$\widetilde{\mathbf{U}}_{(I)} = \sum_{l=1}^{r} \mathbf{a}_1^l \otimes \mathbf{a}_2^l. \quad (9)$$

In order to do so, recall that since the CP rank of $\mathcal{U}$ is $r$, there exist $\mathbf{b}_i^l \in \mathbb{R}^{n_i}$ for $1 \leq i \leq d$ and $1 \leq l \leq r$ such that

$$\mathcal{U} = \sum_{l=1}^{r} \mathbf{b}_1^l \otimes \mathbf{b}_2^l \otimes \cdots \otimes \mathbf{b}_d^l. \quad (10)$$

Define $\mathcal{A}_1^l = \mathbf{b}_{i_1}^l \otimes \cdots \otimes \mathbf{b}_{i_{|I|}}^l$ and $\mathcal{A}_2^l = \mathbf{b}_{i_{|I|+1}}^l \otimes \cdots \otimes \mathbf{b}_{i_d}^l$ for $1 \leq l \leq r$, where $I = \{i_1, \dots, i_{|I|}\}$ and $\bar{I} = \{i_{|I|+1}, \dots, i_d\}$. Let $\mathbf{a}_1^l$ and $\mathbf{a}_2^l$ denote the vectorizations of $\mathcal{A}_1^l$ and $\mathcal{A}_2^l$ with the same bijective mappings $\widetilde{M}_I : (x_{i_1}, \dots, x_{i_{|I|}}) \to \{1, 2, \dots, N_I\}$ and $\widetilde{M}_{\bar{I}} : (x_{i_{|I|+1}}, \dots, x_{i_d}) \to \{1, 2, \dots, N_{\bar{I}}\}$ as the unfolding $\widetilde{\mathbf{U}}_{(I)}$. Hence, there exist $\mathbf{a}_i^l \in \mathbb{R}^{m_i}$ for $1 \leq i \leq 2$, $1 \leq l \leq r$ such that (9) holds.
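Lemma 11 can be sanity-checked numerically: build a tensor of CP rank at most $r$ from random factors, as in (10), and verify that its unfoldings have matrix rank at most $r$. This sketch is ours, with arbitrary example dimensions; `np.einsum` realizes the sum of outer products and `reshape` implements one particular bijective unfolding map.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 4, 3, 2

# Random tensor of CP rank at most r: U = sum_l b_1^l (x) b_2^l (x) b_3^l
factors = [rng.standard_normal((n, r)) for _ in range(d)]
U = np.einsum('il,jl,kl->ijk', *factors)

# Unfolding with I = {1}: an n x n^2 matrix; rank is at most r
U_I = U.reshape(n, -1)
assert np.linalg.matrix_rank(U_I) <= r

# Unfolding with I = {1, 2}: an n^2 x n matrix; rank is again at most r
U_I2 = U.reshape(n * n, -1)
assert np.linalg.matrix_rank(U_I2) <= r
print("unfolding ranks:", np.linalg.matrix_rank(U_I), np.linalg.matrix_rank(U_I2))
```

For generic factors, the unfolding ranks are in fact exactly $r$, which is why Theorem 10 can be applied with $r_I \leq r$.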

Remark 12 Assume that $k \leq k' \leq \frac{n}{6}$, $l > \max\left\{ 12 \log\left(\frac{n}{\epsilon}\right) + 12,\; 2k' \right\}$ and $k'(n-k') \leq N$. Then, we have $l > \max\left\{ 12 \log\left(\frac{n}{\epsilon}\right) + 12,\; 2k \right\}$ since $k \leq k'$. Moreover, we have $k(n-k) \leq N$ since $k + k' < n$, which results in $k(n-k) < k'(n-k')$.

Lemma 13 Let $I = \{i_1, \dots, i_{|I|}\}$ be an arbitrary nonempty and proper subset of $\{1, \dots, d\}$. Assume that $|I| < \frac{d}{2}$ and $r \leq \frac{n}{6}$, where $r$ is the CP rank of the sampled tensor $\mathcal{U}$. Moreover, assume that each column of $\widetilde{\mathbf{U}}_{(I)}$ is observed in at least $l$ entries, distributed uniformly at random and independently across entries, where

$$l > \max\left\{ 12 \log\left(\frac{N_I\, r}{\epsilon}\right) + 12,\; 2r \right\}. \quad (11)$$

Then, with probability at least $1-\epsilon$, $\mathcal{U}$ is finitely completable.

Proof According to Lemma 11, $r_I \triangleq \operatorname{rank}\left(\widetilde{\mathbf{U}}_{(I)}\right) \leq r$. Note that $N_I \leq n^{\frac{d-1}{2}} = \frac{n^{\frac{d+1}{2}}}{n} \leq \frac{N_{\bar{I}}}{n}$, which results in $r(N_I - r) \leq N_{\bar{I}}$. Furthermore, according to Remark 12, we have $r_I(N_I - r_I) \leq N_{\bar{I}}$ and also

$$l > \max\left\{ 12 \log\left(\frac{N_I\, r}{\epsilon}\right) + 12,\; 2r_I \right\}. \quad (12)$$

Therefore, according to Theorem 10, $\widetilde{\mathbf{U}}_{(I)}$ is finitely completable for an arbitrary value of $r_I$ that belongs to $\{1, \dots, r\}$ with probability at least $1 - \frac{\epsilon}{r}$. Hence, with probability at least $1-\epsilon$, $\widetilde{\mathbf{U}}_{(I)}$ is finitely completable for all possible values of $r_I$, i.e., $\widetilde{\mathbf{U}}_{(I)}$ is finitely completable. In order to complete the proof, it suffices to observe that finite completability of any of the unfoldings of $\mathcal{U}$ results in finite completability of $\mathcal{U}$.

Remark 14 In the case of $|I| > \frac{d}{2}$ in Lemma 13, we can simply consider the transpose of $\widetilde{\mathbf{U}}_{(I)}$ to obtain similar results.

Remark 15 Lemma 13 requires

$$n^{d-|I|} \max\left\{ 12 \log\left(\frac{n^{|I|}\, r}{\epsilon}\right) + 12,\; 2r \right\} \quad (13)$$

samples in total to ensure finite completability of $\mathcal{U}$ with probability at least $1-\epsilon$. Hence, the best bound on the total number of samples to ensure finite completability with probability at least $1-\epsilon$ is obtained when $|I| = \lfloor \frac{d-1}{2} \rfloor$, which is

$$n^{\lceil \frac{d+1}{2} \rceil} \max\left\{ 12 \log\left(\frac{n^{\lfloor \frac{d-1}{2} \rfloor}\, r}{\epsilon}\right) + 12,\; 2r \right\}. \quad (14)$$

4.2 CP Approach

In this section, we present an approach based on the tensor CP decomposition instead of unfolding. Conditions (i) and (ii) in Theorem 9 ensure finite completability with probability one. Here, using combinatorial methods, we derive a lower bound on the number of sampled entries, i.e., the sampling probability, that ensures conditions (i) and (ii) in Theorem 9 hold with high probability. We first provide a few lemmas from our previous works. Lemma 16 below is Lemma 5 in (Ashraphijuo et al., 2017c), which will be used later.

Lemma 16 Assume that $r' \leq \frac{n}{6}$ and also that each column of $\Omega_{(1)}$ (the first matricization of $\Omega$) includes at least $l$ nonzero entries, where

$$l > \max\left\{ 9 \log\left(\frac{n}{\epsilon}\right) + 3 \log\left(\frac{k}{\epsilon}\right) + 6,\; 2r' \right\}. \quad (15)$$

Let $\Omega'_{(1)}$ be an arbitrary set of $n - r'$ columns of $\Omega_{(1)}$. Then, with probability at least $1 - \frac{\epsilon}{k}$, every subset $\Omega''_{(1)}$ of columns of $\Omega'_{(1)}$ satisfies

$$m_1(\Omega'') - r' \geq t, \quad (16)$$

where $t$ is the number of columns of $\Omega''_{(1)}$.

The proof of the above lemma is based on a simple combinatorial idea to upper bound the number of possible patterns that do not satisfy the mentioned property.

Lemma 17 Let $j \in \{1, 2, \dots, d-1\}$ be a fixed number and $I = \{1, 2, \dots, j\}$. Consider an arbitrary set $\widetilde{\Omega}'_{(I)}$ of $n - r'$ columns of $\widetilde{\Omega}_{(I)}$, where $r' \leq r \leq \frac{n}{6}$. Assume that $n > 200$, and also that each column of $\widetilde{\Omega}_{(I)}$ includes at least $l$ nonzero entries, where

$$l > \max\left\{ 27 \log\left(\frac{n}{\epsilon}\right) + 9 \log\left(\frac{2k}{\epsilon}\right) + 18,\; 6r' \right\}, \quad (17)$$

where $k \leq r$. Then, with probability at least $1 - \frac{\epsilon}{2k}$, each column of $\widetilde{\Omega}'_{(I)}$ includes more than $l' \triangleq \max\left\{ 9 \log\left(\frac{n}{\epsilon}\right) + 3 \log\left(\frac{2k}{\epsilon}\right) + 6,\; 2r' \right\}$ observed entries of $\Omega$ with different values of the $i$-th coordinate, i.e., the $i$-th matricization of the tensor $\Omega'$ that corresponds to $\widetilde{\Omega}'_{(I)}$ includes more than $l'$ nonzero rows, $1 \leq i \leq j$.

Proof The proof is omitted due to its similarity to the proof of Lemma 9 in (Ashraphijuo and Wang, 2017).

The following lemma is Lemma 8 in (Ashraphijuo et al., 2016a), which states that if the property in Lemma 16 holds for the sampling pattern $\Omega$, it is satisfied for $\breve{\Omega}$ as well.

Lemma 18 Let $r'$ be a given nonnegative integer, $1 \leq i \leq j \leq d-1$ and $I = \{1, 2, \dots, j\}$. Assume that there exists an $n^j \times (n - r')$ matrix $\widetilde{\Omega}'_{(I)}$ composed of $n - r'$ columns of $\widetilde{\Omega}_{(I)}$ such that each column of $\widetilde{\Omega}'_{(I)}$ includes at least $r' + 1$ nonzero entries and satisfies the following property:

• Denote an $n^j \times t$ matrix (for any $1 \leq t \leq n - r'$) composed of any $t$ columns of $\widetilde{\Omega}'_{(I)}$ by $\widetilde{\Omega}''_{(I)}$. Then

$$m_i(\Omega'') - r' \geq t, \quad (18)$$

where $\widetilde{\Omega}''_{(I)}$ is the unfolding of $\Omega''$ corresponding to the set $I$.

Then, there exists an $n^j \times (n - r')$ matrix $\widetilde{\breve{\Omega}}'_{(I)}$ such that: each column has exactly $r' + 1$ entries equal to one, and if $\widetilde{\breve{\Omega}}'_{(I)}(x, y) = 1$ then we have $\widetilde{\Omega}'_{(I)}(x, y) = 1$. Moreover, $\widetilde{\breve{\Omega}}'_{(I)}$ satisfies the above-mentioned property.

The proof of the above lemma is based on a novel generalization of Hall's theorem for bipartite graphs. This generalization is shown by strong induction on the number of nodes.

Lemma 19 Assume that $n > 200$, $1 \leq i \leq j \leq d-1$ and $I = \{1, 2, \dots, j\}$. Consider $r$ disjoint sets $\widetilde{\Omega}'_{l(I)}$, each with $n - r'_i$ columns of $\widetilde{\Omega}_{(I)}$, for $1 \leq l \leq r$, where $r'_i \leq r \leq \frac{n}{6}$. Let $\widetilde{\Omega}'_{(I)}$ denote the union of all $r$ sets of columns $\widetilde{\Omega}'_{l(I)}$. Assume that each column of $\widetilde{\Omega}_{(I)}$ includes at least $l$ nonzero entries, where

$$l > \max\left\{ 27 \log\left(\frac{n}{\epsilon}\right) + 9 \log\left(\frac{2rk}{\epsilon}\right) + 18,\; 6r'_i \right\}. \quad (19)$$

Then, there exists an $n^j \times r(n - r'_i)$ matrix $\widetilde{\breve{\Omega}}'_{(I)}$ such that: each column has exactly $r'_i + 1$ entries equal to one, and if $\widetilde{\breve{\Omega}}'_{(I)}(x, y) = 1$ then we have $\widetilde{\Omega}'_{(I)}(x, y) = 1$, and it also satisfies the following property: with probability at least $1 - \frac{\epsilon}{k}$, every subset $\widetilde{\breve{\Omega}}''_{(I)}$ of columns of $\widetilde{\breve{\Omega}}'_{(I)}$ satisfies the inequality

$$r \left( m_i(\breve{\Omega}'') - r'_i \right) \geq t, \quad (20)$$

where $t$ is the number of columns of $\widetilde{\breve{\Omega}}''_{(I)}$ and $\breve{\Omega}''$ is the tensor corresponding to the unfolding $\widetilde{\breve{\Omega}}''_{(I)}$.

Proof We first claim that with probability at least $1 - \frac{\epsilon}{kr}$, every subset $\widetilde{\Omega}''_{l(I)}$ of columns of $\widetilde{\Omega}'_{l(I)}$ satisfies

$$m_i(\Omega''_l) - r'_i \geq t, \quad (21)$$

where $t$ is the number of columns of $\widetilde{\Omega}''_{l(I)}$ and $\Omega''_l$ is the tensor corresponding to the unfolding $\widetilde{\Omega}''_{l(I)}$. For simplicity, we denote this property by Property I. According to Lemma 17, with probability at least $1 - \frac{\epsilon}{2kr}$, each column of $\widetilde{\Omega}'_{l(I)}$ includes more than $\max\left\{ 9 \log\left(\frac{n}{\epsilon}\right) + 3 \log\left(\frac{2kr}{\epsilon}\right) + 6,\; 2r'_i \right\}$ observed entries with different values of the $i$-th coordinate, $1 \leq i \leq j$, and we denote this property by Property II. On the other hand, given that Property II holds for $\Omega'_l$ and according to Lemma 16, with probability at least $1 - \frac{\epsilon}{2kr}$, Property I holds for $\Omega'_l$ as well. Hence, with probability at least $1 - \frac{\epsilon}{kr}$, Property I holds for $\Omega'_l$, which completes the proof of our earlier claim.

Consequently, according to Lemma 18, with probability at least $1 - \frac{\epsilon}{kr}$, there exists an $n^j \times (n - r'_i)$ matrix $\widetilde{\breve{\Omega}}'_{l(I)}$ such that: each column has exactly $r'_i + 1$ entries equal to one, and if $\widetilde{\breve{\Omega}}'_{l(I)}(x, y) = 1$ then we have $\widetilde{\Omega}'_{l(I)}(x, y) = 1$, and $\breve{\Omega}'_l$ satisfies Property I. Finally, define $\widetilde{\breve{\Omega}}'_{(I)} \triangleq \left[ \widetilde{\breve{\Omega}}'_{1(I)} \,\middle|\, \widetilde{\breve{\Omega}}'_{2(I)} \,\middle|\, \cdots \,\middle|\, \widetilde{\breve{\Omega}}'_{r(I)} \right]$. Since each $\breve{\Omega}'_l$ satisfies Property I with probability at least $1 - \frac{\epsilon}{kr}$, all $\breve{\Omega}'_l$'s satisfy Property I simultaneously with probability at least $1 - \frac{\epsilon}{k}$. Consider an arbitrary subset $\widetilde{\breve{\Omega}}''_{(I)}$ of columns of $\widetilde{\breve{\Omega}}'_{(I)}$. Let $\widetilde{\breve{\Omega}}''_{l(I)}$ denote those columns of $\widetilde{\breve{\Omega}}''_{(I)}$ that belong to $\widetilde{\breve{\Omega}}'_{l(I)}$, define $t_l$ as the number of columns of $\widetilde{\breve{\Omega}}''_{l(I)}$, $1 \leq l \leq r$, and define $t$ as the number of columns of $\widetilde{\breve{\Omega}}''_{(I)}$. Without loss of generality, assume that $t_1 \leq t_2 \leq \cdots \leq t_r$. Also, assume that all $\breve{\Omega}'_l$'s satisfy Property I. Hence, we have

$$t = \sum_{l=1}^{r} t_l \leq r\, t_r \leq r \left( m_i(\breve{\Omega}''_r) - r'_i \right) \leq r \left( m_i(\breve{\Omega}'') - r'_i \right). \quad (22)$$

Theorem 20 Assume that $d > 2$, $n > \max\{200, r(d-2)\}$, $r \leq \frac{n}{6}$ and $I = \{1, 2, \dots, d-2\}$. Assume that each column of $\widetilde{\Omega}_{(I)}$ includes at least $l$ nonzero entries, where

$$l > \max\left\{ 27 \log\left(\frac{n}{\epsilon}\right) + 9 \log\left(\frac{2r(d-2)}{\epsilon}\right) + 18,\; 6r \right\}. \quad (23)$$

Then, with probability at least $1 - \epsilon$, for almost every $\mathcal{U} \in \mathbb{R}^{\overbrace{n \times \cdots \times n}^{d}}$, there exist only finitely many completions of the sampled tensor $\mathcal{U}$ with CP rank $r$.

Proof Define the $(d-1)$-way tensor $\mathcal{U}' \in \mathbb{R}^{\overbrace{n \times \cdots \times n}^{d-2} \times n^2}$, which is obtained by merging the $(d-1)$-th and $d$-th dimensions of the tensor $\mathcal{U}$. Observe that finiteness of the number of completions of the tensor $\mathcal{U}'$ of rank $r$ ensures finiteness of the number of completions of the tensor $\mathcal{U}$ of rank $r$. For notational simplicity, let $\Omega$ and $\breve{\Omega}$ denote the $(d-1)$-way sampling pattern and constraint tensors corresponding to $\mathcal{U}'$, respectively. In order to complete the proof, it suffices to show that with probability at least $1 - \epsilon$, conditions (i) and (ii) in Theorem 9 hold for this modified $(d-1)$-way tensor.

Now, we apply Lemma 19 to each of the numbers $r'_1 = 1, \dots, r'_{d-3} = 1, r'_{d-2} = r$. Also, note that since $n > r(d-2)$ we conclude $n^2 > r(n-r) + (d-3)r(n-1)$, and therefore $\widetilde{\Omega}_{(I)}$ includes more than $r(n-r) + (d-3)r(n-1)$ columns. According to Lemma 19, there exist $\breve{\Omega}'_i$ for $1 \leq i \leq d-2$ such that: (i) each column of $\widetilde{\breve{\Omega}}'_{i(I)}$ includes $r'_i + 1$ nonzero entries for $1 \leq i \leq d-2$, and if $\widetilde{\breve{\Omega}}'_{i(I)}(x, y) = 1$ then we have $\widetilde{\Omega}'_{i(I)}(x, y) = 1$; (ii) $\widetilde{\breve{\Omega}}'_{i(I)}$ includes $r(n-1)$ and $r(n-r)$ columns for $1 \leq i \leq d-3$ and $i = d-2$, respectively; (iii) with probability at least $1 - \frac{\epsilon}{d-2}$, every subset $\widetilde{\breve{\Omega}}''_{i(I)}$ of columns of $\widetilde{\breve{\Omega}}'_{i(I)}$ satisfies (20) for $r'_1 = 1, \dots, r'_{d-3} = 1, r'_{d-2} = r$.

Recall that each column of $\widetilde{\Omega}_{(I)}$ includes at least $r + 1$ nonzero entries, and therefore, for $1 \leq i \leq d-3$, for which we have $r'_i + 1 = 2$, the column of $\widetilde{\Omega}_{(I)}$ corresponding to a column of $\widetilde{\breve{\Omega}}'_{i(I)}$ has at least $r-1$ more nonzero entries.

Observe that $\max\left\{ 9 \log\left(\frac{n}{\epsilon}\right) + 3 \log\left(\frac{2kr}{\epsilon}\right) + 6,\; 2r \right\} \geq 2r \geq (r-1) + 2$. According to Lemma 17 and given (23), for each column of $\widetilde{\breve{\Omega}}'_{i(I)}$ there exist another $r-1$ zero entries $(x_s, y_s)$, $s \in \{1, \dots, r-1\}$, in rows of the $(d-2)$-th matricization of $\breve{\Omega}'_i$ different from those of the two nonzero entries, such that $\widetilde{\Omega}'_{i(I)}(x_s, y_s) = 1$, $i = 1, \dots, d-3$. Hence, we substitute each column of $\widetilde{\breve{\Omega}}'_{i(I)}$ with the column of $\widetilde{\Omega}_{(I)}$ in which the value of such $r-1$ entries (in different rows of the $(d-2)$-th matricization of $\breve{\Omega}'_i$) is $1$ instead of $0$, $i = 1, \dots, d-3$. Therefore, each column of $\widetilde{\breve{\Omega}}'_{i(I)}$ includes exactly $r+1$ nonzero entries for $1 \leq i \leq d-3$, such that if $\widetilde{\breve{\Omega}}'_{i(I)}(x, y) = 1$ then we have $\widetilde{\Omega}'_{i(I)}(x, y) = 1$.

Let $\widetilde{\breve{\Omega}}'_{(I)} = \left[ \widetilde{\breve{\Omega}}'_{1(I)} \,\middle|\, \cdots \,\middle|\, \widetilde{\breve{\Omega}}'_{d-2(I)} \right]$, which includes $r(n-r) + (d-3)r(n-1)$ columns. Therefore, $\breve{\Omega}' \in \mathbb{R}^{\overbrace{n \times \cdots \times n}^{d-2} \times t}$ is a subtensor of the constraint tensor such that $t = r\left(\sum_{i=1}^{d-2} n\right) - r^2 - r(d-3)$, and with probability at least $1 - \epsilon$, every subset $\widetilde{\breve{\Omega}}''_{i(I)}$ of columns of $\widetilde{\breve{\Omega}}'_{i(I)}$ satisfies (20) for $r'_1 = 1, \dots, r'_{d-3} = 1, r'_{d-2} = r$, simultaneously for $i = 1, \dots, d-2$.

Consider an arbitrary subset $\widetilde{\breve{\Omega}}''_{(I)}$ of columns of $\widetilde{\breve{\Omega}}'_{(I)}$. Let $\widetilde{\breve{\Omega}}''_{i(I)}$ denote those columns of $\widetilde{\breve{\Omega}}''_{(I)}$ that belong to $\widetilde{\breve{\Omega}}'_{i(I)}$, define $t_i$ as the number of columns of $\widetilde{\breve{\Omega}}''_{i(I)}$, $1 \leq i \leq d-2$, and define $t$ as the number of columns of $\widetilde{\breve{\Omega}}''_{(I)}$. Also, assume that all $\breve{\Omega}'_i$'s satisfy (20) for $r'_1 = 1, \dots, r'_{d-3} = 1, r'_{d-2} = r$. Then, we have the following two scenarios:
(i) $t_{d-2} = 0$: Hence, we have $t = \sum_{i=1}^{d-3} t_i$. Moreover, we have

$$t_i \leq r \left( m_i(\breve{\Omega}''_i) - 1 \right)^{+} \leq r \left( m_i(\breve{\Omega}'') - 1 \right)^{+} = r \left( m_i(\breve{\Omega}'') - 1 \right), \quad (24)$$

for $1 \leq i \leq d-3$. Recall that each column of $\widetilde{\breve{\Omega}}'_{i(I)}$ includes at least $r$ nonzero entries in different rows of the $(d-2)$-th matricization of $\breve{\Omega}'_i$ for $1 \leq i \leq d-3$. On the other hand, since $\widetilde{\breve{\Omega}}''_{(I)}$ includes at least one column of $\left[ \widetilde{\breve{\Omega}}'_{1(I)} \,\middle|\, \cdots \,\middle|\, \widetilde{\breve{\Omega}}'_{d-3(I)} \right]$ (recall that $t_{d-2} = 0$), we have

$$r \leq m_{d-2}(\breve{\Omega}''), \quad (25)$$

which also results in $\min\left\{ \max\left\{ m_1(\breve{\Omega}''), \dots, m_{d-2}(\breve{\Omega}'') \right\}, r \right\} = r$. Therefore, having (24) and (25), we conclude

$$t = \sum_{i=1}^{d-3} t_i \leq \sum_{i=1}^{d-3} r \left( m_i(\breve{\Omega}'') - 1 \right) \leq \sum_{i=1}^{d-3} r \left( m_i(\breve{\Omega}'') - 1 \right) + r \left( m_{d-2}(\breve{\Omega}'') - r \right) \quad (26)$$
$$= r \left( \sum_{i=1}^{d-2} m_i(\breve{\Omega}'') \right) - r^2 - r(d-3) = r \left( \left( \sum_{i=1}^{d-2} m_i(\breve{\Omega}'') \right) - \min\left\{ \max\left\{ m_1(\breve{\Omega}''), \dots, m_{d-2}(\breve{\Omega}'') \right\}, r \right\} - (d-3) \right).$$

(ii) $t_{d-2} > 0$: Hence, we have

$$t_{d-2} \leq r \left( m_{d-2}(\breve{\Omega}''_{d-2}) - r \right) \leq r \left( m_{d-2}(\breve{\Omega}'') - r \right), \quad (27)$$

which also results in $\min\left\{ \max\left\{ m_1(\breve{\Omega}''), \dots, m_{d-2}(\breve{\Omega}'') \right\}, r \right\} = r$. Moreover, similar to scenario (i), (24) holds. Therefore, having (24) and (27), we conclude

$$t = \sum_{i=1}^{d-2} t_i \leq \sum_{i=1}^{d-3} r \left( m_i(\breve{\Omega}'') - 1 \right) + r \left( m_{d-2}(\breve{\Omega}'') - r \right) = r \left( \sum_{i=1}^{d-2} m_i(\breve{\Omega}'') \right) - r^2 - r(d-3) \quad (28)$$
$$= r \left( \left( \sum_{i=1}^{d-2} m_i(\breve{\Omega}'') \right) - \min\left\{ \max\left\{ m_1(\breve{\Omega}''), \dots, m_{d-2}(\breve{\Omega}'') \right\}, r \right\} - (d-3) \right).$$

Remark 21 Observe that each row of the $d$-th matricization of $\mathcal{U}$ consists of exactly $n$ columns of $\widetilde{\Omega}_{(I)}$, where $I = \{1, 2, \dots, d-2\}$. Hence, given (23), Assumption 1 holds with probability one.

Note that for general values of $n_1, \dots, n_d$ the same proof still works, but instead of the assumption $n > \max\{200, r(d-2)\}$, we need another assumption in terms of $n_1, \dots, n_d$ to ensure that the corresponding unfolding has a sufficient number of columns.

Remark 22 A tensor $\mathcal{U}$ that satisfies the properties in the statement of Theorem 20 requires

$$n^2 \max\left\{ 27 \log\left(\frac{n}{\epsilon}\right) + 9 \log\left(\frac{2r(d-2)}{\epsilon}\right) + 18,\; 6r \right\} \quad (29)$$

samples to be finitely completable with probability at least $1-\epsilon$, which is lower than the number of samples required by the unfolding approach given in (14) by orders of magnitude.
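The gap between the unfolding bound (14) and the CP bound (29) is easy to quantify for concrete parameter values. A minimal sketch of ours (function names are assumptions), evaluating both bounds for one arbitrary choice of $n$, $d$, $r$ and $\epsilon$:

```python
import math

def unfolding_bound(n, d, r, eps):
    # Total samples required by the unfolding approach, eq. (14)
    size_I = (d - 1) // 2
    l = max(12 * math.log(n**size_I * r / eps) + 12, 2 * r)
    return n ** math.ceil((d + 1) / 2) * l

def cp_bound(n, d, r, eps):
    # Total samples required by the CP approach, eq. (29)
    l = max(27 * math.log(n / eps) + 9 * math.log(2 * r * (d - 2) / eps) + 18,
            6 * r)
    return n**2 * l

n, d, r, eps = 1000, 5, 10, 0.1
print(f"unfolding: {unfolding_bound(n, d, r, eps):.2e}")
print(f"CP:        {cp_bound(n, d, r, eps):.2e}")
```

For these parameters the CP bound scales as $n^2$ while the unfolding bound scales as $n^{\lceil (d+1)/2 \rceil} = n^3$, so the ratio between the two grows with $n$.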

The following lemma is taken from (Ashraphijuo et al., 2016a) and is used in Lemma 24 to derive a lower bound on the sampling probability that results in (23) holding with high probability.

Lemma 23 Consider a vector with $n$ entries where each entry is observed with probability $p$ independently of the other entries. If $p > p' = \frac{k}{n} + \frac{1}{\sqrt[4]{n}}$, then, with probability at least $1 - \exp\left(-\frac{\sqrt{n}}{2}\right)$, more than $k$ entries are observed.

The proof of the above lemma is simply based on Azuma’s inequality.
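Lemma 23 can be illustrated by simulation: sampling a vector of $n$ entries with probability slightly above the threshold $p'$ essentially always yields more than $k$ observed entries. A sketch of ours, with arbitrary example parameters:

```python
import math
import random

def observed_count(n, p, rng):
    # Number of entries observed when each is sampled with probability p
    return sum(rng.random() < p for _ in range(n))

n, k = 10_000, 50
p_threshold = k / n + 1 / n ** 0.25   # p' from Lemma 23
p = 1.05 * p_threshold                # sample slightly above the threshold

rng = random.Random(0)
trials = 200
successes = sum(observed_count(n, p, rng) > k for _ in range(trials))
bound = 1 - math.exp(-math.sqrt(n) / 2)  # guaranteed success probability
print(successes / trials, ">=", bound)
```

Note that the threshold is dominated by the $1/\sqrt[4]{n}$ term here, so the expected number of observations is far above $k$ and every trial succeeds.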

Lemma 24 Assume that $d > 2$, $n > \max\{200, r(d-2)\}$ and $r \leq \frac{n}{6}$. Moreover, assume that the sampling probability satisfies

$$p > \frac{1}{n^{d-2}} \max\left\{ 27 \log\left(\frac{n}{\epsilon}\right) + 9 \log\left(\frac{2r(d-2)}{\epsilon}\right) + 18,\; 6r \right\} + \frac{1}{\sqrt[4]{n^{d-2}}}. \quad (30)$$

Then, with probability at least $(1-\epsilon)\left(1 - \exp\left(-\frac{\sqrt{n^{d-2}}}{2}\right)\right)^{n^2}$, $\mathcal{U}$ is finitely completable.

Proof According to Lemma 23, (30) results in each column of $\widetilde{\Omega}_{(I)}$ including at least $l$ nonzero entries, where $I = \{1, 2, \dots, d-2\}$ and $l$ satisfies (23), with probability at least $1 - \exp\left(-\frac{\sqrt{n^{d-2}}}{2}\right)$. Therefore, with probability at least $\left(1 - \exp\left(-\frac{\sqrt{n^{d-2}}}{2}\right)\right)^{n^2}$, all $n^2$ columns of $\widetilde{\Omega}_{(I)}$ satisfy (23). Hence, according to Theorem 20, with probability at least $(1-\epsilon)\left(1 - \exp\left(-\frac{\sqrt{n^{d-2}}}{2}\right)\right)^{n^2}$, $\mathcal{U}$ is finitely completable.

5. Deterministic and Probabilistic Conditions for Unique Completability

In this section, we add more conditions on the sampling pattern $\Omega$ in the statement of Theorem 9 to ensure unique completability (deterministic), and we also mildly increase the number of samples given in the statement of Theorem 20 to ensure unique completability with high probability (probabilistic). As the first step of this procedure, we use the following lemma on minimally algebraically dependent polynomials to determine the variables involved in these polynomials uniquely. Hence, by obtaining all entries of the CP decomposition of the sampled tensor $\mathcal{U}$, we can show the uniqueness of $\mathcal{U}$.

Lemma 25 Suppose that Assumption 1 holds. Let $\breve{\Omega}' \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t}$ be an arbitrary subtensor of the constraint tensor $\breve{\Omega}$. Assume that the polynomials in $P(\breve{\Omega}')$ are minimally algebraically dependent. Then, all variables (unknown entries) of $\mathbf{A}_1, \dots, \mathbf{A}_{d-1}$ that are involved in $P(\breve{\Omega}')$ can be determined uniquely.

Proof According to Lemma 7, the number of variables involved in the polynomials $P(\breve{\Omega}') = \{p_1, p_2, \dots, p_t\}$ is $t-1$; denote them by $\{x_1, \dots, x_{t-1}\}$. Moreover, as mentioned in the proof of Lemma 7, $P(\breve{\Omega}') \setminus p_i$ is a set of algebraically independent polynomials and the number of involved variables is $t-1$, $i = 1, \dots, t$. Consider an arbitrary variable $x_1$ that is involved in the polynomials in $P(\breve{\Omega}')$ and, without loss of generality, assume that $x_1$ is involved in $p_1$.

On the other hand, as mentioned, all variables $\{x_1, \dots, x_{t-1}\}$ are involved in the algebraically independent polynomials in $P(\breve{\Omega}') \setminus p_1$. Hence, the tuple $(x_1, \dots, x_{t-1})$ has finitely many possible values. Given that one of these finitely many tuples $(x_1, \dots, x_{t-1})$ satisfies the polynomial equation $p_1$, with probability one there does not exist another tuple among them that satisfies $p_1$. This is because a tuple satisfies a polynomial equation whose coefficients are chosen generically with probability zero, and the number of such tuples is finite.

Condition (i) in Theorem 26 results in $r\left(\sum_{i=1}^{d-1} n_i\right) - r^2 - r(d-2)$ algebraically independent polynomials in terms of the entries of $\mathbf{A}_1, \dots, \mathbf{A}_{d-1}$, i.e., it results in finite completability. Hence, adding a single polynomial corresponding to any observed entry to these $r\left(\sum_{i=1}^{d-1} n_i\right) - r^2 - r(d-2)$ algebraically independent polynomials results in a set of algebraically dependent polynomials. Then, according to Lemma 25, a subset of the entries of $\mathbf{A}_1, \dots, \mathbf{A}_{d-1}$ can be determined uniquely, and these additional polynomials are captured in the structure of condition (ii) such that all entries of the CP decomposition can be determined uniquely.

Theorem 26 Suppose that Assumption 1 holds. Also, assume that there exist disjoint subtensors $\breve{\Omega}' \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t}$ and $\breve{\Omega}'_i \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t_i}$ (for $1 \leq i \leq 2d-2$) of the constraint tensor such that the following conditions hold:

(i) $t = r\left(\sum_{i=1}^{d-1} n_i\right) - r^2 - r(d-2)$, and for any $t' \in \{1, \dots, t\}$ and any subtensor $\breve{\Omega}'' \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t'}$ of $\breve{\Omega}'$, (6) holds.

(ii) For $i \in \{1, \dots, d-1\}$ we have $t_i = n_i - 1$, and for $i \in \{d, \dots, 2d-2\}$ we have $t_i = n_{i-d+1} - r$. Also, for any $t'_i \in \{1, \dots, t_i\}$ and any subtensor $\breve{\Omega}''_i \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t'_i}$ of the tensor $\breve{\Omega}'_i$, the following inequalities hold:

$$m_i(\breve{\Omega}''_i) - 1 \geq t'_i, \quad \text{for } i \in \{1, \dots, d-1\}, \quad (31)$$

and

$$m_{i-d+1}(\breve{\Omega}''_i) - r \geq t'_i, \quad \text{for } i \in \{d, \dots, 2d-2\}. \quad (32)$$

Then, for almost every $\mathcal{U}$, there exists only a unique tensor that fits the sampled tensor $\mathcal{U}$ and has CP rank $r$.

Proof As we showed in the proof of Theorem 9, $P(\breve{\Omega}')$ includes $t = r\left(\sum_{i=1}^{d-1} n_i\right) - r^2 - r(d-2)$ algebraically independent polynomials, which results in finite completability of the sampled tensor $\mathcal{U}$. Let $\{p_1, \dots, p_t\}$ denote these $t$ algebraically independent polynomials in $P(\breve{\Omega}')$. Now, having $\{p_1, \dots, p_t\}$ and $P(\breve{\Omega}'_i)$ for $1 \leq i \leq 2d-2$, and using Lemma 25 several times, we show unique completability. Recall that $t$ is the total number of variables among the polynomials, and therefore the union of any polynomial $p'$ and $\{p_1, \dots, p_t\}$ is a set of algebraically dependent polynomials. Hence, there exists a set of polynomials $P(\breve{\Omega}'')$ such that $P(\breve{\Omega}'') \subset \{p_1, \dots, p_t\}$ and the polynomials in $P(\breve{\Omega}'') \cup p'$ are minimally algebraically dependent. Therefore, according to Lemma 25, all variables involved in the polynomials $P(\breve{\Omega}'') \cup p'$ can be determined uniquely, and consequently, all variables involved in $p'$ can be determined uniquely.

We can repeat the above procedure for any polynomial $p' \in P(\breve{\Omega}'_i)$ to determine its involved variables uniquely with the help of $\{p_1, \dots, p_t\}$, $i = 1, \dots, 2d-2$. Hence, for any polynomial $p' \in P(\breve{\Omega}'_i)$ or $p' \in P(\breve{\Omega}'_{i+d-1})$, we obtain $r$ degree-$1$ polynomials in terms of the entries of $\mathbf{A}_i$; however, some of the entries of the CP decomposition are elements of the $\mathbf{Q}_i$ matrices (in the statement of Lemma 3), $i = 1, \dots, d-1$. In order to complete the proof, we need to show that condition (ii), together with the above procedure using $\{p_1, \dots, p_t\}$, results in obtaining all variables uniquely. In particular, we show that repeating the described procedure for all of the polynomials in $P(\breve{\Omega}'_i)$ and $P(\breve{\Omega}'_{i+d-1})$ results in obtaining all variables of the $i$-th element of the CP decomposition uniquely.

According to Lemma 3, we have the following two scenarios for any $i \in \{1, \dots, d-1\}$:

(i) $\mathbf{Q}_i \in \mathbb{R}^{r \times r}$: In this case, condition (ii) for $\breve{\Omega}''_{i+d-1}$, for any $t'_{i+d-1} \in \{1, \dots, n_i - r\}$ and any subtensor $\breve{\Omega}''_{i+d-1} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t'_{i+d-1}}$ of the tensor $\breve{\Omega}'_{i+d-1}$, results in

$$r\, m_i(\breve{\Omega}''_{i+d-1}) - r^2 \geq r\, t'_{i+d-1}. \quad (33)$$

Note that $r\, t'_{i+d-1}$ is the number of polynomials from the above-mentioned procedure corresponding to $\breve{\Omega}''_{i+d-1}$, and $r\, m_i(\breve{\Omega}''_{i+d-1})$ denotes the number of involved entries of $\mathbf{A}_i$ in these polynomials; therefore, $r\, m_i(\breve{\Omega}''_{i+d-1}) - r^2$ is the number of involved variables of $\mathbf{A}_i$ in these polynomials. As a result, according to Fact 2, given $P(\breve{\Omega}'_{i+d-1})$ and $\{p_1, \dots, p_t\}$, the mentioned procedure results in $r n_i - r^2$ algebraically independent degree-$1$ polynomials in terms of the unknown entries of $\mathbf{A}_i$. Therefore, $\mathbf{A}_i$ can be determined uniquely.

(ii) $\mathbf{Q}_i \in \mathbb{R}^{1 \times r}$: Similar to scenario (i), condition (ii) for $\breve{\Omega}''_i$, for any $t'_i \in \{1, \dots, n_i - 1\}$ and any subtensor $\breve{\Omega}''_i \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_{d-1} \times t'_i}$ of the tensor $\breve{\Omega}'_i$, results in

$$r\, m_i(\breve{\Omega}''_i) - r \geq r\, t'_i, \quad (34)$$

and therefore, similar to the previous scenario, $\mathbf{A}_i$ can be determined uniquely.

For the sake of simplicity, as in Section 4, we consider the sampled tensor $\mathcal{U} \in \mathbb{R}^{\overbrace{n \times \cdots \times n}^{d}}$. Recall that for general values of $n_1, \dots, n_d$ the same proof still works, but instead of the assumption $n > \max\{200, (r+2)(d-2)\}$, we need another assumption in terms of $n_1, \dots, n_d$.

Theorem 27 Assume that $d > 2$, $n > \max\{200, (r+2)(d-2)\}$, $r \leq \frac{n}{6}$ and $I = \{1, 2, \dots, d-2\}$. Assume that each column of $\widetilde{\Omega}_{(I)}$ includes at least $l$ nonzero entries, where

$$l > \max\left\{ 27 \log\left(\frac{2n}{\epsilon}\right) + 9 \log\left(\frac{8r(d-2)}{\epsilon}\right) + 18,\; 6r \right\}. \quad (35)$$

Then, with probability at least $1 - \epsilon$, for almost every $\mathcal{U} \in \mathbb{R}^{\overbrace{n \times \cdots \times n}^{d}}$, there exists only one completion of the sampled tensor $\mathcal{U}$ with CP rank $r$.

Proof Similar to the proof of Theorem 20, define the $(d-1)$-way tensor $\mathcal{U}' \in \mathbb{R}^{\overbrace{n \times \cdots \times n}^{d-2} \times n^2}$, which is obtained by merging the $(d-1)$-th and $d$-th dimensions of the tensor $\mathcal{U}$, and recall that uniqueness of the completion of the tensor $\mathcal{U}'$ of rank $r$ ensures uniqueness of the completion of the tensor $\mathcal{U}$ with rank $r$. Similarly, for simplicity, let $\Omega$ and $\breve{\Omega}$ denote the $(d-1)$-way sampling pattern and constraint tensors corresponding to $\mathcal{U}'$, respectively. Note that since $n > (r+2)(d-2)$, we conclude $n^2 > r(n-r) + (d-3)r(n-1) + 2n(d-2)$, and therefore $\widetilde{\Omega}_{(I)}$ includes more than $r(n-r) + (d-3)r(n-1) + 2n(d-2)$ columns. According to the proof of Theorem 20, considering $r(n-r) + (d-3)r(n-1)$ arbitrary columns of $\widetilde{\Omega}_{(I)}$ results in the existence of $\breve{\Omega}'$ such that condition (i) holds with probability at least $1 - \frac{\epsilon}{2}$. Also, there exist at least $2n(d-2)$ columns other than these $r(n-r) + (d-3)r(n-1)$ columns.

Consider $n-1$ arbitrary columns of $\widetilde{\Omega}_{(I)}$. By setting $r = 1$ in the statement of Lemma 19, these $n-1$ columns result in $\breve{\Omega}'_i$ with $n-1$ columns such that (31) holds with probability at least $1 - \frac{\epsilon}{4r(d-2)}$. Similarly, consider $n-r$ arbitrary columns of $\widetilde{\Omega}_{(I)}$. Then, there exists $\breve{\Omega}'_{i+d-2}$ such that (32) holds with probability at least $1 - \frac{\epsilon}{4r(d-2)}$. Hence, condition (ii) holds (for all $i \in \{1, \dots, 2d-4\}$) with probability at least $1 - \frac{\epsilon}{2r}$. Therefore, conditions (i) and (ii) hold with probability at least $1 - \left(\frac{\epsilon}{2r} + \frac{\epsilon}{2}\right) \geq 1 - \epsilon$.

Remark 28 A tensor $\mathcal{U}$ that satisfies the properties in the statement of Theorem 27 requires

$$n^2 \max\left\{ 27 \log\left(\frac{2n}{\epsilon}\right) + 9 \log\left(\frac{8r(d-2)}{\epsilon}\right) + 18,\; 6r \right\} \quad (36)$$

samples to be uniquely completable with probability at least $1 - \epsilon$, which is only slightly more than the number of samples required for finite completability given in (29).
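Comparing (36) with (29) for concrete parameters confirms that uniqueness costs only a mild constant-factor increase over finite completability. A small sketch of ours (function names and parameter values are assumptions):

```python
import math

def finite_bound(n, d, r, eps):
    # Eq. (29): samples for finite completability with probability 1 - eps
    return n**2 * max(27 * math.log(n / eps)
                      + 9 * math.log(2 * r * (d - 2) / eps) + 18, 6 * r)

def unique_bound(n, d, r, eps):
    # Eq. (36): samples for unique completability with probability 1 - eps
    return n**2 * max(27 * math.log(2 * n / eps)
                      + 9 * math.log(8 * r * (d - 2) / eps) + 18, 6 * r)

n, d, r, eps = 1000, 5, 10, 0.1
ratio = unique_bound(n, d, r, eps) / finite_bound(n, d, r, eps)
print(round(ratio, 3))  # close to 1: uniqueness is only slightly more costly
```

The two bounds differ only inside the logarithms, so their ratio approaches 1 as $n$ grows.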