Canonical Graph Decomposition in Matching

YAGHI, HAYTHAM H. Canonical Graph Decomposition in Matching. (Under the direction of Professor H. Krim).

by

Haytham H. Yaghi

A thesis submitted to the Graduate Faculty of North Carolina State University
in partial fulfillment of the requirements for the Degree of

Master of Science

Electrical Engineering

Raleigh, North Carolina

2008

Approved By:

Dr. H. Dai Dr. C. D. Savage

Dr. H. Krim

DEDICATION

BIOGRAPHY

ACKNOWLEDGMENTS

I would like to express my gratitude to Dr. Krim, whose constant guidance, support and comments made this work possible. I would also like to thank Dr. Savage and Dr. Dai, whose feedback was relevant and very helpful.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
1 Introduction
2 Background and Notation
3 Graph Decomposition
3.1 Tree Decomposition
3.2 Extension to Graphs
3.2.1 Modified BFS
3.2.2 Separators for Graph Decomposition
3.2.3 Extremities of a Decomposed Graph
3.3 Algorithmic Graph Decomposition
3.3.1 Algorithm
3.3.2 Analysis of the Algorithm
4 Graph Matching by Subgraph Decomposition
4.1 Representation of a Decomposed Graph
4.2 BARG Matching
4.3 Graph Matching Using Probabilistic Relaxation
4.3.1 Recurrence Derivation
4.3.2 Recursive Algorithm
4.4 Analysis of the Belief Propagation Approach and Limitations
4.5 Modified Probabilistic Relaxation Method
5 Implementation and Results
5.1 Implementation Details
5.1.1 Atom Representation
5.1.2 Convergence Criteria
5.2 Results
5.3 Conclusion

LIST OF TABLES

Table 5.1 Number and fraction of cospectral graphs

LIST OF FIGURES

Figure 2.1 Two isomorphic graphs.
Figure 2.2 A graph and its separators.
Figure 3.1 Splitting a tree through its internal node.
Figure 3.2 Algorithmic tree decomposition.
Figure 3.3 Elimination game.
Figure 3.4 Chordless graphs.
Figure 3.5 Sample MCS-M run on a graph G.
Figure 3.6 Decomposition along clique separators.
Figure 3.7 New classification for graph extremities.
Figure 3.8 Graph decomposition example.
Figure 3.9 Comparing single vertex separators to more general separators.
Figure 4.1 Representing a decomposed graph as a BARG.
Figure 4.2 Illustration of belief propagation.
Figure 4.3 Shortcoming of belief propagation.
Figure 4.4 Matching a graph to a replica of itself.
Figure 5.1 Two graphs with cospectral adjacencies.
Figure 5.2 Matching two isomorphic graphs.
Figure 5.3 Matching two isomorphic graphs having an element of symmetry.
Figure 5.4 Matching two non-isomorphic graphs.
Figure 5.5 Illustrative test case.
Figure 5.6 Illustrative test case with random noise.

Chapter 1

Introduction

Used in various applications, graphs arise as an efficient representation for data and relations. To name a few, these applications include mathematical analysis, physical mechanics, electrical engineering, and biology. More recent and interesting work has exploited graphs and graph algorithms in pattern recognition and 3D object identification [1, 2, 3, 4, 5].

A recurrent theme in many of these applications is that of establishing an isomorphism between two graphs, or in non-technical terms, being able to tell whether two given graphs represent the same data and/or relations. Another related problem is that of establishing subgraph isomorphism, or being able to tell whether some subset of a larger graph represents the same data and/or relations as another graph. We discuss later that the first problem is conjectured to be NP-hard and the second is proven to be NP-hard. Most advances in these fields are presented in the form of heuristics that try to solve the problem suboptimally, yet with a polynomial complexity.


Back in 1932, Hassler Whitney, a prominent American mathematician, had the intuition of decomposing any graph into unique components as an approach to solving complex graph problems. Can this "divide and conquer" approach be used to solve the graph isomorphism problem? In this thesis, we describe our contribution, namely representing the uniquely decomposed components as a bipartite attributed graph. We also propose a statistical matching heuristic to be applied to the generated bipartite graph, yielding a polynomial time, suboptimal solution to the graph isomorphism problem.

Chapter 2

Background and Notation

This chapter presents key concepts in graph theory. A good reference on the subject is [6]. We only present here the concepts that are needed for our work. A graph is defined as a set of nodes, $V = \{v_1, \dots, v_n\}$, connected to each other by edges from the set $E = \{e_1, \dots, e_m\}$. The graph is denoted G(V, E). Although many graph representations are used, one practical representation of a graph is given by its adjacency matrix. An adjacency matrix $A_{n \times n}$ has the following entries:

$$a_{ij} = \begin{cases} 1 & \text{if } (v_i, v_j) \in E \\ 0 & \text{otherwise.} \end{cases}$$

It is to be noted, however, that the adjacency matrix representation is not unique and depends on the vertex labeling. The labeling of vertices is usually carried out arbitrarily and depends on the user's point of view. Fig. 2.1 shows an example of two graphs exhibiting the exact same structure, yet having different vertex labels and hence different adjacency matrices. G1 has the following adjacency matrix A1:

$$A_1 = \begin{bmatrix} 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 \\ 1 & 1 & 1 & 1 & 0 \end{bmatrix}$$

while G2 has a different adjacency matrix A2:

$$A_2 = \begin{bmatrix} 0 & 1 & 1 & 1 & 1 \\ 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \end{bmatrix}$$

Although the above two graphs have different adjacency matrices, they both exhibit the same structure, and are hence isomorphic. Two graphs are isomorphic if they represent the same relation between vertices, irrespective of vertex labeling. Mathematically, two graphs are isomorphic if and only if they have similar adjacency matrices, where "similarity" is defined as:

$$A_1 = P A_2 P^{-1} \quad \text{for some permutation matrix } P. \quad (2.1)$$

It is conjectured that the graph isomorphism problem is NP-hard. The only known algorithm for an optimal solution thus far is the brute force approach of generating all possible permutation matrices. This approach has a "catastrophic" running time complexity which is exponential in n, primarily due to the required generation of all possible P's. The subgraph isomorphism problem can be solved in a similar way by generating all possible permutations and checking whether one matrix is replicated as a submatrix of the other.
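To make this cost concrete, the following minimal Python sketch (ours, not from the thesis; the function name brute_force_isomorphic is hypothetical) tests Eq. (2.1) by enumerating all n! vertex permutations:

```python
from itertools import permutations

import numpy as np

def brute_force_isomorphic(A1, A2):
    """Exhaustive test of Eq. (2.1): A1 = P A2 P^{-1} for some permutation P.

    A1, A2: n-by-n 0/1 numpy adjacency matrices. Runs in O(n! * n^2) time,
    which is the "catastrophic" complexity discussed above.
    """
    n = len(A1)
    if len(A2) != n:
        return False
    for perm in permutations(range(n)):
        # Relabeling G2's vertices by `perm` permutes rows and columns of A2.
        if np.array_equal(A1, A2[np.ix_(perm, perm)]):
            return True
    return False
```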

In trying to address the complexity of common graph problems, Whitney proposed in [7] a "divide and conquer" approach that decomposes any given graph into smaller subgraphs. To do so, Whitney proposed to separate a graph along its cutvertices. A cutvertex is a vertex whose removal splits the graph into two or more connected components. Referring back to the graph G1 in Fig. 2.1, vertex v5 and no other is a cutvertex. As noticed by Whitney, this decomposition approach guarantees a unique decomposition of a graph, irrespective of vertex labels or decomposition order.

Figure 2.1: Two isomorphic graphs.

The usefulness of this intuition and its extensions to more general graph decomposition algorithms is left for discussion in later chapters. For now, we will focus on defining key concepts that will be used in the general decomposition approach:

• We define a path p as a sequence of vertices, p = {v_1, v_2, ..., v_k}, such that (v_i, v_{i+1}) ∈ E for i = 1, ..., k−1 and such that no vertex is traversed twice. If v_1 = v_k, the path is called a cycle.

• The neighborhood of a vertex v will be denoted by N(v) = {u ∈ V | (v, u) ∈ E} and represents the set of vertices connected to v by an edge in E.

• A clique is defined as a subset of vertices V_c ⊆ V such that for every two vertices v_1, v_2 ∈ V_c, (v_1, v_2) ∈ E. In other words, a clique is a subset of vertices that are pairwise connected by an edge in E.

• Finally, we define a simplicial vertex following the definition of [8] and the terminology used in [9]. A vertex is called simplicial if its neighborhood is a clique.

Later work generalizes this decomposition and considers more general graph separators. We leave the detailed discussion of a particular decomposition approach for the next chapter, although we provide here the required technical background. A separator of a graph is a subset S ⊆ V such that the removal of S separates the graph into two or more components. In the simple case, a cutvertex is a separator of cardinality one. Note, however, that any subset of nodes including a separator is in turn a separator:

$$S \text{ is a separator} \Rightarrow S_o \text{ is also a separator}, \quad \forall S_o \supseteq S$$

Therefore, a less ambiguous definition is needed. Dirac introduces in [10] the concept of a minimal separator. S is called a minimal separator if it separates two vertices of a graph and no proper subset of S separates those same two vertices. Consider the graph shown in Fig. 2.2, where both subsets {3} and {3, 2} separate vertex 1 from vertex 5. However, only {3} is a minimal separator. Upon removal of S, we denote the set of connected components that are generated by C(S).

Figure 2.2: A graph and its separators.

Chapter 3

Graph Decomposition

In this chapter, we present an approach to graph decomposition similar to what was developed by Berry et al. in [11]. This approach to graph decomposition considers general clique minimal separators, building on Whitney's approach, which had considered only cutvertices. This decomposition approach mirrors the process of algorithmic tree decomposition. Therefore, we will first discuss the process of tree decomposition, which provides a motivation for general graph decomposition.

3.1 Tree Decomposition

Tree decomposition is driven by a decomposition ordering on the vertices. It can be shown that such an order can be generated through a breadth first search (BFS) or a depth first search (DFS) run on the tree. In what follows, we will concentrate on BFS, since a modified BFS approach will be used later for generating a decomposition order for general graphs.

Consider a tree with n vertices and root vertex v_R. A BFS decomposition ordering α is generated by Algorithm 1. Note that in a sample run of BFS, the tree extremities are always ordered last. A sample ordering and a few decomposition steps until full tree decomposition are given in Fig. 3.2.

Algorithm 1 Breadth First Search (BFS)
  k = n
  insert v_R into a first-in-first-out (FIFO) queue
  while the queue is not empty do
    dequeue vertex v_i
    α(v_i) ← k
    k = k − 1
    insert all unordered neighbors of v_i into the queue
  end while
  return ordering α
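A direct transcription of Algorithm 1 might look as follows (a Python sketch under our own naming; the tree is assumed given as an adjacency dictionary):

```python
from collections import deque

def bfs_ordering(adj, root):
    """Algorithm 1: BFS decomposition ordering on a tree.

    adj: dict mapping each vertex to an iterable of its neighbors.
    Returns alpha, which assigns the highest number n to the root and
    the lowest numbers to the extremities (leaves), dequeued last.
    """
    k = len(adj)
    alpha = {}
    queued = {root}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        alpha[v] = k
        k -= 1
        for u in adj[v]:
            if u not in queued:   # insert unordered neighbors only once
                queued.add(u)
                queue.append(u)
    return alpha
```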

Figure 3.2: Algorithmic tree decomposition.

3.2 Extension to Graphs

When extending the tree decomposition approach to graph decomposition and drawing the corresponding analogies, three questions need to be addressed:

• What is the natural extension of a BFS algorithm that still produces a decomposition ordering?

• Given that the extremities of a tree are clearly defined as leaves, what are the extremities of a graph?

• Given that the minimal separators of a tree are single nodes that split the tree in a unique way, what constitutes a good separator when considering general graphs? We also require these graph separators to provide a unique decomposition.

We will discuss these issues in chronological order, as they were addressed in the literature.

3.2.1 Modified BFS

What is of interest in a BFS ordering of a tree is the fact that the vertices ordered last have a neighborhood which is a minimal separator of the tree. Upon the splitting of these vertices, the remaining vertices with lowest order have in turn a neighborhood which is a minimal separator. This property holds until the complete decomposition of the tree.

Matrix Analytic Point of View

Consider a sparse matrix M on which Gaussian elimination is to be performed. At each step of the elimination process, some zero elements may turn into non-zero elements. We are interested in finding, over all permutation matrices P, the similar matrix P M P^{-1} that minimizes the number of zero elements turning into non-zeros. It is easy to map this problem to our graph ordering problem by letting M represent an arbitrary adjacency matrix of the graph and P represent a particular ordering of the vertices. This problem is presented in broader scope in [14, 15].

Graph Algorithmic Point of View

The vertex ordering problem may also be formulated from a perspective that has been termed in the literature the elimination game. Consider a graph G with a random ordering α on the vertices. At each step, select the vertex v_i with the lowest numerical label and split it from the graph. For all pairs of neighboring vertices of v_i, {v_k ∈ N(v_i)}, add a fill-in edge (v_{k1}, v_{k2}) if (v_{k1}, v_{k2}) ∉ E. The problem is then formulated as finding the ordering that minimizes the number of added fill-in edges [12, 13]. This is illustrated in Fig. 3.3. Given a graph G, we remove the vertex with numerical label one and add the corresponding fill-in edges (represented by dashed lines).

Figure 3.3: Elimination game.
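The elimination game is easy to simulate. The following sketch (our own illustration, not the thesis code) plays the game for a given ordering α and returns the fill-in it induces:

```python
def elimination_game(adj, alpha):
    """Play the elimination game on G for the ordering alpha.

    adj: dict vertex -> set of neighbors; alpha: dict vertex -> order.
    Returns the set of fill-in edges added over the whole game.
    """
    work = {v: set(ns) for v, ns in adj.items()}    # working copy of G
    fill = set()
    for v in sorted(work, key=lambda u: alpha[u]):  # lowest label first
        nbrs = list(work[v])
        for i, a in enumerate(nbrs):        # turn N(v) into a clique
            for b in nbrs[i + 1:]:
                if b not in work[a]:
                    work[a].add(b)
                    work[b].add(a)
                    fill.add(frozenset((a, b)))
        for a in nbrs:                      # split v from the graph
            work[a].discard(v)
        del work[v]
    return fill
```

For a suitable ordering of a chordal graph, the returned set is empty, which is precisely the characterization discussed next.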

Note that no fill-in is added if, at each step, the neighborhood of the vertex to be removed is a clique. This allows the definition of a new class of graphs for which no fill-in is added for some ordering α. These graphs have been termed in the literature chordal graphs. Another property of chordal graphs is the following: a graph is chordal if it has no chordless cycles of length four or more. A chordless cycle is one which has no chords joining any two nodes along the path. In Fig. 3.4, the graph on the left has a chordless cycle of length six, while the addition of a chord splits the graph into two chordless cycles each of length four. Note that the notion of a chordal graph is similar to that of a tree: a tree is a graph with no cycles, whereas a chordal graph has no chordless cycles of length greater than three. From the definition presented above, it is clear that Fulkerson and Gross's characterization of chordal graphs holds [16]:

Theorem 2 In a chordal graph, one can repeatedly find a simplicial vertex and remove it until no vertices are left.

Algorithm 2 MCS-M
  for all v_i ∈ V do
    W(v_i) = 0   {assign null weights to all vertices}
  end for
  for i = n to 1 do
    pick an unordered vertex v_o with maximum weight
    α(v_o) = i
    for all unordered vertices u_o such that there exists a path {v_o, v_1, v_2, ..., v_k, u_o} where the v_i's are unordered and W(v_i) < W(u_o) ∀ i = 1, 2, ..., k do
      W(u_o) = W(u_o) + 1
      add {v_o, u_o} to E_f
    end for
  end for
  return α, E_f
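As a sketch of how Algorithm 2 can be implemented (Python, our own code and naming), the path condition can be evaluated with a Dijkstra-like search that computes, for each unordered vertex, the smallest achievable maximum intermediate weight on a path from the newly numbered vertex:

```python
import heapq

def mcs_m(adj):
    """Algorithm 2 (MCS-M) sketch: returns (alpha, fill_edges).

    adj: dict vertex -> set of neighbors, with mutually comparable
    vertex labels. alpha is a decomposition ordering; fill_edges is E_f.
    """
    n = len(adj)
    weight = {v: 0 for v in adj}
    alpha, fill = {}, set()
    for i in range(n, 0, -1):
        # pick an unordered vertex of maximum weight
        v = max((u for u in adj if u not in alpha), key=lambda u: weight[u])
        alpha[v] = i
        # best[u]: min over v-u paths of the largest intermediate weight
        best = {u: float("inf") for u in adj if u not in alpha}
        heap = []
        for x in adj[v]:
            if x not in alpha:
                best[x] = -1            # direct neighbors: no intermediates
                heapq.heappush(heap, (-1, x))
        while heap:
            b, x = heapq.heappop(heap)
            if b > best[x]:
                continue
            for y in adj[x]:
                if y in alpha:
                    continue
                nb = max(b, weight[x])  # x becomes an intermediate vertex
                if nb < best[y]:
                    best[y] = nb
                    heapq.heappush(heap, (nb, y))
        for u, b in best.items():
            if b < weight[u]:           # path through strictly lighter vertices
                weight[u] += 1
                if u not in adj[v]:
                    fill.add(frozenset((v, u)))
    return alpha, fill
```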

Figure 3.4: Chordless graphs. Left: a chordless cycle of length 6. Right: two chordless cycles of length 4.

Figure 3.5: Sample MCS-M run on a graph G.

3.2.2 Separators for Graph Decomposition

Following a similar motivation as Whitney, Tarjan proposes in [19] a decomposition approach for graphs that "divides and conquers" NP-hard problems. He considers only separators having a specific structure, particularly separators that form cliques. Tarjan argues that this approach can be used to solve many graph theoretic problems such as the graph coloring problem, the maximal clique problem, the maximum independent set problem, etc. His proposed approach is based on decomposing a graph along its clique separators and replicating those in the generated components. However, Tarjan noted that his approach has its limitations and does not guarantee a unique decomposition for graphs. The counterexample that he provided is illustrated in Fig. 3.6. This graph has two clique separators, {b, c, d} and {b, d}. On the left hand side, the graph is decomposed first along {b, c, d} and then along {b, d}. On the right hand side, the graph is decomposed first along {b, d}. However, the subgraphs resulting from the latter decomposition do not contain any other clique separators and the decomposition ends. The generated subgraphs on the left-hand side are different from those on the right-hand side, contradicting the uniqueness of the decomposition approach.

Building on Tarjan's idea, Berry et al. [11] proposed another decomposition approach that uses clique minimal separators, thus enforcing the uniqueness of the approach. A clique minimal separator is both a minimal separator of the graph and a clique. Looking back at Tarjan's counterexample, although both {b, c, d} and {b, d} are clique separators, only {b, d} is minimal (it disconnects the same components as {b, c, d} does). The proof of uniqueness is presented in [11]. We will be satisfied with a sketch of the proof that uses the following important lemma:

Lemma 1 A clique minimal separator S_o cannot be a non-separable component of the graph. In other words, S_o ∉ C(S) for some separator S.

The sketch of the proof proceeds by contradiction. Let A be an atom of the graph; if the decomposition were not unique, then A would intersect other components of the graph. Assume that A ∩ C_1 ≠ ∅ and A ∩ C_2 ≠ ∅ for some components C_1 and C_2 such that C_1, C_2 ∈ C(S_o) for some S_o. Since A intersects both C_1 and C_2, there must be nodes v_1 ∈ C_1 and v_2 ∈ C_2 connected by a path passing through A. Hence, A ∩ S is a separator for v_1 and v_2, which would imply that A ∩ S separates A, contradicting the non-separability of A.

Figure 3.6: Decomposition along clique separators.

3.2.3 Extremities of a Decomposed Graph

An extremity of a tree can be easily defined as a vertex whose neighborhood is a minimal separator. This concept of extremity is in fact irreducible, and the fully decomposed tree is non-separable – to use the language of Whitney's theorem. It is clear from what has been presented above that the modified BFS, or MCS-M, yields a decomposition ordering on the vertices of the graph. Taking a closer look at this ordering, one realizes that the neighborhood of the vertex with lowest numerical label is not necessarily a clique minimal separator. Consider the example shown in Fig. 3.7, where the vertices are labeled with alphabetical letters and their ordering is shown in numbers. Vertex e with lowest order has the neighborhood {c, d, f}, which is a clique separator but not minimal (the clique minimal separator of vertex e would be {c, d}). In order to work around this problem, Berry et al. propose in [9] a new insight and provide a new identification of graph extremities. This can be motivated by observing Fig. 3.7 and noting that although neither N(e) = {c, d, f} nor N(f) = {c, d, e} is a minimal clique separator, N(e) ∩ N(f) = {c, d} is in fact one. Therefore, instead of looking at the neighborhood of a single vertex, it may be required to look at the common "external neighborhood" of a subset of vertices. A new class of graph extremities, analogous to those generated by a tree decomposition approach, can be defined as such:

Following Berry's terminology, a module M ⊆ V is a set of vertices sharing the same external neighborhood, i.e., every vertex of N(M) is adjacent to all of M. A moplex is then an inclusion-maximal clique module M whose external neighborhood N(M) is a minimal clique separator of the graph.

A key question must be answered, however: do these moplexes always exist in separable graphs? Note that the same can be asked for less complex graphs, namely trees: do extremities always exist in trees? The answer is obvious:

Theorem 3 Every separable tree has two non-adjacent leaves.

Dirac generalized this concept to chordal graphs in [10]:

Theorem 4 Every separable chordal graph has two non-adjacent simplicial vertices.

Finally, the concept has been generalized to any graph in [9]:

Theorem 5 Every separable graph has two non-adjacent moplexes.

Hence, these theorems show that one can always find a starting point for decomposition in any separable graph.

Figure 3.7: New classification for graph extremities (vertex/order labels: a/3, b/4, c/5, d/6, e/1, f/2).

3.3 Algorithmic Graph Decomposition

3.3.1 Algorithm

Based on everything that has been presented above, a decomposition algorithm can be constructed for graphs, guaranteeing a coherent and unique decomposition [11]:

• A decomposition ordering can be generated using the modified BFS, or MCS-M, presented above.

• Starting with the vertex with lowest order, one can identify the successive moplexes to decompose.

• Split the moplex only if its external neighborhood is a clique minimal separator.

Similar to a tree decomposition, we replicate every clique minimal separator, S, into its generated components C(S), until the graph cannot be further decomposed. We call the generated set of non-separable subgraphs the set of atoms of the graph. The complete decomposition procedure is given in Algorithm 3.

Algorithm 3 Graph Decomposition Algorithm
  given a graph G with n vertices, generate a decomposition ordering, α, using MCS-M
  for i = 1 to n − 1 do
    identify the moplex M_i, in the filled-in graph, containing v_i such that α(v_i) = i
    if the external neighborhood of M_i forms a clique S_i in the original graph then
      the subgraph formed with M_i ∪ S_i forms an atom
    end if
    split the moplex from the graph: G ← G − M_i
    update i with the maximum order in the moplex: i = max_{u∈M_i} α(u) + 1
  end for
  the remaining graph forms the last atom

Consider the example shown in Fig. 3.8, where a decomposition ordering has been generated. Starting with the vertex with the lowest order, the first moplex is found: M_1 = {j}. Although the neighborhood of M_1, N(M_1) = {g, h, i}, forms a clique in the filled-in graph, it does not form one in the original graph. Hence, no decomposition occurs at this step. The next vertex having the lowest numerical label is vertex h, which generates moplex M_2 = {h, i}. The external neighborhood of M_2, N(M_2) = {f, g}, forms a clique in the original graph and hence is a clique minimal separator S_2. Therefore, we split along S_2 and the connected component that it generates, which forms the first atom {f, g, h, i, j}. Continuing as such:

• Looking at vertex e, which forms a moplex in itself, and its external neighborhood, vertex d, which forms a clique, we split the second atom {d, e}.

• Looking at vertex g, which forms a moplex {f, g}, having an external neighborhood {c, d} which forms a clique, we split the third atom {c, d, f, g}.

• Looking at vertex c, which forms a moplex in itself, having external neighborhood {a, d} which forms a clique, we split the fourth atom {a, c, d}.

• Finally, the remaining subgraph {a, b, d} forms the last atom.
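The test applied at each step of Algorithm 3, namely whether a candidate external neighborhood is a clique minimal separator, can be sketched as follows. We use the standard characterization that a separator S is minimal exactly when at least two components of G − S have all of S in their neighborhood (Python code and names are ours, assuming neighbor sets):

```python
def is_clique_minimal_separator(adj, S):
    """Check that S is both a clique and a minimal separator of G.

    adj: dict vertex -> set of neighbors; S: iterable of vertices.
    """
    S = set(S)
    # clique test: vertices of S are pairwise adjacent
    if any(v not in adj[u] for u in S for v in S if u != v):
        return False
    # connected components of G - S
    rest = set(adj) - S
    comps, seen = [], set()
    for start in rest:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            seen.add(x)
            stack.extend(y for y in adj[x] if y in rest)
        comps.append(comp)
    # minimal separator <=> at least two components see every vertex of S
    full = sum(1 for c in comps if all(adj[u] & c for u in S))
    return full >= 2
```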

3.3.2 Analysis of the Algorithm

The first step in this algorithm is generating a decomposition ordering, which runs in O(|V| · |E|) as mentioned above, where |V| is the number of vertices and |E| is the number of edges. The decomposition itself runs in O(|V| · |E|) as well, as it involves processing every node along with its neighborhood. Therefore, the total complexity of the approach is O(|V| · |E|).

Figure 3.8: Graph decomposition example (vertex/order labels: a/10, b/9, d/8, c/7, e/4, g/5, f/6, i/3, h/2, j/1).

Let η_i denote the number of components generated when splitting along the i-th clique minimal separator, and let n_S be the total number of clique minimal separators. Then the total number of atoms generated is:

$$n_A = \sum_{i=1}^{n_S} (\eta_i - 1) + 1 = \sum_{i=1}^{n_S} \eta_i - n_S + 1 \quad (3.2)$$

The last added 1 accounts for the last atom generated. In the example of Fig. 3.8, each of the four clique minimal separators generates two components, giving n_A = 4(2 − 1) + 1 = 5 atoms.

A key motivation in Tarjan's work was looking for separators having a particular structure, namely cliques. This selection of structure can be tweaked for other practical applications, namely pattern recognition and object identification. If it is statistically observed that, for a given application, a set of subgraphs is present most of the time, one can decompose any given graph based on this a priori knowledge. Note, however, that this set of generalized separators has to be "orthogonal" in the sense that no separator is a subset of another, similar to Berry's idea of considering only minimal separators.

Figure 3.9: Comparing single vertex separators to more general separators.

Chapter 4

Graph Matching by Subgraph Decomposition

4.1 Representation of a Decomposed Graph

After decomposing a graph into a unique set of atoms, one is next interested in using this set for a particular representation of the graph that can be used for matching purposes. As explained in the previous chapter, the atoms of any graph are constructed by replicating the non-separable components along with the clique minimal separator that disconnected them. It would be meaningful as well to preserve information about the connectivity of these atoms as a set. That is why we propose to use a bipartite attributed relation graph (BARG) to represent the information conveyed by the atoms, their graphical structure and the adjacency between them. A bipartite graph is one in which the set of vertices can be divided into two distinct sets, V_1 and V_2, such that all edges in E connect a vertex in V_1 to another in V_2. In other words:

$$u, v \in V_i \Leftrightarrow (u, v) \notin E$$

In our case, one vertex set represents the atoms and the other represents the clique minimal separators. Doing so, one can assign the edges based on the adjacencies that can be derived from the original graph. It is clear that atoms in V_1 cannot be direct neighbors with one another without first passing through their common clique minimal separator. Considering again the decomposed graph from Chapter 3, it can be represented as a BARG as shown in Fig. 4.1, in which every atom is connected to its adjacent clique minimal separators. The next step is to collapse every atom and every clique minimal separator into an attributed vertex that retains the information of the original graphical structure. One advantage of this representation is that it results, on average, in a graph having fewer vertices than the original. But the key advantage is the fact that the vertices are attributed, which will be exploited for our matching approach.
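As an illustration (our own helper, assuming the atoms and separators are available as vertex sets), the BARG edge set can be assembled by linking each atom to every clique minimal separator replicated inside it:

```python
def build_barg(atoms, separators):
    """Build the BARG edge set: one side holds the atoms, the other the
    clique minimal separators; an edge links an atom to each separator
    replicated inside it.

    atoms, separators: lists of frozensets of original-graph vertices.
    Returns edges as (atom_index, separator_index) pairs.
    """
    return [(i, j)
            for i, atom in enumerate(atoms)
            for j, sep in enumerate(separators)
            if sep <= atom]          # separator was replicated into the atom
```

Each BARG node would then carry, as its attribute, a permutation-invariant description of the subgraph it collapses (see Section 5.1.1).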

Figure 4.1: Representing a decomposed graph as a BARG (atoms on one side, clique minimal separators on the other).

4.2 BARG Matching

Following the decomposition step, the problem of establishing an isomorphism or a subisomorphism between two graphs, G1 and G2, may be formulated as that of establishing a match between the corresponding decomposed structures. The result of any matching approach is a match matrix M having elements:

$$m_{ij} = \begin{cases} 1 & \text{if structure } i \text{ from } G_1 \text{ maps to structure } j \text{ from } G_2 \\ 0 & \text{otherwise.} \end{cases}$$

It may seem at first that no real reduction in complexity is achieved. The problem is still that of matching two graphs, for which an optimal solution can only be generated in NP time. Although the BARG may have fewer vertices than the original graph, each of these vertices is attributed, and either representation carries the same complexity as the other. Work previously proposed in the literature tries to overcome this complexity by working with heuristics. Some of these proposed heuristics, which reduce complexity by orders of magnitude and which yield polynomial time algorithms, are of statistical nature, but at the expense of being suboptimal. These approaches may either fail to converge, or converge to the wrong solution when the search is trapped in local extrema instead of absolute extrema. An extensive review of probabilistic graph matching algorithms can be found in [21, 22], where the two most interesting approaches simulate either a Viterbi-like approach or a belief propagation-like approach.

The relaxation approach transforms the original discrete problem into one over a continuous domain. Solving the original problem in a different domain, however, gives a suboptimal solution that can be argued to be close to the optimal one for certain applications. Therefore, the main idea is to allow the match matrix to take continuous values in the range [0, 1], and iterate while updating these values at each step with respect to the cost function, until convergence is reached. The updating process is of statistical nature and does not search the entire solution space, thus reducing the complexity to a polynomial time one. In the next section, we discuss a belief propagation-like approach to solving the match problem.

4.3 Graph Matching Using Probabilistic Relaxation

As mentioned above, the nomenclature of this approach comes from the fact that it relaxes a discrete optimization problem into a continuous one by reformulating it probabilistically. The theory of probabilistic relaxation approaches is discussed in detail in [23]. The original matching problem is formulated in Bayesian terms, where a match probability is assigned to each pair (v, u) ∀ v ∈ V_1, u ∈ V_2. A suboptimal solution is generated by a recursive algorithm that updates the assignment probabilities at each step. This recursion is analogous to belief propagation, a well known algorithm in coding theory [24, 25], where at each iteration a message passing takes place between neighboring vertices of a network until convergence is reached.

The matching problem can be formulated as such: let the two graphs that are to be matched be G_1(V_1, E_1) and G_2(V_2, E_2), where V_1 and V_2 are the sets of attributed vertices and E_1 and E_2 the sets of edges in each of the graphs. Since the vertices are attributed, define a function θ : V → Σ that assigns to each vertex from V_1 or V_2 an attribute from the set of possible attributes Σ. These attributes carry information about the structure of the underlying atoms and separators. Define also the assignment map T from V_1 to V_2, together with a probability measure such that P(T(v) = u) ∈ [0, 1] for all v ∈ V_1, u ∈ V_2. To simplify the notation, we write T_v instead of T(v) and θ_v instead of θ(v).

To proceed, one arbitrarily specifies V_2 as a template graph and V_1 as an object being matched to the template. Hence, the matching problem can be reformulated as that of finding the vertex u ∈ V_2 that maximizes a certain match probability for a vertex v_i ∈ V_1. This is in turn performed for all vertices v_i ∈ V_1. In belief propagation networks, this match probability looks at the match of a vertex v_i ∈ V_1 and its neighborhood on one side, to an image vertex u ∈ V_2 and the image's neighborhood on the other. In as much as we want individual vertices in V_1 to match vertices in V_2, we also want the neighborhood of the former to match the neighborhood of the latter. The key idea behind belief propagation algorithms is maximizing the following probability:

$$\max_{u \in V_2} P\left(T_{v_o} = u \mid \theta_{v_o}, \theta_{v_j}, A_{u,T_{v_j}}, \ \forall v_j \in N(v_o)\right) \quad (4.1)$$

where A_{u,T_{v_j}} refers to the adjacency between vertex u ∈ V_2 and the image of vertex v_j, where v_j ∈ V_1 but T_{v_j} ∈ V_2. A simple example is illustrated in Fig. 4.2. When processing vertex a in G_1, one can match it, according to its attributes, to all three vertices f, g, h in G_2. However, investigating the problem a step further, one sees that the neighborhood of vertex a, N(a) = {d}, only matches the neighborhood of vertex f, N(f) = {i}.

The maximization presented in Eq. (4.1) indicates that the goal is to match vertices in the two graphs based not only on the vertex attributes but on adjacency information as well. This is seen from the arguments of the probability function: when assigning vertex v_o from G_1 to vertex u from G_2, one takes into account the attribute θ_{v_o} and the attributes of its neighbors, θ_{v_j} ∀ v_j ∈ N(v_o), and makes sure to preserve the adjacencies between u and the images of those neighbors.

4.3.1 Recurrence Derivation

Following the derivation presented in [23], the maximization in Eq. (4.1) may be transformed into a recursion. Using Bayes' rule, we rewrite:

$$P\left(T_{v_o}=u \mid \theta_{v_o}, \theta_{v_j}, A_{u,T_{v_j}}, \forall v_j \in N(v_o)\right) = \frac{P\left(T_{v_o}=u,\ \theta_{v_o}, \theta_{v_j}, A_{u,T_{v_j}}, \forall v_j \in N(v_o)\right)}{P\left(\theta_{v_o}, \theta_{v_j}, A_{u,T_{v_j}}, \forall v_j \in N(v_o)\right)} \quad (4.2)$$

Using the total probability theorem, one can write

$$P(E_1 = e_1) = \sum_{e_2} P(E_1 = e_1, E_2 = e_2).$$

Letting E_2 = (T_{v_1} = u_1, ..., T_{v_n} = u_n), we rewrite Eq. (4.2):

$$P\left(T_{v_o}=u \mid \theta_{v_o}, \theta_{v_j}, A_{u,T_{v_j}}, \forall v_j \in N(v_o)\right) = \frac{\sum_{u_1 \in V_2} \cdots \sum_{u_n \in V_2} P\left(T_{v_1}=u_1, \dots, T_{v_n}=u_n, T_{v_o}=u,\ \theta_{v_o}, \theta_{v_j}, A_{u,T_{v_j}}\ \forall v_j \in N(v_o)\right)}{\sum_{u_o \in V_2} \sum_{u_1 \in V_2} \cdots \sum_{u_n \in V_2} P\left(T_{v_o}=u_o, T_{v_1}=u_1, \dots, T_{v_n}=u_n,\ \theta_{v_o}, \theta_{v_j}, A_{u_o,T_{v_j}}\ \forall v_j \in N(v_o)\right)} \quad (4.3)$$

The term that is present in both numerator and denominator of Eq. (4.3) can be rewritten as:

$$P\left(T_{v_1}=u_1, \dots, T_{v_n}=u_n,\ \theta_{v_o}, \theta_{v_j}, A_{u,T_{v_j}}\ \forall v_j \in N(v_o)\right) = P\left(\theta_{v_o}, \theta_{v_j}\ \forall v_j \in N(v_o) \mid T_{v_o}=u_o, \dots, T_{v_n}=u_n, A_{u,T_{v_j}}\right) \times P\left(T_{v_o}=u_o, \dots, T_{v_n}=u_n, A_{u,T_{v_j}}\ \forall v_j \in N(v_o)\right) \quad (4.4)$$

A reasonable assumption is that the attribute of a vertex depends only on the assignment of that vertex and is independent of the assignments of all other vertices. Hence:

$$P\left(\theta_{v_o}, \theta_{v_j}\ \forall v_j \in N(v_o) \mid T_{v_o}=u_o, \dots, T_{v_n}=u_n, A_{u,T_{v_j}}\right) = P\left(\theta_{v_o}, \theta_{v_j} \mid T_{v_o}=u_o, T_{v_j}=u_j\right) \quad (4.5)$$

Using the above, along with the fact that attributes are independent of one another, one rewrites:

$$P\left(\theta_{v_o}, \theta_{v_j}\ \forall v_j \in N(v_o) \mid T_{v_o}=u_o, \dots, T_{v_n}=u_n, A_{u,T_{v_j}}\right) = P\left(\theta_{v_o} \mid T_{v_o}=u_o\right) \times \prod_{v_j \in N(v_o)} P\left(\theta_{v_j} \mid T_{v_j}=u_j\right) \quad (4.6)$$

Looking at the second term in Eq. (4.4), one can rewrite:

$$P\left(T_{v_o}=u_o, \dots, T_{v_n}=u_n, A_{u,T_{v_j}},\ \forall v_j \in N(v_o)\right) = P\left(A_{u,T_{v_j}},\ \forall v_j \in N(v_o) \mid T_{v_o}=u_o, \dots, T_{v_n}=u_n\right) \times P\left(T_{v_o}=u_o, \dots, T_{v_n}=u_n\right) \quad (4.7)$$

Another reasonable assumption is that the edges (v_1, v_2) ∈ E, where v_1, v_2 ∈ V_2, are independent of one another. In other words, A_{u,T_{v_j}} is independent of A_{u,T_{v_k}} when j ≠ k. Moreover, the adjacency A_{u,T_{v_j}}, where u = T_{v_i}, depends only on T_{v_i} and T_{v_j} and not on any other T_{v_k}. So rewrite Eq. (4.7) as:

$$P\left(T_{v_o}=u_o, \dots, T_{v_n}=u_n, A_{u,T_{v_j}},\ \forall v_j \in N(v_o)\right) = \left[\prod_{v_j \in N(v_o)} P\left(A_{u,u_j}\right) \times P\left(T_{v_j}=u_j\right)\right] \times P\left(T_{v_o}=u_o\right) = \left[\prod_{v_j \in N(v_o)} A_{u,u_j} \times P\left(T_{v_j}=u_j\right)\right] \times P\left(T_{v_o}=u_o\right) \quad (4.8)$$

Combining the results of Eq. (4.6) and Eq. (4.8) into Eq. (4.3), we get:

$$P\left(T_{v_o}=u \mid \theta_{v_o}, \theta_{v_j}, A_{u,T_{v_j}}, \forall v_j \in N(v_o)\right) = \frac{\sum_{u_1 \in V_2} \cdots \sum_{u_n \in V_2} \left[\prod_{v_j \in N(v_o)} A_{u,u_j} \times P\left(\theta_{v_j} \mid T_{v_j}=u_j\right) P\left(T_{v_j}=u_j\right)\right] \times P\left(T_{v_o}=u\right) \times P\left(\theta_{v_o} \mid T_{v_o}=u\right)}{\sum_{u_o \in V_2} \sum_{u_1 \in V_2} \cdots \sum_{u_n \in V_2} \left[\prod_{v_j \in N(v_o)} A_{u_o,u_j} \times P\left(\theta_{v_j} \mid T_{v_j}=u_j\right) P\left(T_{v_j}=u_j\right)\right] \times P\left(T_{v_o}=u_o\right) \times P\left(\theta_{v_o} \mid T_{v_o}=u_o\right)}$$

Dividing numerator and denominator by $P(\theta_{v_o}) \times \prod_{v_j \in N(v_o)} P(\theta_{v_j})$:

$$= \frac{\sum_{u_1 \in V_2} \cdots \sum_{u_n \in V_2} \left[\prod_{v_j \in N(v_o)} A_{u,u_j} \times P\left(T_{v_j}=u_j \mid \theta_{v_j}\right)\right] \times P\left(T_{v_o}=u \mid \theta_{v_o}\right)}{\sum_{u_o \in V_2} \sum_{u_1 \in V_2} \cdots \sum_{u_n \in V_2} \left[\prod_{v_j \in N(v_o)} A_{u_o,u_j} \times P\left(T_{v_j}=u_j \mid \theta_{v_j}\right)\right] \times P\left(T_{v_o}=u_o \mid \theta_{v_o}\right)}$$

where the terms can be rearranged:

$$= \frac{P\left(T_{v_o}=u \mid \theta_{v_o}\right) \times Q\left(T_{v_o}=u\right)}{\sum_{u_o \in V_2} P\left(T_{v_o}=u_o \mid \theta_{v_o}\right) \times Q\left(T_{v_o}=u_o\right)} \quad (4.9)$$

where:

$$Q\left(T_{v_o}=u\right) = \sum_{u_1 \in V_2} \cdots \sum_{u_n \in V_2} \left[\prod_{v_j \in N(v_o)} A_{u,u_j} \times P\left(T_{v_j}=u_j \mid \theta_{v_j}\right)\right]$$

Since each of the factors depends only on one summation, this can be simplified as:

$$Q\left(T_{v_i}=u\right) = \prod_{v_j \in N(v_i)} \sum_{u_j \in V_2} A_{u,u_j} P\left(T_{v_j}=u_j \mid \theta_{v_j}\right) \quad (4.10)$$

4.3.2 Recursive Algorithm

Eq. (4.9) can be viewed as a recursion, where:

$$P^{(n+1)}\left(T_{v_o}=u \mid \theta_{v_o}\right) = \frac{P^{(n)}\left(T_{v_o}=u \mid \theta_{v_o}\right) \cdot Q^{(n)}\left(T_{v_o}=u\right)}{\sum_{u_o \in V_2} P^{(n)}\left(T_{v_o}=u_o \mid \theta_{v_o}\right) \cdot Q^{(n)}\left(T_{v_o}=u_o\right)} \quad (4.11)$$

where:

$$Q^{(n)}\left(T_{v_o}=u\right) = \prod_{v_j \in N(v_o)} \sum_{u_j \in V_2} P^{(n)}\left(T_{v_j}=u_j \mid \theta_{v_j}\right) \cdot A_{u,u_j} \quad (4.12)$$

Assuming that the attributes θ(·) do not change from one iteration to another, one can simplify the recursion to:

$$P^{(n+1)}\left(T_{v_o}=u\right) = \frac{P^{(n)}\left(T_{v_o}=u\right) \cdot Q^{(n)}\left(T_{v_o}=u\right)}{\sum_{u_o \in V_2} P^{(n)}\left(T_{v_o}=u_o\right) \cdot Q^{(n)}\left(T_{v_o}=u_o\right)} \quad (4.13)$$

and:

$$Q^{(n)}\left(T_{v_o}=u\right) = \prod_{v_j \in N(v_o)} \sum_{u_j \in V_2} P^{(n)}\left(T_{v_j}=u_j\right) \cdot A_{u,u_j} \quad (4.14)$$

This recursion illustrates the core of the belief propagation approach. It involves message passing between a vertex and its neighbors until the assignment converges. In Eq. (4.13), the term Q^{(n)} conveys the support received from the neighboring vertices for the particular assignment T(v_i) = u. Note that the recursion is independent of the original attributes θ(·). Attribute information will be useful, however, in assigning the initial probabilities at the start of the algorithm.

Note that P^{(n)}(·) can be viewed as a probabilistic analog representation of the map T(·). Hence, the initial assignment at the beginning of the algorithm should attempt an initial guess at the solution. For this initial guess, one looks at the vertex attributes and assigns vertices from G_1 to vertices in G_2 if they have the same attributes:

$$P^{(0)}\left(T_{v_o}=u\right) = P\left(T_{v_o}=u \mid \theta_{v_o}, \theta_u\right) \quad (4.15)$$

If more than one match exists, one should choose a uniform distribution over all vertices having the same attributes. Hence:

$$P^{(0)}\left(T_{v_o}=u\right) = \begin{cases} 0 & \text{if } \theta_{v_o} \neq \theta_u \\ \frac{1}{N} & \text{if } \theta_{v_o} = \theta_u \end{cases} \quad (4.16)$$
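In matrix form, the recursion of Eqs. (4.13)-(4.14) and the initialization of Eq. (4.16) can be sketched as follows (Python/numpy; function names are ours, and we assume every vertex of V_1 has at least one attribute match in V_2):

```python
import numpy as np

def initial_match(theta1, theta2):
    """Eq. (4.16): uniform probability over vertices with equal attributes.

    theta1, theta2: lists of hashable attributes for V_1 and V_2.
    Assumes every row has at least one candidate, so the division is safe.
    """
    P = np.array([[1.0 if t1 == t2 else 0.0 for t2 in theta2]
                  for t1 in theta1])
    return P / P.sum(axis=1, keepdims=True)      # 1/N over the candidates

def relaxation_step(P, A1, A2):
    """One iteration of Eqs. (4.13)-(4.14).

    P: |V_1| x |V_2| probability matrix; A1, A2: 0/1 adjacency matrices.
    """
    n1, n2 = P.shape
    Q = np.ones_like(P)
    for v in range(n1):
        for u in range(n2):
            for vj in np.flatnonzero(A1[v]):      # v_j in N(v)
                # support from v_j: sum over u_j of P(T_vj = u_j) * A_{u,u_j}
                Q[v, u] *= A2[u] @ P[vj]
    P = P * Q                                     # Eq. (4.13) numerator
    return P / P.sum(axis=1, keepdims=True)       # row normalization
```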

4.4 Analysis of the Belief Propagation Approach and Limitations

As discussed, the original matching problem falls under the category of nonlinear optimization. Since the problem has been transformed from a discrete space maximization to a continuous space maximization, one has to keep in mind that even in the existence of a unique solution that maximizes the given cost function, the recursion may fail to converge. For most practical problems, however, probabilistic relaxation guarantees a good suboptimal solution, as a best compromise to the NP-hard complexity of an exact solution to the matching problem.

By transforming the problem into a continuous maximization through probabilistic relaxation, the function P^{(n)}(·) is introduced, which is a probability measure taking continuous values. Throughout consecutive runs of the algorithm, this measure has to be constantly normalized to ensure that it represents a probability. The normalization is carried out along the rows of P^{(n)}(·), as shown in the denominator of Eq. (4.13). This normalization along the rows follows the paradigm of assigning one of the BARGs as a template and the other as an object to be matched. The major limitation of this approach is the fact that it may map multiple atoms from G_1 to a single vertex in G_2 [26, 22]. This many-to-one match may be required for some applications, but it leads to erroneous decisions when a one-to-one match is sought. Consider the example of Fig. 4.3: when evaluating the likelihood of vertex a mapping to vertex f, the decision receives two supports from the neighboring vertices: once when d maps to i and then when d maps to j. As stated above, this happens because a belief network looks at the neighbors one at a time. As a result, the approach makes the erroneous decision that vertex a maps to vertex f.

A sample case which illustrates this potential problem is shown in Fig. 4.4, where a graph is matched to a replica of itself (the atoms are labeled in the figure for convenience). Applying the belief propagation-based algorithm yields the following match matrix at convergence:

$$P^{(n\to\infty)} = \begin{array}{c|ccccc} & F & G & H & I & J \\ \hline A & 0.19 & 0 & 0.81 & 0 & 0 \\ B & 0 & 1 & 0 & 0 & 0 \\ C & 0 & 0 & 1 & 0 & 0 \\ D & 0 & 0 & 0 & 1 & 0 \\ E & 0 & 0 & 0 & 0 & 1 \end{array} \quad (4.17)$$

According to this matrix, one would map both A and C to H if the decision were made on the basis of highest probability. This can also be quantitatively explained by looking at Eq. (4.13). A simple computation will show that Q(T(A) = H) ≥ Q(T(A) = F), where Q(·) represents the received support from neighboring nodes. This indicates that A gets more support from its neighbors when it is assigned to H than when assigned to F. As discussed in the explanation above, B gets more "hits" (from G and I) when A is matched to H, rather than a single "hit" (from G) when A is matched to F.

The recursion exploits only binary adjacency information between two atoms (A_{u,u_j}). Since binary information looks at pairs of neighbors independently, higher-order (n-ary) relations between atoms are not captured.

Figure 4.2: Illustration of belief propagation (vertex a in G1 has three attribute matches in G2, but only a single match once neighborhoods are considered).

Figure 4.3: Shortcoming of belief propagation.

Figure 4.4: Matching a graph to a replica of itself (atoms labeled A–E in G1 and F–J in G2, with attributes such as dot, line, and triangle).

4.5 Modified Probabilistic Relaxation Method

As mentioned, it is hard to adapt belief propagation to handle n-ary attributes. Instead of working with n-ary attributes, one may instead reconsider the initial definition of the problem. We already mentioned that this belief propagation-like approach arbitrarily assigns one graph as a template and the other as an object to be matched to the template. In other words, the approach investigates only a one-way match. The solution to this particular shortcoming resides in enforcing a two-way matching constraint on the problem simultaneously.

We use an important result due to Sinkhorn [29], termed "alternating scaling", which can be used for satisfying simultaneous two-way constraints in matrix algorithms [26, 30]. In brief, a doubly stochastic matrix is one where the sum of entries along any row or any column is equal to one. In other words, a one-to-one match matrix is a doubly stochastic matrix. The result of Sinkhorn shows that any matrix whose entries are all positive can be made to converge to a doubly stochastic matrix by alternately scaling it, that is, by alternately normalizing its rows and columns. The algorithm discussed above can hence be modified to enforce a two-way constraint by alternate scaling of the matrix at each step. We propose hence our modified probabilistic relaxation algorithm for graph matching (Algorithm 4).

We will discuss convergence criteria and implementation issues in the next chapter. The bottleneck in this algorithm is the update step, which processes the neighborhood of every vertex. Carrying this out for all vertices in both graphs yields an O(E_1 E_2) complexity per iteration.

Algorithm 4 Modified Probabilistic Relaxation
  assign initial probabilities:
    P^{(0)}(T(v_i) = u) = 0 if θ(v_i) ≠ θ(u), and 1/N if θ(v_i) = θ(u)
  repeat
    update probabilities:
      P^{(n+1)}(T(v_i) = u) = P^{(n)}(T(v_i) = u) · Q^{(n)}(T(v_i) = u)
      where Q^{(n)}(T(v_i) = u) = ∏_{v_j ∈ N(v_i)} ∑_{u_j ∈ V_2} P^{(n)}(T(v_j) = u_j) · A_{u,u_j}
    normalize along rows:
      P^{(n+1)}(T(v_i) = u) ← P^{(n+1)}(T(v_i) = u) / ∑_{u_i ∈ V_2} P^{(n)}(T(v_i) = u_i) · Q^{(n)}(T(v_i) = u_i)
    normalize along columns:
      P^{(n+1)}(T(v_i) = u) ← P^{(n+1)}(T(v_i) = u) / ∑_{v_j ∈ V_1} P^{(n)}(T(v_j) = u) · Q^{(n)}(T(v_j) = u)
  until convergence
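A compact sketch of Algorithm 4 follows (Python, our own naming; it reuses relaxation_step from the sketch in Section 4.3.2, and the parameter values anticipate Section 5.1.2):

```python
import numpy as np

def modified_relaxation(P, A1, A2, max_iter=100, delta=0.01):
    """Algorithm 4 sketch: relaxation update plus Sinkhorn-style
    alternating row/column scaling to enforce the two-way constraint.
    """
    for _ in range(max_iter):
        P_new = relaxation_step(P, A1, A2)          # update + row scaling
        P_new /= P_new.sum(axis=0, keepdims=True)   # column scaling
        if np.abs(P_new - P).max() < delta:         # probabilities settled
            return P_new
        P = P_new
    return P
```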

Chapter 5

Implementation and Results

5.1 Implementation Details

We have implemented the proposed approach, including the decomposition and the modified probabilistic relaxation, in Matlab. On that note, a few additional remarks are in order concerning the representation of the decomposed atoms according to their structure and the convergence criteria.

5.1.1 Atom Representation

When we discussed the matching of the atoms based on their unary attributes in the previous chapter, we mentioned that these unary attributes carried information about each atom's structure. For illustrative purposes, we only provided a description of this structure in the previous chapter, using labels such as: dot, line, triangle, etc.

In the implementation, we represent the structure of each atom by its adjacency matrix's set of eigenvalues. As is well known, the eigenvalues of similar matrices are identical, since

$$\det(A - \lambda I) = \det(P A P^{-1} - \lambda I) \quad \text{for any permutation matrix } P.$$

Eigenvalues can generally be computed rapidly; this provides us with an efficient representation which does not depend on the permutation of the atom's adjacency matrix. A detailed discussion of graph spectral properties may be found in [31].

It is important to recall in this context that non-isomorphic graphs may display the same spectral properties; these are referred to in the literature as cospectral graphs [32]. Fig. 5.1 shows two graphs with cospectral adjacency matrices, and Table 5.1 shows the number of cospectral graphs and their fraction of the total number of graphs for graphs of various sizes [32]. For completeness, we should note that the fraction of cospectral graphs shown in the table is an overestimate of the potential number of problem cases that our modified probabilistic relaxation may run into. The problem cases for the approach occur only when two non-isomorphic atoms are cospectral. The numbers shown in the table account for all possible cospectral graphs, many of which may never be encountered as atoms. Recall that an atom is a non-separable component, while many of the cospectral graphs do not satisfy this property. It would be interesting to search for the number of cospectral graphs that may be encountered as atoms, thus causing potential problems for our approach.
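A minimal sketch of this attribute computation (ours; the thesis implementation was written in Matlab): the spectrum is sorted and rounded so that atoms can be compared by simple equality:

```python
import numpy as np

def atom_attribute(A, decimals=6):
    """Permutation-invariant unary attribute of an atom: the sorted
    spectrum of its adjacency matrix A (symmetric for undirected atoms).
    Rounding makes the tuple usable as a hashable, comparable label.
    """
    eigenvalues = np.linalg.eigvalsh(np.asarray(A, dtype=float))  # ascending
    return tuple(np.round(eigenvalues, decimals))
```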

Table 5.1: Number and fraction of cospectral graphs

n = |V|   # of graphs     # of cospectral graphs   fraction
2         2               0                        0
3         4               0                        0
4         11              0                        0
5         34              2                        0.059
6         156             10                       0.064
7         1044            110                      0.105
8         12346           1722                     0.139
9         274668          51038                    0.186
10        12005168        2560516                  0.213
11        1018997864      215331677                0.211

5.1.2 Convergence Criteria

We have not yet discussed the convergence criteria we use for the algorithm. We choose the following three:

• Stop the algorithm when a predefined maximum number of iterations is reached.

• Stop the algorithm when the sum of every row and every column of P^{(n)} is close to one:

$$\sum_i p_{ij} > 1 - \epsilon \quad \text{and} \quad \sum_j p_{ij} > 1 - \epsilon$$

• Stop the algorithm when the updated probabilities in the current iteration do not differ from the probabilities in the previous iteration:

$$|p_{ij}^{(n)} - p_{ij}^{(n-1)}| < \delta \quad (5.1)$$

For our implementation, we chose the maximum number of iterations to be 100, ε = 0.1, and δ = 0.01.
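The second criterion can be checked as in the following sketch (our own code, following the thresholds above):

```python
import numpy as np

def is_match_matrix(P, eps=0.1):
    """Second stopping criterion: every row sum and every column sum of
    P exceeds 1 - eps, i.e. P is close to doubly stochastic."""
    return ((P.sum(axis=1) > 1.0 - eps).all()
            and (P.sum(axis=0) > 1.0 - eps).all())
```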

5.2 Results

We now present matching results illustrating the situations that can arise when applying probabilistic relaxation. In order to decide which atoms of V_1 should be assigned to those in V_2, one needs P^{(n→∞)} to converge to a match matrix, in other words to a doubly stochastic matrix. Hence, our criterion for a "match" will be the following: if the modified probabilistic relaxation converges to a match matrix (one whose rows and columns sum up to one), then the two graphs are declared isomorphic. We look at the following three different cases that may arise in practical scenarios. Consider, first, matching two isomorphic graphs with no element of symmetry. This notion of symmetry is loosely defined here; it will become clearer in the second case. Consider matching the graph in Fig. 5.2 to its replica. This is the same matching problem for which the unmodified probabilistic relaxation gave the wrong solution in Chapter 4. After convergence, we get:

$$P^{(n\to\infty)} = \begin{array}{c|ccccc} & F & G & H & I & J \\ \hline A & 1 & 0 & 0 & 0 & 0 \\ B & 0 & 1 & 0 & 0 & 0 \\ C & 0 & 0 & 1 & 0 & 0 \\ D & 0 & 0 & 0 & 1 & 0 \\ E & 0 & 0 & 0 & 0 & 1 \end{array}$$

which is a doubly stochastic matrix. Hence the approach's decision would be that the two decomposed graphs match, in agreement with our expectations. Notice that the individual atoms match correctly.

Figure 5.2: Matching two isomorphic graphs.

In the second case, we match the two isomorphic graphs of Fig. 5.3, which contain an element of symmetry. The approach converges to the following matrix:

$$P^{(n\to\infty)} = \begin{array}{c|cccccc} & G & H & I & J & K & L \\ \hline A & 0.5 & 0 & 0.5 & 0 & 0 & 0 \\ B & 0 & 1 & 0 & 0 & 0 & 0 \\ C & 0.5 & 0 & 0.5 & 0 & 0 & 0 \\ D & 0 & 0 & 0 & 1 & 0 & 0 \\ E & 0 & 0 & 0 & 0 & 1 & 0 \\ F & 0 & 0 & 0 & 0 & 0 & 1 \end{array}$$

which is also a doubly stochastic matrix. Hence the decision is that the two decomposed graphs match. The assignment is not one-to-one, but a many-to-many match which takes into account symmetry factors in the graph. Since the original graphs are both symmetric in A and C (or G and I), the match matrix conveys this information with a uniform assignment over all potential matches.

Figure 5.3: Matching two isomorphic graphs having an element of symmetry.

In the third case, we match the two non-isomorphic graphs of Fig. 5.4, where the atoms are similar but the connectivity between them is different. The modified probabilistic relaxation converges to:

$$P^{(n\to\infty)} = \begin{array}{c|cccccc} & G & H & I & J & K & L \\ \hline A & 0.5 & 0 & 0.167 & 0 & 0.5 & 0 \\ B & 0 & 0.5 & 0 & 0.5 & 0 & 0 \\ C & 0.5 & 0 & 0.167 & 0 & 0.5 & 0 \\ D & 0 & 0.5 & 0 & 0.5 & 0 & 0 \\ E & 0 & 0 & 0.66 & 0 & 0 & 0 \\ F & 0 & 0 & 0 & 0 & 0 & 1 \end{array}$$

This matrix is not doubly stochastic, and the two graphs are hence declared non-isomorphic. In conclusion, the modified probabilistic relaxation converges to a match matrix when there is a high probability for the graphs to be isomorphic. The elements of the match matrix then indicate the correspondence of the atoms from the two graphs. On the other hand, the modified probabilistic relaxation fails to converge to a match matrix when the graphs are not isomorphic.

Figure 5.4: Matching two non-isomorphic graphs.

In our tests, the approach was able to detect graph isomorphism whenever present. In addition, our approach was able to successfully match corresponding atoms in the two graphs. Fig. 5.5 shows one illustrative example. At convergence, the resulting match matrix is:

$$P^{(n\to\infty)} = \begin{array}{c|ccccccccccccccccccc}
 & a & b & c & d & e & f & g & h & i & j & k & l & m & n & o & p & q & r & s \\ \hline
A & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
B & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
C & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
D & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
E & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
F & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
G & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
H & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
I & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
J & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.5 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
K & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.5 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
L & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
M & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
N & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
O & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
P & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
Q & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
R & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
S & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{array}$$

which indicates a match between the two graphs and maps corresponding atoms. When random noise is applied to the graph, the algorithm does not converge to a match matrix, even when the noise preserves the same number of atoms and their structure (see Fig. 5.6, where one edge is moved from one "triangle" in the graph to another).

Overall, the approach yields good results. More tests are required on larger graphs, and these are left for future work. A larger graph database is also needed in order to identify potential cases for which our proposed approach fails. It would be more meaningful, however, to tailor this approach to specific applications, namely those where we encounter graphs with known constraints, such as topological constraints [1, 2, 3].

Figure 5.5: Illustrative test case.

Figure 5.6: Illustrative test case with random noise.

5.3 Conclusion

Throughout this thesis, we have presented a novel approach to the graph matching problem with good suboptimal results in polynomial time, hence overcoming the NP-hard complexity of the graph/subgraph isomorphism problem. The results were promising, as expected. The first step is to decompose a graph uniquely into a set of non-separable atoms. Through a belief propagation-like approach, the atoms of two decomposed graphs are then matched while enforcing a two-way match constraint. As concluding remarks, we mention some extensions of the work presented here that are left for future research.

Although our implementation focused on the graph isomorphism problem, and the results were generated accordingly, our approach may be slightly modified to tackle subgraph isomorphism. The decomposition of a graph into components and the matching of corresponding components in different graphs is exactly the setting of subgraph isomorphism. Our approach, however, yields correct results only when the subgraph decomposes identically to the graph in which it can be embedded. This is illustrated in Fig. 5.7. Although both G2 and G3 are full subgraphs of G1, our approach only points to G2. For G3, the approach leaves out the line atom, only because it cannot be embedded in the BARG generated from G1. A correct solution to the subgraph isomorphism problem is hence found only if the two generated BARGs contain the same subgraphs.


Figure 5.7: Subgraph matching example with graphs G1, G2, and G3.

Bibliography

[1] D. Aouada and H. Krim. 3D object recognition using fully intrinsic skeletal models. SPIE Electronic Imaging Proceedings, 6814(8), 2008.

[2] D. Aouada and H. Krim. Intrinsic and complete modeling of 3D shapes using curved-Reeb graphs. IEEE Transactions on Image Processing (under review).

[3] D. Aouada, D. Dreisigmeyer, and H. Krim. Object geometric modeling by Whitney embedding of iso-geodesic curves. IEEE Conference on Computer Vision and Pattern Recognition, CVPR'08 (under review).

[4] S. H. Baloch and H. Krim. 3D object representation with topo-geometric shape models. Proceedings of EUSIPCO'05, 2005.

[5] A. Hamza and H. Krim. Geodesic object representation and recognition. 2003.

[6] Narsingh Deo. Graph Theory with Applications to Engineering and Computer Science. Prentice Hall, 1974.

[7] H. Whitney. Non-separable and planar graphs. Transactions of the American Mathematical Society, 34(2):339–362, 1932.

[8] C. G. Lekkerkerker and J. C. Boland. Representation of a finite graph by a set of intervals on the real line. Fundamenta Mathematicae, (51), 1962.

[9] A. Berry and J. Bordat. Separability generalizes Dirac's theorem. 1998.

[10] G. A. Dirac. On rigid circuit graphs. Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, 25:71–76, 1961.

[11] A. Berry and J. Bordat. Decomposition by clique minimal separators. 1997.

[12] Donald J. Rose and R. Endre Tarjan. Algorithmic aspects of vertex elimination. Pages 245–254, 1975.

[13] Donald J. Rose and Robert Endre Tarjan. Algorithmic aspects of vertex elimination on directed graphs. SIAM Journal on Applied Mathematics, 34(1):176–197, 1978.

[14] Robert E. Tarjan. Graph theory and Gaussian elimination. 1975.

[15] Donald Rose. A graph theoretic study of the numerical solution of sparse positive definite systems of linear equations. Pages 183–217.

[16] D. R. Fulkerson and O. A. Gross. Incidence matrices and interval graphs. Pacific Journal of Mathematics, 15(3), 1965.

[17] Anne Berry, Jean R. S. Blair, and Pinar Heggernes. Maximum cardinality search.

[18] Yngve Villanger. Lex M versus MCS-M. 2006.

[19] R. E. Tarjan. Decomposition by clique separators. Discrete Mathematics, (55):221–232, 1985.

[20] A. Berry. Graph extremities and minimal separation. 2003.

[21] Terry Caelli and Tiberio Caetano. Graphical models for graph matching: approximate models and optimal algorithms. Pattern Recognition Letters, 26(3):339–346, 2005.

[22] D. Conte, P. Foggia, C. Sansone, and M. Vento. Thirty years of graph matching in pattern recognition. International Journal of Pattern Recognition and Artificial Intelligence, 2004.


[24] J. Pearl. Fusion, propagation, and structuring in belief networks. Artificial Intelligence, 29(3):241–288, 1986.

[25] Robert J. McEliece, David J. C. MacKay, and Jung-Fu Cheng. Turbo decoding as an instance of Pearl's belief propagation algorithm. IEEE Journal on Selected Areas in Communications, 16(2), 1998.

[26] Steven Gold and Anand Rangarajan. A graduated assignment algorithm for graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(4):377–388, 1996.

[27] Ingemar J. Cox, Matt L. Miller, Thomas P. Minka, Thomas V. Papathomas, and Peter N. Yianilos. The Bayesian image retrieval system, PicHunter: theory, implementation, and psychophysical experiments. IEEE Transactions on Image Processing, 9(1), 2000.

[28] Kwok-Wai Cheung, Dit-Yan Yeung, and Roland T. Chin. A Bayesian framework for deformable pattern recognition with application to handwritten character recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12), 1998.

[29] Richard Sinkhorn. A relationship between arbitrary positive matrices and doubly stochastic matrices. The Annals of Mathematical Statistics, 35(2), 1964.

[30] Anand Rangarajan, Steven Gold, and Eric Mjolsness. A novel optimizing network architecture with applications, 1996.

[31] Fan R. K. Chung. Spectral Graph Theory. American Mathematical Society, 1997.

[32] Willem H. Haemers and Edward Spence. Enumeration of cospectral graphs. European Journal of Combinatorics, (25):199–211, 2004.
