Semi-transitive orientations and word-representable graphs

Organization of the paper. The paper is organized as follows. In Section 2, we give definitions of objects of interest and review some of the known results. In Section 3, we give a characterization of word-representable graphs in terms of orientations and discuss some important corollaries of this fact. In Section 4, we examine the representation number, and show that it is always at most 2n − 4, but can be as much as n/2. We explore, in Section 5, which classes of graphs are word-representable, and show, in particular, that 3-colorable graphs are such graphs, but numerous other properties are independent of the property of being word-representable. Finally, we conclude with two open problems in Section 6.
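Since the orientation characterization is central to what follows, a minimal brute-force sketch of a semi-transitivity test may help fix the definition: an orientation is semi-transitive if it is acyclic and has no shortcut, i.e. no directed path v0 -> v1 -> ... -> vk whose closing arc v0 -> vk is present while some arc vi -> vj (i < j) is missing. The function name and interface below are ours, the input is assumed acyclic, and the path enumeration is exponential, so this is only for small graphs.

    def is_semi_transitive(n, arcs):
        # arcs: set of (u, v) pairs on vertices 0..n-1, assumed acyclic.
        arcs = set(arcs)
        succ = {v: [] for v in range(n)}
        for u, v in arcs:
            succ[u].append(v)

        def paths_from(u):
            # All directed paths starting at u (terminates since acyclic).
            stack = [[u]]
            while stack:
                path = stack.pop()
                if len(path) > 1:
                    yield path
                for w in succ[path[-1]]:
                    stack.append(path + [w])

        for u in range(n):
            for path in paths_from(u):
                if len(path) >= 3 and (path[0], path[-1]) in arcs:
                    # Closing arc exists: every arc inside the path must too.
                    for i in range(len(path)):
                        for j in range(i + 1, len(path)):
                            if (path[i], path[j]) not in arcs:
                                return False  # found a shortcut
        return True

    # Example: 0->1->2->3 closed by 0->3 but missing 0->2 is a shortcut:
    # is_semi_transitive(4, {(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)})  # False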

A comprehensive introduction to the theory of word-representable graphs

If we replace “3-representable” with “word-representable” in Theorem 12, we obtain a weaker, but clearly still true, statement, which is not hard to prove directly via semi-transitive orientations. Indeed, each path of length at least 3 added instead of an edge e can be oriented in a “blocking” way, so that there is no directed path between e’s endpoints. Thus, edge subdivision does not preserve the property of being non-word-representable. The following theorem shows that word-representability may be preserved under edge subdivision on some subclasses of word-representable graphs, but not on others.
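As a hypothetical toy instance of this blocking idea (reusing the is_semi_transitive sketch above): remove the edge {0, 3} from a transitively oriented tournament on four vertices and subdivide it into a path of length 3, oriented so that neither endpoint reaches the other along it.

    # Transitive tournament on {0, 1, 2, 3} minus the edge {0, 3}, plus
    # the subdividing path 0 - 4 - 5 - 3 oriented in a "blocking" way:
    # 0 -> 4 <- 5 -> 3, so no directed path joins 0 and 3 through it.
    arcs = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),   # remaining edges
            (0, 4), (5, 4), (5, 3)}                   # blocked new path
    print(is_semi_transitive(6, arcs))                # True: no shortcut arises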

Word-representability of triangulations of grid-covered cylinder graphs

The base for our inductive proof is the three graphs M, N and P in Figure 4.14, which are the only non-isomorphic triangulations of the GCCG on nine vertices containing no graph in Figure 4.11 as an induced subgraph. Each of these graphs can be semi-transitively oriented as shown in Figure 4.15. Note that we provide two semi-transitive orientations for the graphs N and P, which is essential in our inductive argument. Also, note that each triangulation of a GCCG with three sectors on less than nine vertices can be oriented semi-transitively (just remove the external layer in the graphs in Figure 4.15).

Word Representability of Line Graphs

Although almost all graphs are non-representable (as discussed in [1]) and even though a criterion in terms of semi-transitive orientations is given in [5] for a graph to be representable, essentially only two explicit constructions of non-representable graphs are known. Apart from the graph whose non-representability is proved in [2] in connection with solving an open problem in [1], the known constructions of non-representable graphs can be described as follows. Note that the property of being representable is hereditary, i.e., it is inherited by all induced subgraphs; thus adding additional nodes to a non-representable graph and connecting them in an arbitrary way to the original nodes will also result in a non-representable graph.

Solving computational problems in the theory of word-representable graphs

We raise some concerns about Conjecture 7, while confirming it for graphs on at most 9 vertices. In Section 3 we present a complementary computational approach using constraint programming, enabling us to count connected non-word-representable graphs. In particular, we report that, using 3 years of CPU time, we found that 64.65% of all connected graphs on 11 vertices are non-word-representable. Another important corollary of our results in Section 3 is the correction of the published result [19, 20] on the number of connected non-word-representable graphs on 9 vertices (see Table 2). In Section 4 we introduce the notion of a k-semi-transitive orientation, refining the notion of a semi-transitive orientation, and show that 3-semi-transitively orientable graphs are not necessarily semi-transitively orientable. Finally, in Section 5 we suggest a few directions for further research and experimentation.
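Under one natural reading of the refinement (the shortcut-freeness condition is required only for directed paths with at most k arcs), a k-semi-transitivity test is a bounded variant of the semi-transitivity sketch given earlier; the helper below is ours, not the paper's code.

    def is_k_semi_transitive(n, arcs, k):
        # arcs: set of (u, v) pairs on vertices 0..n-1, assumed acyclic;
        # only directed paths with 2..k arcs must be shortcut-free.
        arcs = set(arcs)
        succ = {v: [] for v in range(n)}
        for u, v in arcs:
            succ[u].append(v)

        def bounded_paths(u):
            stack = [[u]]
            while stack:
                path = stack.pop()
                if 2 <= len(path) - 1 <= k:   # path has len(path)-1 arcs
                    yield path
                if len(path) - 1 < k:
                    for w in succ[path[-1]]:
                        stack.append(path + [w])

        for u in range(n):
            for path in bounded_paths(u):
                if (path[0], path[-1]) in arcs:
                    for i in range(len(path)):
                        for j in range(i + 1, len(path)):
                            if (path[i], path[j]) not in arcs:
                                return False  # bounded-length shortcut
        return True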

Word-representability of face subdivisions of triangular grid graphs

Recently, Akrobotu et al. [1] proved that a triangulation of the graph G associated with a convex polyomino is word-representable if and only if G is 3-colorable. Inspired by this elegant result, in the present paper we characterize word-representable face subdivisions of triangular grid graphs. The paper is organized as follows. In Section 2 some necessary definitions, notation and known results are given. In Section 3 we state and prove our main result (Theorem 3.1) saying that a face subdivision of a triangular grid graph is word-representable if and only if it has no interior cell subdivided. Theorem 3.1 is proved using the notion of a smart (semi-transitive) orientation introduced in this paper. Finally, in Section 4 we apply our main result to face subdivisions of triangular grid graphs having equilateral triangle shape and to face subdivisions of the Sierpiński gasket graph.

New results on word-representable graphs

Recently, a number of fundamental results on word-representable graphs were obtained in the literature [8, 9, 11, 12, 13]. In particular, Halldórsson et al. [9] have shown that a graph is word-representable if and only if it admits a semi-transitive orientation. However, our knowledge of these graphs is still very limited and many important questions remain open. For example, how hard is it to decide whether a given graph is word-representable or not? What is the minimum length of a word that represents a given graph? How many word-representable graphs on n vertices are there? Does this family include all graphs of vertex degree at most 4?
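Behind these questions is a definition that is easy to state in code: letters x and y alternate in a word w if the restriction of w to {x, y} has no two equal adjacent letters, and w represents a graph if every vertex occurs in w and two vertices alternate exactly when they are adjacent. A minimal sketch (names and interface are ours):

    def alternate(word, x, y):
        # True if x and y alternate in word.
        sub = [c for c in word if c in (x, y)]
        return all(a != b for a, b in zip(sub, sub[1:]))

    def represents(word, vertices, edges):
        # True if word represents the graph (vertices, edges).
        edges = {frozenset(e) for e in edges}
        vs = list(vertices)
        if not all(v in word for v in vs):
            return False
        return all(
            alternate(word, x, y) == (frozenset((x, y)) in edges)
            for i, x in enumerate(vs) for y in vs[i + 1:]
        )

    # The word "abacbc" represents the path a - b - c:
    # represents("abacbc", "abc", [("a", "b"), ("b", "c")])  # True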

Improving Word Alignment by Semi-Supervised Ensemble

them, as described in Section 2.1. We use the symmetrized IBM Model 4 results by the grow-diag-final-and heuristic as our baseline (Model4GDF). Scores in Table 2 show the great improvement of supervised learning, which reduces the alignment error rate significantly (more than 5% AER points from the best sub-model, i.e. BerkeleyAligner). This result is consistent with Ayan and Dorr (2006)’s experiments. It is quite reasonable that the supervised model achieves a much higher classification accuracy of 0.8124 than any unsupervised sub-model. Besides, it also achieves the highest recall of correct alignment links (0.7027).
4.3 Experiments of Semi-supervised Models
We present our experiment results on semi-supervised models in Table 3. The two strategies of generating initial classifiers are compared. Tri-Bootstrap is the model using the original bootstrap sampling initialization; and Tri-Divide is the model using the dividing initialization as described in Section 3.2. Items with superscript 0 indicate models before the first iteration, i.e. initial models. The scores of BerkeleyAligner and the supervised model are also included for comparison.

Semi-Supervised Training for Statistical Word Alignment

Another important difference with previous work is that we are concerned with generating many-to-many word alignments. Cherry and Lin (2003) and Taskar et al. (2005) compared their results with Model 4 using “intersection” by looking at AER (with the “Sure” versus “Possible” link distinction), and restricted themselves to considering 1-to-1 alignments. However, “union” and “refined” alignments, which are many-to-many, are what are used to build competitive phrasal SMT systems, because “intersection” performs poorly, despite having been shown to have the best AER scores for the French/English corpus we are using (Och and Ney, 2003). Fraser and Marcu (2006) recently found serious problems with AER both empirically and analytically, which explains why optimizing AER frequently results in poor machine translation performance.
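For reference, the metric and heuristics discussed here are easy to state in code; a small sketch following the standard definitions in Och and Ney (2003), with function names of our choosing:

    def aer(A, S, P):
        # Alignment Error Rate: A = predicted links, S = "Sure" gold
        # links, P = "Possible" gold links (S is a subset of P).
        A, S, P = set(A), set(S), set(P)
        return 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))

    def symmetrize(e2f, f2e, how="union"):
        # Toy symmetrization of two directional alignments, each a set
        # of (i, j) links: "intersection" is high-precision and 1-to-1;
        # "union" is many-to-many, as used for phrase extraction.
        e2f, f2e = set(e2f), set(f2e)
        return e2f & f2e if how == "intersection" else e2f | f2e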

EMDC: A Semi-Supervised Approach for Word Alignment

The IBM Models (Brown et al., 1993) are a series of generative models for word alignment. GIZA++ (Och and Ney, 2003), the most widely used implementation of the IBM models and HMM (Vogel et al., 1996), employs the EM algorithm to estimate the model parameters. For simpler models such as Model 1 and Model 2, it is possible to obtain sufficient statistics from all possible alignments in the E-step. However, for fertility-based models such as Models 3, 4, and 5, enumerating all possible alignments is NP-complete. To overcome this limitation, GIZA++ adopts a greedy hill-climbing algorithm, which uses simpler models such as HMM or Model 2 to generate a “center alignment” and then tries to find better alignments among its neighbors. The neighbors of an alignment $a_1^J = [a_1, a_2, \ldots, a_J]$ with $a_j \in [0, I]$ are the alignments obtained from it by changing a single link (a move) or exchanging two links (a swap).
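A compact sketch of that neighborhood, mirroring the standard move/swap definition (a hypothetical helper, not GIZA++'s actual code):

    def neighbors(a, I):
        # a: alignment vector [a_1, ..., a_J] as a 0-indexed Python list,
        # with a[j] in 0..I and 0 playing the role of the empty word.
        J = len(a)
        # Moves: re-link one source position j to a different target i.
        for j in range(J):
            for i in range(I + 1):
                if i != a[j]:
                    yield a[:j] + [i] + a[j + 1:]
        # Swaps: exchange the links of two source positions.
        for j in range(J):
            for jp in range(j + 1, J):
                if a[j] != a[jp]:
                    b = a[:]
                    b[j], b[jp] = b[jp], b[j]
                    yield b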

Semi-Supervised Chinese Word Segmentation for CLP2012

In this paper, we report our work on the CLP2012 micro-blog word segmentation subtask. Specific to the characteristics of short text, we design our system in three steps. First, we train a statistical model to mainly segment known words. Then, we utilize an unsupervised segmentation method to identify unknown words. Third, for the words beyond the knowledge of the training data, we employ a dictionary-based approach. Generally, our system design is easy to implement and presents good segmentation results.


Semi-Supervised Word Alignment with Mechanical Turk

The IBM Models (Brown et al., 1993) are a series of generative models for word alignment. GIZA++ (Och and Ney, 2003) is the most widely used implementation of the IBM models and HMM (Vogel et al., 1996), where the EM algorithm is employed to estimate the model parameters. In the E-step, it is possible to obtain sufficient statistics from all possible alignments for simple models such as Model 1 and Model 2. Meanwhile, for fertility-based models such as Models 3, 4 and 5, enumerating all possible alignments is NP-complete. In practice, we use simpler models such as HMM or Model 2 to generate a “center alignment” and then try to find better alignments among its neighbors. The neighbors of an alignment $a_1^J = [a_1, a_2, \ldots, a_J]$, $a_j \in [0, I]$, are defined as the alignments that differ from it by a single move or swap.

Nonparametric Bayesian Semi-Supervised Word Segmentation

In this paper, we presented a hybrid generative/discriminative model of word segmentation, leveraging a nonparametric Bayesian model for unsupervised segmentation. By combining CRF and NPYLM within the semi-supervised framework of JESS-CM, our NPYCRF not only works as well as the state-of-the-art neural segmentation without hand tuning of hyperparameters on standard corpora, but also appropriately segments non-standard texts found in Twitter and Weibo, for example, by automatically finding “new words” thanks to a nonparametric model of infinite vocabulary.


Semi-Automatic Annotation of Chinese Word Structure

the E-step of the EM algorithm. Since the EM algorithm runs on the GMM, from now on POS fingerprint features represent the words instead. The following points may explain why it can improve the performance: (1) Even though the best testing accuracy with "hard assignment" given by the ME model is only 81%, the "true" structure analyses may still exist among the top-k candidates with relatively large probabilities, while irrelevant ones may have only small probability mass. (2) In general, the assignments that EM induces do not necessarily correspond to the desired classification tags, but the ME outputs can give the EM a better starting point to move towards the right one among all possible local optima, given that the data likelihood and the classification accuracy are well correlated. (3) From the perspective of the original ME model, the connections and similarities between data points from a much bigger sample (21151 vs. 500) may help fix the high-variance problem discussed in Section 4.3. The final soft assignments for the 100 testing words are obtained by applying the E-step to them with the parameters estimated in previous iterations. To get the hard assignment, we simply select the assignment with the highest probability for each word. The evaluation of the hard assignments is still based on testing accuracy, which stays at 90% in multiple runs that we have tried.
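The soft-to-hard collapse described at the end is a per-word argmax over the E-step posteriors; a minimal sketch (array names are ours):

    import numpy as np

    def hard_assignments(resp):
        # resp: (n_words, n_classes) posterior probabilities from the
        # E-step; each word gets its highest-probability class.
        return np.argmax(resp, axis=1)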

Automorphism Groups of Weakly Semi-Regular Bipartite Graphs

The isomorphism and automorphism of graphs are largely used in data structures for database retrieval, in cryptography, etc. In the study of graph parameters, graph isomorphisms and graph automorphisms have a big role. A study of direct products and uniqueness of automorphism groups of graphs was done by W. Peisert [13]. A characterization of automorphism groups of generalized Hamming graphs was done by F. A. Chaouche and A. Berrachedi [4]. Here we consider mainly two families of graphs: SM sum graphs and SM balancing graphs. SM sum graphs are related to the intrinsic connection between the powers of 2 and the natural numbers, which is the basic logic of the binary number system, while the SM balancing graphs are associated with the balanced ternary number system, which was used in the SETUN computer made in Russia. These graphs are vertex-labelled graphs and are explained in the next section.

Semi-Supervised Multi-Task Word Embeddings

Table 3 shows the results of transferring the learned semi-supervised multi-task learning (SS-MTL) embeddings to analogy tasks. Here, we analyze (1) whether the word meta-embeddings carry over to analogy even if not all embedding algorithms preserve analogy relations and (2) whether the similarity encoded with SS-MTL has any effect on performance on analogy. We find that, in general, semi-supervised MTL that incorporates similarity scores has some transferability to analogy, at least based on the scores provided by the aforementioned word similarity datasets. Using word similarity scores for supervision is a general measure of similarity, whereas analogy relations are more specific, hence it is not surprising that the difference in performance is slight. However, for Google Analogy, the largest of the three datasets with the smallest range of relation types, we find that the SS-MTL model previously trained with the Cosine-Brier's loss functions shows the best performance overall. This is consistent with findings from Table 2, where the same model performs best on 4 of 6 word similarity datasets. This suggests that performing additional nonlinear meta-word encoding somewhat preserves the linear structures found in models such as skipgram and fasttext. Additionally, it remains clear that Brier's score (i.e., the ℓ2 loss for classification)…
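For reference, Brier's score here is the squared (ℓ2) error taken on predicted class probabilities; a minimal sketch (names are ours):

    import numpy as np

    def brier_score(probs, onehot):
        # probs, onehot: (n_samples, n_classes) arrays; mean squared
        # error between predicted probabilities and one-hot targets.
        return np.mean(np.sum((probs - onehot) ** 2, axis=1))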

The equational theories of representable residuated semigroups

Remark 2.3. The reader may wonder whether there is a finite axiomatization for the quasivariety $R(\cdot, ;, \backslash, /)$ of representable algebras. The problem with representing an arbitrary algebra $B$ satisfying the axioms is as follows. Assume that $a \backslash a \le b ; c$ in $B$, for some elements $b, c$ that are not in $E$, and we are in a step-by-step construction dealing with composition for $a \backslash a \in \ell_\alpha(u, u)$. Then we need $v$ such that $b \in \ell_{\alpha+1}(u, v)$ and $c \in \ell_{\alpha+1}(v, u)$.


Language classification from bilingual word embedding graphs

how bilingual word embeddings can be employed for the task of semantic (Q2) language classification. Our approach here is simple: we project languages onto a common pivot language p so as to make them comparable. We directly use bilingual word embeddings for this. More precisely, we first project languages ℓ into a common semantic space with the pivot p by means of bilingual word embedding methods. Subsequently, we ignore language ℓ words in the joint space. Semantic distance measurement between languages then amounts to comparison of graphs that have the same nodes — pivot language words — and different edge weights — semantic similarity scores between pivot language words based on bilingual embeddings that vary as a function of the second language ℓ involved. This core idea is illustrated in Figures 1 and 2.
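A minimal sketch of this comparison, assuming a dictionary mapping each pivot word to its vector in the joint space induced by a given second language (the interface is ours; the paper's pipeline differs in detail):

    import numpy as np

    def pivot_graph(emb, pivot_words):
        # Weighted adjacency of the complete graph on pivot words:
        # cosine similarities of their vectors in one bilingual space.
        V = np.stack([emb[w] for w in pivot_words]).astype(float)
        V /= np.linalg.norm(V, axis=1, keepdims=True)
        return V @ V.T

    def language_distance(emb_l1, emb_l2, pivot_words):
        # The two graphs share their nodes (pivot words) and differ
        # only in edge weights, so compare the weight matrices.
        G1 = pivot_graph(emb_l1, pivot_words)
        G2 = pivot_graph(emb_l2, pivot_words)
        return np.linalg.norm(G1 - G2)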

A Study on Cayley Graphs of Factorizable Inverse Semigroups

The Cayley graph of a group was introduced by Arthur Cayley in 1878, and Cayley graphs of groups have received serious attention since then. The Cayley graphs of semigroups are generalizations of Cayley graphs of groups. The whole Section 2.4 of the book [7] is devoted to Cayley graphs of semigroups. In 1964, Bosak [2], and in 1981, Zelinka [9], studied certain graphs over semigroups. Recently, in 2006, Kelarev [6] studied Cayley graphs of inverse semigroups. The concept of a factorizable inverse semigroup was given by Chen and Hsieh in [3]. Green's relations play an important role in the theory of semigroups. In this paper, we study the Cayley graphs of factorizable inverse semigroups relative to Green's equivalence classes.
