word-representable graphs


New results on word-representable graphs

A graph G = (V, E) is word-representable if there exists a word w over the alphabet V such that letters x and y alternate in w if and only if (x, y) ∈ E, for each x ≠ y. The set of word-representable graphs generalizes several important and well-studied graph families, such as circle graphs, comparability graphs, 3-colorable graphs, graphs of vertex degree at most 3, etc. By answering an open question from [9], in the present paper we show that not all graphs of vertex degree at most 4 are word-representable. Combining this result with some previously known facts, we derive that the number of n-vertex word-representable graphs is 2^(n²/3 + o(n²)).
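The alternation condition in this definition is easy to check mechanically. Below is a small illustrative sketch (not from the paper; the function names are mine) that tests whether a given word represents a given graph:

```python
from itertools import combinations

def alternate(word, x, y):
    """True iff the occurrences of x and y strictly alternate in word."""
    sub = [c for c in word if c in (x, y)]
    return all(a != b for a, b in zip(sub, sub[1:]))

def represents(word, vertices, edges):
    """True iff `word` word-represents the graph (vertices, edges)."""
    edge_set = {frozenset(e) for e in edges}
    if set(word) != set(vertices):  # every vertex must occur in the word
        return False
    return all(alternate(word, x, y) == (frozenset((x, y)) in edge_set)
               for x, y in combinations(sorted(vertices), 2))
```

For example, the path a–b–c is represented by the word "bacbc", while "abc" represents the triangle, since letters occurring once each trivially alternate.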

Solving computational problems in the theory of word-representable graphs

It is also interesting to identify minimal non-word-representable graphs of each size, i.e. graphs containing no non-word-representable proper induced subgraphs. To do this, we stored all non-word-representable graphs of each size. After generating with geng all possible graphs with one more vertex, we eliminated graphs containing one of the stored graphs as an induced subgraph. We did this with a simple constraint model that tries to find a mapping from the vertices of the induced subgraph to the vertices of the larger graph and, if successful, discards the larger graph from consideration. This enabled us to count all minimal non-word-representable graphs of each size up to 9, as shown in Table 2. The filtering process we used was too inefficient to complete the cases n ≥ 10.
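The induced-subgraph filter described above can be sketched as a brute-force search for an adjacency-preserving injection. The authors used a constraint model; the exhaustive version below is only an illustrative substitute:

```python
from itertools import permutations

def has_induced_subgraph(G, H):
    """Brute-force test: does G contain H as an induced subgraph?
    A graph is a pair (vertex list, set of frozenset edges)."""
    gv, ge = G
    hv, he = H
    for image in permutations(gv, len(hv)):
        m = dict(zip(hv, image))
        # the mapping must preserve both edges and non-edges
        if all((frozenset((m[u], m[v])) in ge) == (frozenset((u, v)) in he)
               for i, u in enumerate(hv) for v in hv[i + 1:]):
            return True
    return False
```

A newly generated graph would be discarded as soon as this check succeeds for any stored non-word-representable graph.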

Semi-transitive orientations and word-representable graphs

Circle graphs. Circle graphs are those whose vertices can be represented as chords on a circle in such a way that two vertices are adjacent if and only if the corresponding chords overlap. Assigning a letter to each chord and listing the letters in the order they appear along the circle, one obtains a word in which each letter appears twice and two vertices are adjacent if and only if the occurrences of their letters alternate [4]. Therefore, circle graphs are the same as 2-word-representable graphs.
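As a sketch of this correspondence (my own illustrative code, not from the paper), the circle graph of a chord word can be computed directly from the alternation condition:

```python
from itertools import combinations

def circle_graph(word):
    """Edge set of the circle graph encoded by `word`, in which each letter
    occurs exactly twice (its two occurrences are the chord's endpoints
    read around the circle): chords overlap iff their letters alternate."""
    letters = sorted(set(word))
    assert all(word.count(c) == 2 for c in letters)

    def alternates(x, y):
        sub = [c for c in word if c in (x, y)]
        return all(a != b for a, b in zip(sub, sub[1:]))

    return {frozenset((x, y)) for x, y in combinations(letters, 2)
            if alternates(x, y)}
```

For instance, `circle_graph("abab")` yields one edge (crossing chords), while `circle_graph("abba")` yields none (nested chords).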



A comprehensive introduction to the theory of word-representable graphs

If we replace “3-representable” by “word-representable” in Theorem 12, we obtain a weaker, but clearly still true, statement, which is not hard to prove directly via semi-transitive orientations. Indeed, each path of length at least 3 added in place of an edge e can be oriented in a “blocking” way, so that there is no directed path between e’s endpoints. Thus, edge subdivision does not preserve the property of being non-word-representable. The following theorem shows that word-representability may be preserved under edge subdivision on some subclasses of word-representable graphs, but not on others.

Representing graphs via pattern avoiding words

7. Other notions of word-representable graphs. As mentioned in Section 2, apart from our main generalization of the notion of a word-representable graph, given in Definition 1, we have another generalization, given in Definition 5 below. In this section, we also state some other ways to define the notion of a (directed) graph representable by words. Our definitions can be generalized to the case of hypergraphs by simply allowing the words defining edges/non-edges to be over alphabets containing more than two letters. However, since the focus of this paper is the study of 12-representable graphs, we leave the notions introduced below for future study.

Existence of μ-representation of graphs

Jones et al. have shown that any graph is 11···1-representable provided the number of 1s is at least three, while the class of 12-representable graphs is properly contained in the class of comparability graphs, which, in turn, is properly contained in the class of word-representable graphs, corresponding to 11-representable graphs. Further studies in this direction were conducted by Nabawanda, who has shown, in particular, that the class of 112-representable graphs is not included in the class of word-representable graphs.

Word-representability of face subdivisions of triangular grid graphs

Recently, a number of (fundamental) results on word-representable graphs were obtained in the literature; for example, see [1], [3], [5], [7], [9], [11], and [12]. In particular, Halldórsson et al. [7] have shown that a graph is word-representable if and only if it admits a semi-transitive orientation (to be defined in Section 2), which, among other important corollaries, implies that all 3-colorable graphs are word-representable. The theory of word-representable graphs is the main subject of the upcoming book [8].
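For readers who want to experiment with this characterization, the following sketch checks semi-transitivity by brute force. The definition used (acyclic and free of "shortcuts") is the standard one from the literature; the code itself and its exponential-time approach are mine and purely illustrative:

```python
def is_semi_transitive(vertices, arcs):
    """Brute-force check that a directed graph is semi-transitive:
    acyclic, and every directed path v0 -> ... -> vk whose endpoints are
    joined by the arc (v0, vk) induces a transitive orientation (all arcs
    (vi, vj) with i < j present). Exponential time; small graphs only."""
    arcs = set(arcs)

    def simple_paths(u):
        # enumerate all simple directed paths starting at u
        stack = [(u, (u,))]
        while stack:
            node, path = stack.pop()
            if len(path) >= 2:
                yield path
            for a, b in arcs:
                if a == node and b not in path:
                    stack.append((b, path + (b,)))

    for u in vertices:
        for path in simple_paths(u):
            if (path[-1], u) in arcs:   # the path closes into a directed cycle
                return False
            if (u, path[-1]) in arcs:   # potential shortcut: check transitivity
                if any((path[i], path[j]) not in arcs
                       for i in range(len(path))
                       for j in range(i + 1, len(path))):
                    return False
    return True
```

A transitive tournament passes the check, while a directed triangle (cyclic) or a path with a shortcutting arc fails.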

Word-representability of triangulations of grid-covered cylinder graphs

Our proof is organized as follows. In Subsection 4.1 we provide all six minimal non-word-representable graphs that can appear in triangulations of GCCGs with three sectors (see Figure 4.11) and give an explicit proof that one of these graphs is non-word-representable. Then, in Subsection 4.2, we give an inductive argument showing that avoidance of the six graphs in Figure 4.11 is a sufficient condition for a GCCG with three sectors to be word-representable. Note that the graphs in Figure 4.11 were obtained by an exhaustive computer search over graphs on up to eight vertices. However, our argument in Subsection 4.2 shows that no other non-word-representable induced subgraphs can be found among triangulations of GCCGs with three sectors.

On k-11-representable graphs

Motivations for studying word-representable graphs include the fact, shown in [10], that these graphs generalize several important classes of graphs, such as circle graphs [3], 3-colourable graphs and comparability graphs [14]. The relevance of word-representable graphs to scheduling problems was explained in [7], building on [6]. Furthermore, the study of word-representable graphs is interesting from an algorithmic point of view, as explained in [10]. For example, the Maximum Clique problem is polynomially solvable on word-representable graphs [10], while this problem is NP-complete in general [2]. Finally, word-representable graphs are an important class among the graph classes considered in the literature that are defined using words. Examples of other such classes are polygon-circle graphs [13] and word-digraphs [1].

Can Syntactic and Logical Graphs help Word Sense Disambiguation?

Local syntactic graphs perform better than logical graphs in Table 12. However, when we ran other experiments using frequency of co-occurrences, local logical graphs gave superior performance in comparison with the other contexts (sentence context window, syntactic graphs). We noted two issues in the Senseval experiment. On the one hand, the syntactic parsing was very noisy, due to improper punctuation (see the example above) and syntactic errors in the dependency parsing; the syntactic links between the words had many erroneous or underspecified high-level dependencies. On the other hand, we obtained very incomplete logical representations using our logical analyzer on this data set. As previously said, this might indicate that the analyzer does not cover enough syntactic structures, but we also noticed that noisy syntactic relationships had a big impact on our analyzer, which looks for specific, well-defined syntactic structures. Despite these two issues, local syntactic and logical graph contexts raised the performance of some WSD algorithms. Improving the syntactic and logical analysis might therefore improve the accuracy of WSD. This also raises the question of the kind of corpora that should be made available to the research community committed to full parsing for WSD; obtaining better-quality texts is probably one of the requirements of such approaches. Further work will tackle the enhancement of the logical analyzer, but also the manual definition of logical contexts, in order to avoid any impact of bad syntactic and logical analyses on the WSD and to test our logical contexts on clean, non-noisy data.

Co-occurrence graphs for word sense disambiguation in the biomedical domain

Regardless of whether we refer to general or specific domains, such as the biomedical one, it is commonly accepted in the literature [5, 6, 1] that most WSD algorithms fall into one of the following categories: techniques that need labelled training data, and knowledge-based techniques. The first category, also called supervised techniques, usually applies machine learning (ML) algorithms to labelled data to develop a model based on features extracted from the context of the ambiguous words. The development of these features requires a comprehensive understanding of the problem being addressed [7]. Many studies address general WSD from this supervised point of view, through the use of classical machine learning algorithms [8] and, in the last few years, also by adapting new techniques such as word embeddings [9]. When it comes to the biomedical domain, many works also belong to this category, making use of different ML approaches to address the problem [10, 11, 12, 13], although the bottleneck caused by the scarcity of labelled resources remains a major problem. Other, semi-supervised works attempt to relieve this issue by introducing “pseudo-data” into the training examples [14, 15].

Multi Sentence Compression: Finding Shortest Paths in Word Graphs

What properties are characteristic of a good compression? It should be neither too long nor too short. It should pass through the nodes which represent important concepts but should not pass through the same node several times. It should correspond to a likely word sequence. To satisfy these constraints we invert edge weights, i.e., link frequencies, and search for the shortest path (i.e., the lightest in terms of edge weights) from start to end of a predefined minimum length. Such a path is likely to mention salient words from the input and to put together words found next to each other in many sentences. This is the first method we consider. We set the minimum path length (in words) to eight, which appeared to be a reasonable threshold on a development set: paths shorter than seven words were often incomplete sentences.
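The search described above can be sketched as a dynamic program over (node, path length) pairs. This is an illustrative reconstruction, not the authors' implementation; the lattice encoding, node names, and the simplistic handling of repeated nodes are my assumptions:

```python
import math

def shortest_path_min_length(edges, start, end, min_len, max_len=20):
    """Lightest start-to-end path with at least `min_len` edges.
    `edges` maps a node to {successor: weight}; weights are inverted
    link frequencies (1 / frequency), so frequent transitions are cheap.
    DP over (node, number of edges used); a sketch that tolerates
    repeated nodes rather than forbidding them outright."""
    best = {(start, 0): (0.0, [start])}
    for step in range(max_len):
        for (node, k), (cost, path) in list(best.items()):
            if k != step:
                continue
            for nxt, w in edges.get(node, {}).items():
                if cost + w < best.get((nxt, k + 1), (math.inf,))[0]:
                    best[(nxt, k + 1)] = (cost + w, path + [nxt])
    candidates = [v for (n, k), v in best.items() if n == end and k >= min_len]
    return min(candidates, default=None)
```

On a toy word graph, the cheapest sufficiently long path wins over heavier alternatives; if no path meets the minimum length, the function returns None.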

Word Sense Induction & Disambiguation Using Hierarchical Random Graphs

Another graph-based method is presented in (Dorow and Widdows, 2003). They extract only noun neighbours that appear in conjunctions or disjunctions with the target word. Additionally, they extract second-order co-occurrences. Nouns are represented as vertices, while an edge is drawn between two vertices if their associated nouns co-occur in conjunctions or disjunctions more than a given number of times. This co-occurrence frequency is also used to weight the edges. The resulting graph is then pruned by removing the target word and vertices with a low degree. Finally, the MCL algorithm (Dongen, 2000) is used to cluster the graph and produce a set of clusters (senses), each consisting of a set of contextually related words.
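A minimal sketch of this graph construction step (the thresholds, function name, and data layout are my own illustrative choices, not the paper's):

```python
from collections import Counter

def build_cooccurrence_graph(pairs, min_count=2, min_degree=2, target=None):
    """Nouns are vertices; an edge joins two nouns co-occurring (e.g. in
    conjunctions/disjunctions) at least `min_count` times, weighted by
    that frequency. The target word and low-degree vertices are then
    pruned, as in the described method. Thresholds are illustrative."""
    counts = Counter(frozenset(p) for p in pairs if p[0] != p[1])
    edges = {e: c for e, c in counts.items()
             if c >= min_count and (target is None or target not in e)}
    degree = Counter(v for e in edges for v in e)
    kept = {v for v, d in degree.items() if d >= min_degree}
    # keep only edges whose both endpoints survived the degree pruning
    return {e: c for e, c in edges.items() if e <= kept}
```

The resulting weighted edge dictionary would then be handed to a clustering algorithm such as MCL.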

Relation Prediction for Unseen Entities Using Entity Word Graphs

Japanese works, and Entity2 is an American voice actor. However, the description of Entity1 does not directly indicate that Entity1 can be attributed to Japanese works. Therefore, the KGC model needs to recognize this from a few features, such as the Japanese-English words “manga” and “anime”. The word “manga” also appears in the description of Entity3, which is a Japanese movie, and this description also contains words specific to Japanese works such as “Japanese animated” and “Hayao Miyazaki”, a famous Japanese animator. Our method makes use of the graph structure: Entity1 is located near Japanese-work entities such as Entity3 on the graph, which propagates this information to Entity1. Therefore, our method recognizes that Entity1 relates to Japanese works and can correctly predict the relation.

Essentia: Mining Domain specific Paraphrases with Word Alignment Graphs

Future work involves various directions. One direction is to derive domain-specific sentence templates from corpora. These templates can be useful for natural language generation in question-answering systems or dialogue systems. Second, the current method can be extended to mine paraphrases from a wide range of syntactic units other than verb phrases. Also, the word aligner can be improved to align prepositions more accurately, so that the generated alignment graph reveals more paraphrases. Finally, ESSENTIA can also be used to identify linguistic patterns other than paraphrases, such as phatic expressions (e.g., “Excuse me”, “All right”), which will in turn allow us to identify the essential constituents of a sentence.

Words are Vectors, Dependencies are Matrices: Learning Word Embeddings from Dependency Graphs

Within computational linguistics, most research on word meaning has focused on developing Distributional Semantic Models (DSMs), based on the hypothesis that a word’s sense can be inferred from the contexts it appears in (Harris, 1954). DSMs associate each word with a vector (a.k.a. word embedding) that encodes information about its co-occurrence with other words in the vocabulary. In recent work, the most popular DSMs learn the embeddings using neural-network architectures. In particular, the Skip-gram model of Mikolov et al. (2013) has gained a lot of traction due to its efficiency and high-quality representations. Skip-gram embeddings are trained with an objective that forces them to be similar to the vectors of their words’ contexts. The latter, context-word vectors, are a separate parameter of the model, learned jointly with the main target-word vectors. Like most DSMs, the model of Mikolov et al. (2013) derives the contexts of a word from a pre-defined window of words that surround it.
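The window-based context extraction that Skip-gram relies on can be sketched in a few lines. This is an illustrative reconstruction, not word2vec's actual code; the real model then trains target and context vectors on these pairs:

```python
def skipgram_pairs(tokens, window=2):
    """Enumerate (target, context) training pairs for a skip-gram model:
    each word is paired with every word within `window` positions."""
    pairs = []
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        pairs.extend((target, tokens[j]) for j in range(lo, hi) if j != i)
    return pairs
```

Dependency-based variants, as in this paper, replace the positional window with the word's neighbours in a dependency graph.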

Regional Cricket Identities: The construction of class narratives and their relationship to contemporary supporters

University of Huddersfield Repository: Stone, Duncan. Regional Cricket Identities: The construction of class narratives and their relationship to contemporary supporters.


Using Domain Knowledge about Medications to Correct Recognition Errors in Medical Report Creation

A new concept edge is inserted into the word graph for each path matching one of the generated spoken forms of the medications database. The inserted concept edges span from the first matching node to the last matching node on the path. Figure 2 shows the word graph from Figure 1 with an inserted concept edge (in bold). For each inserted concept edge, new concept-edge attributes are assigned, containing the IDs of the original edges as children, their added scores plus an additional concept score, and the sequence of words. Since no large-scale experiments have yet been carried out, so far the concept score which is added to the individual scores of the children is an arbitrary number which improves the score of the medication subpath in contrast to
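A hedged sketch of the insertion step described above. The lattice encoding, the function name, and the scoring are my assumptions, not the system's actual data structures:

```python
def insert_concept_edges(lattice, phrase, concept_score=1.0):
    """Insert a concept edge for every lattice path labelled by `phrase`
    (e.g. a generated spoken form of a medication name). `lattice` maps
    a node to a list of (next_node, word, score) edges; the inserted edge
    spans from the first to the last matching node, scored as the
    children's summed scores plus `concept_score`."""
    inserted = []

    def walk(node, i, score, path_start):
        if i == len(phrase):
            # full match: add one edge spanning the whole matched path
            lattice.setdefault(path_start, []).append(
                (node, " ".join(phrase), score + concept_score))
            inserted.append((path_start, node))
            return
        for nxt, word, s in list(lattice.get(node, [])):
            if word == phrase[i]:
                walk(nxt, i + 1, score + s, path_start)

    for start in list(lattice):
        walk(start, 0, 0.0, start)
    return inserted
```

The returned spans identify where concept edges were added; downstream decoding would then prefer these boosted subpaths.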

The Relevance of Computational Techniques for Literary Interpretations on a Micro-Level

covering the bigger chunks of text earlier on in the study. The case study thus shows how merely using a set of computational methods did not account for all words in Siddhartha that are connected to the four themes. To really find each and every word, four lists of words would have to be made manually. This is why it is important always to keep in mind that graphs are indeed not as objective as they seem, and to always combine close reading with distant reading, even if just to check whether the data is accurate. Lastly, of course, this shows the importance of transparency when it comes to the data that is used for an argumentation. As Peter Verhaar writes: ‘Counts are rarely neutral, as they invariably demand decisions on what and how to count. Discussions about counts are not trivial, as decisions on how to count can directly have a strong impact on the outcomes of subsequent statistical processing.’
