Principle of compositionality

LP 1996 03: Compositionality

However, a compositional treatment of this kind of discourse phenomenon is quite feasible. In fact, the principle of compositionality itself points to a solution. Since (3) and (4) have identical truth-conditions, a richer notion of meaning is required if the principle of compositionality is to be saved for discourses. Truth-conditions of sentences (which involve possible worlds and assignments to free variables) are just one aspect of meaning. Another aspect is that the preceding discourse has a bearing on the interpretation of a sentence (and especially of the so-called discourse pronouns). Moreover, the sentence itself extends this discourse and thus has a bearing on the sentences that follow it. Hence a notion of meaning is required which takes into account the semantic contribution that a sentence makes to a discourse. Sentences (3) and (4) make different contributions to the meaning of the discourse, especially concerning the interpretation of later discourse pronouns. These ideas have led to Dynamic Predicate Logic (henceforth 'DPL'). It is a compositional theory that accounts not only for the phenomena treated in DRT but for other phenomena as well; see Groenendijk & Stokhof (1991). Thus we see that the program of requiring compositionality has suggested a particular solution.
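The richer, dynamic notion of meaning can be sketched formally. In DPL (Groenendijk & Stokhof 1991), a formula denotes a relation between input and output assignments rather than a set of assignments: existential quantification resets a variable, and conjunction is relational composition, which is what lets an existential in one conjunct bind a pronoun in a later one:

```latex
% Dynamic meanings as relations between assignments g, h
% (k[x]g: k differs from g at most in the value assigned to x)
\llbracket \exists x\,\phi \rrbracket =
  \{\langle g,h\rangle \mid \exists k :\ k[x]g \ \wedge\ \langle k,h\rangle \in \llbracket \phi \rrbracket\}

\llbracket \phi \wedge \psi \rrbracket =
  \{\langle g,h\rangle \mid \exists k :\ \langle g,k\rangle \in \llbracket \phi \rrbracket \ \wedge\ \langle k,h\rangle \in \llbracket \psi \rrbracket\}
```

Because the output assignment of the first conjunct is passed on as the input of the second, the binding effect of an existential "survives" the conjunction, which is exactly the discourse-level behavior that truth-conditions alone cannot capture.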

Unsupervised Compositionality Prediction of Nominal Compounds

For representing the meaning of individual words and their combinations in computational systems, distributional semantic models (DSMs) have been widely used. DSMs are based on Harris’ distributional hypothesis that the meaning of a word can be inferred from the context in which it occurs (Harris 1954; Firth 1957). In DSMs, words are usually represented as vectors that, to some extent, capture co-occurrence patterns in corpora (Lin 1998; Landauer, Foltz, and Laham 1998; Mikolov et al. 2013; Baroni, Dinu, and Kruszewski 2014). Evaluation of DSMs has focused on obtaining accurate semantic representations for words, and state-of-the-art models are already capable of obtaining a high level of agreement with human judgments for predicting synonymy or similarity between words (Freitag et al. 2005; Camacho-Collados, Pilehvar, and Navigli 2015; Lapesa and Evert 2017) and for modeling syntactic and semantic analogies between word pairs (Mikolov, Yih, and Zweig 2013). These representations for individual words can also be combined to create representations for larger units such as phrases, sentences, and even whole documents, using simple additive and multiplicative vector operations (Mitchell and Lapata 2010; Reddy, McCarthy, and Manandhar 2011; Mikolov et al. 2013; Salehi, Cook, and Baldwin 2015), syntax-based lexical functions (Socher et al. 2012), or matrix and tensor operations (Baroni and Lenci 2010; Bride, Van de Cruys, and Asher 2015). However, it is not clear to what extent this approach is adequate in the case of idiomatic multiword expressions (MWEs). MWEs fall into a wide spectrum of compositionality; that is, some MWEs are more compositional (e.g., olive oil) while others are more idiomatic (Sag et al. 2002; Baldwin and Kim 2010).
In the latter case, the meaning of the MWE may not be straightforwardly related to the meanings of its parts, creating a challenge for the principle of compositionality (e.g., snake oil as a product of questionable benefit, not necessarily an oil and certainly not extracted from snakes).
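The additive and multiplicative composition operations mentioned above are simple enough to sketch directly. The vectors below are invented toy values, not corpus-derived; in a real DSM they would come from co-occurrence counts or a trained embedding model, and a compound such as olive oil would be judged compositional when the composed vector lies close to the compound's own corpus vector:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def add_compose(u, v):
    """Additive composition (in the style of Mitchell and Lapata 2010)."""
    return [a + b for a, b in zip(u, v)]

def mult_compose(u, v):
    """Element-wise multiplicative composition."""
    return [a * b for a, b in zip(u, v)]

# Toy 3-dimensional vectors (hypothetical values, for illustration only).
olive = [0.9, 0.1, 0.2]
oil = [0.8, 0.3, 0.1]
olive_oil = [0.85, 0.2, 0.15]  # pretend corpus vector of the whole compound

# A high similarity between the composed vector and the observed compound
# vector suggests compositionality; a low one suggests idiomaticity.
print(round(cosine(add_compose(olive, oil), olive_oil), 3))
```

For an idiomatic compound like snake oil, the corpus vector of the compound would sit far from any composition of the constituent vectors, which is precisely what unsupervised compositionality prediction exploits.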

A Tensor-based Factorization Model of Semantic Compositionality

The principle of compositionality, often attributed to Frege, states that the meaning of a complex expression is a function of the meanings of its parts and the way those parts are (syntactically) combined (Frege, 1892). It is the fundamental principle that allows language users to understand the meaning of sentences they have never heard before, by constructing the meaning of the complex expression from the meanings of the individual words. Recently, a number of researchers have tried to reconcile the framework of distributional semantics with the principle of compositionality (Mitchell and Lapata, 2008; Baroni and Zamparelli, 2010; Coecke et al., 2010; Socher et al., 2012). However, the absolute gains of these systems remain somewhat unclear, and a simple method of composition – vector multiplication – often seems to produce the best results (Blacoe and Lapata, 2012).

COMPOSITIONALITY/NON-COMPOSITIONALITY OF IDIOMS: NON-NATIVE SPEAKERS’ CONSTRAINTS TO COMPREHENSION

Native speakers of English easily understand idiomatic expressions. Their daily utterances are littered with idiomatic expressions that would sound strange or even weird to non-native speakers. Idioms such as bring home the bacon, with flying colors, or steal the show are regarded as phrases whose meaning cannot be deduced from the literal meaning of their individual constituents. As a consequence, idioms do not generally follow the principle of compositionality, which contends that the meanings of the constituent parts of a complex expression and the way they are syntactically combined determine the meaning of the expression (van der Linden, 1993, cited in Vegge, 2011). In other words, idioms are non-compositional: the meaning of the expression is not determined by the individual meanings of its constituent parts. Hence, non-native speakers, whose language repertoire is constrained by the structures to which they have been exposed, find themselves presented with rather puzzling constructions. For example, a non-native speaker who has had no exposure at all to the idiom bring home the bacon would likely understand the phrase only on the literal level, obviously quite an unfortunate state of affairs.

Computing Semantic Compositionality in Distributional Semantics

Let us reconsider the highly underspecified definition of the Principle of Compositionality. Let us start by setting the syntactic relation that we want to focus on for the purposes of this study: following Guevara (2010) and Baroni and Zamparelli (2010), I model the semantic composition of adjacent Adjective-Noun pairs expressing attributive modification of a nominal head. In a second, analogous experiment, I also model the syntactic relation between adjacent Verb-Noun pairs expressing object selection by the verbal head. The complex expressions and their parts are, respectively, adjacent Adjective-Noun and Verb-Noun pairs and their corresponding constituents (respectively, adjectives and nouns, verbs and nouns) extracted from the British National Corpus. Furthermore, the meanings of both the complex expressions and their constituents are assumed to be the multidimensional context vectors obtained by building semantic spaces. What remains to be done, therefore, is to model the function combining the meanings of the constituent parts to yield the meaning of the resulting complex expression. This is precisely the main assumption made in this article: since we are dealing with multidimensional vector representations of meaning, we suggest that compositionality can be interpreted as a linear transformation function mapping two…
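Under this assumption, composition is a linear map W applied to the concatenation of the two constituent vectors. The sketch below applies a hand-fixed toy W in plain Python; in the actual setup W would be estimated from corpus data, for instance by least squares over observed (adjective, noun, phrase) vector triples, and the vectors would be the multidimensional context vectors described above. All the values here are hypothetical:

```python
def compose_linear(W, u, v):
    """Apply a linear map W (given as a list of rows) to the
    concatenation [u; v] of the two constituent vectors."""
    x = u + v  # list concatenation = vector concatenation
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]

# 2-dimensional toy vectors; W maps R^4 -> R^2.
red = [1.0, 0.0]
car = [0.0, 1.0]
W = [[0.5, 0.0, 0.5, 0.0],   # this particular W just averages
     [0.0, 0.5, 0.0, 0.5]]   # the two constituent vectors

print(compose_linear(W, red, car))  # -> [0.5, 0.5]
```

The point of learning W rather than fixing it is that different syntactic relations (attributive modification vs. object selection) can then receive different composition functions while the framework stays the same.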

LP 1998 14: Words in Contexts: Fregean Elucidations

I want to suggest that the context principle must be invoked in order to understand how elucidations can confer meanings on words. Cooperative readers first try to understand Frege's writings sympathetically, look at what rôle the primitive terms play in his theory, and finally take these rôles to determine their senses. But Frege has another principle, equally important as the context principle: the principle of compositionality (which we have already mentioned). Now, in an obvious way, the idea of compositionality seems to run counter to the context principle. Whereas the latter says that the meaning of words can only be determined from an antecedent understanding of the sentences in which they occur, the former has it just the other way round. The question is: what comes first, the parts or the…


Analyzing Compositionality-Sensitivity of NLI Models

It is worth noting that we do not wish this subset to be used as a benchmark for models to compete on, but rather as an analysis tool to explicitly reveal models' compositionality-awareness. Even though the testing setup has its own limitations, such as data sparsity and noisiness, it still serves as an initial step to highlight the problem of compositionality understanding (and to gain some important insights into models' behaviors, as shown below), which has been largely unexplored in the current neural literature. More importantly, we hope that this inspires future work on data collection that explicitly addresses the issue by adding compositionality requirements and lexical-feature balancing to the collection process.

On the Compositionality and Semantic Interpretation of English Noun Compounds

In the experiments described next, the architecture of the composition module varies according to the method used for creating compound representations, while the classification module…


A Context Theoretic Framework for Compositionality in Distributional Semantics

It seems natural to consider whether the lattice structure that is inherent in the vector representations used in computational linguistics can be used to model entailment… We believe…


The routes of sense: thought, semantic underdeterminacy and compositionality

…representations” and there are as many such representations as there are ways for the sentence to be ambiguous (Sperber and Wilson 1986/1995: 193). Note already one first tension here: ambiguity is ambiguity in the fully expanded content, yet it is identifiable as the ambiguity it is already at the level of assumption schemata; the representations themselves are at best “fragmentary” and “incomplete” representations of thoughts. So in turn each of the (first-level) disambiguated representations will then be further enriched (expanded, narrowed) according to the communicative intentions in a particular context. We thus have at least three layers of listing and ranking according to the principle of relevance and the various maxims inherited from the Gricean framework. One may reply that all along we merely identify incomplete logical forms (the notion seems perfectly coherent): but on the very same page we are told that these forms are never present to consciousness. The rankings are always done on the fully explicated content. I find this picture deeply flawed for the reasons discussed in the text.

Software engineering: a roadmap

[Keyword list from a figure: Compositionality, Change, NF Properties, Service view, Perspectives, Lifecycles, Architecture, Configurability, Domain specificity, Links]


Proceedings of the Workshop on Distributional Semantics and Compositionality

Any NLP system that does semantic processing relies on the assumption of semantic compositionality: the meaning of a phrase is determined by the meanings of its parts and their combination. For this, automatic methods are needed that are capable of reproducing the compositionality of language. Recent years have seen a renaissance of interest in distributional semantics. While distributional methods in semantics have proven very efficient in tackling a wide range of tasks in natural language processing, e.g., document retrieval, clustering and classification, question answering, query expansion, word similarity, synonym extraction, and relation extraction, among many others, they are still strongly limited by being inherently word-based. The main hurdle to further progress for vector space models is the ability to handle compositionality.

Recurrent Convolutional Neural Networks for Discourse Compositionality

The compositionality of meaning extends beyond the single sentence. Just as words combine to form the meaning of sentences, so do sentences combine to form the meaning of paragraphs, dialogues and general discourse. We introduce both a sentence model and a discourse model corresponding to the two levels of compositionality. The sentence model adopts convolution as the central operation for composing semantic vectors and is based on a novel hierarchical convolutional neural network. The discourse model extends the sentence model and is based on a recurrent neural network that is conditioned in a novel way both on the current sentence and on the current speaker. The discourse model is able to capture both the sequentiality of sentences and the interaction between different speakers. Without feature engineering or pretraining and with simple greedy decoding, the discourse model coupled to the sentence model obtains state-of-the-art performance on a dialogue act classification experiment.
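The recurrence at the heart of such a discourse model can be sketched abstractly. The function below is a generic conditioned RNN step, not the authors' architecture (their sentence vectors come from a hierarchical convolutional network, and the details of the conditioning are theirs); it only illustrates how a discourse state can be updated from the previous state, the current sentence vector, and a vector for the current speaker. All names and weight values are hypothetical:

```python
import math

def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def discourse_step(h_prev, sent, speaker, Wh, Ws, Wp):
    """One recurrence step: the next discourse state is conditioned on the
    previous state (Wh), the current sentence vector (Ws), and the current
    speaker vector (Wp), with a tanh nonlinearity."""
    pre = [a + b + c for a, b, c in zip(matvec(Wh, h_prev),
                                        matvec(Ws, sent),
                                        matvec(Wp, speaker))]
    return [math.tanh(x) for x in pre]

# Toy 2-dimensional example with hand-picked weights.
Wh = [[0.5, 0.0], [0.0, 0.5]]
Ws = [[1.0, 0.0], [0.0, 1.0]]
Wp = [[0.1, 0.0], [0.0, 0.1]]
h = [0.0, 0.0]
# Two turns of a dialogue: each pair is (sentence vector, speaker vector).
for sent, speaker in [([1.0, 0.0], [1.0, 0.0]), ([0.0, 1.0], [0.0, 1.0])]:
    h = discourse_step(h, sent, speaker, Wh, Ws, Wp)
print(h)
```

Because h is threaded through the loop, the state after the second turn depends on both sentences and both speakers, which is the "sequentiality plus speaker interaction" property the abstract describes.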

Confirming the Non-compositionality of Idioms for Sentiment Analysis

Idioms, a subset of MWEs, are particularly challenging to analyze because they are non-compositional: the meaning of the entire idiom cannot be deduced from the definitions of each individual word in it (Jochim et al., 2018). Treating idioms like “it’s raining cats and dogs” with a words-with-spaces approach can diminish the accuracy of a model that treats each word as the smallest unit of a sentence; the example idiom simply means that it is raining heavily and is unrelated to animals. Along with meaning, past work has already shown that ignoring idioms in sentiment analysis tasks will lower the accuracy of a sentiment classifier (Williams et al., 2015), but the non-compositionality of idiom sentiment is not included in the currently acknowledged definition of an idiom and should not be immediately assumed without further research.
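A words-with-spaces treatment can be sketched as a pre-tokenization pass that collapses each known idiom into a single token, so a downstream model sees it as one unit rather than as its literal words. The tiny lexicon below is illustrative, not a real resource:

```python
# Hypothetical idiom lexicon mapping surface forms to single tokens.
IDIOM_LEXICON = {
    "raining cats and dogs": "raining_cats_and_dogs",
    "bring home the bacon": "bring_home_the_bacon",
}

def tokenize_with_idioms(text):
    """Replace known idioms with single tokens, then split on whitespace."""
    for idiom, token in IDIOM_LEXICON.items():
        text = text.replace(idiom, token)
    return text.split()

print(tokenize_with_idioms("it's raining cats and dogs outside"))
# -> ["it's", 'raining_cats_and_dogs', 'outside']
```

A sentiment classifier can then assign the idiom token its own polarity instead of averaging the (misleading) polarities of "cats" and "dogs"; a production system would of course need lemmatization and gappy-idiom handling that this sketch omits.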

Modeling Semantic Compositionality of Croatian Multiword Expressions

…those with a score in the (3, 5] range are labeled as compositional (C). This gave us 44 compositional (C) and 56 non-compositional MWEs in the test set. We consider only the best-performing model from the previous experiment (the Skip-gram linear combination model). The model predicts C (positive class) if the prediction of the linear combination model defined by (2) is above a certain threshold; otherwise it predicts NC (negative class). We set the threshold to t = 3.11, obtained by optimizing the F1-score on the train set. The results are shown in Table 5. The overall classification accuracy is 0.64. The accuracy is higher for adjective-noun MWEs (0.72) than for verbal MWEs (0.54), which is in line with the results from the previous experiment. Precision is substantially lower than recall (0.56 vs. 0.82), indicating that the model more often predicts compositionality for a non-compositional MWE than the other way around, i.e., the predictions for non-compositional MWEs are often higher than they ought to be. Our model outperforms the accuracy of a majority-class (NC) baseline, which is 0.56, but not its F1-score, which is 0.72.
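The thresholded classification and the metrics reported above can be reproduced in a few lines. The scores and gold labels below are invented toy data; only the threshold value t = 3.11 and the C/NC labeling scheme come from the text:

```python
def threshold_classify(scores, t=3.11):
    """Predict compositional (C) when the model's score exceeds t, else NC."""
    return ["C" if s > t else "NC" for s in scores]

def precision_recall_f1(gold, pred, positive="C"):
    """Precision, recall, and F1 for the positive (C) class."""
    tp = sum(1 for g, p in zip(gold, pred) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Invented example scores (model outputs) and gold labels.
scores = [4.2, 2.9, 3.5, 1.8, 3.2]
gold = ["C", "NC", "NC", "NC", "C"]
pred = threshold_classify(scores)
print(pred, precision_recall_f1(gold, pred))
```

A precision below recall, as in the paper's results, corresponds to the model crossing the threshold too eagerly for non-compositional MWEs (more false positives than false negatives for class C).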

Abui Tripartite Verbs: Exploring the limits of compositionality

(It is thus unfortunate that English must be used for the glossing of such generic verbs.) The Kalam generic verbs often combine with each other. However, in the Kalam system, a root verb…


Learning Character-level Compositionality with Visual Features

Compositionality—the fact that the meaning of a complex expression is determined by its structure and the meanings of its constituents—is a hallmark of every natural language (Frege and Austin, 1980; Szabó, 2010). Recently, neural models have provided a powerful tool for learning how to compose words together into a meaning representation of whole sentences for many downstream tasks. This is done using models of various levels of sophistication, from simpler bag-of-words (Iyyer et al., 2015) and linear recurrent neural network (RNN) models (Sutskever et al., 2014; Kiros et al., 2015), to more sophisticated models using tree-…

A Dataset for Noun Compositionality Detection for a Slavic Language

The resulting dataset consists of 220 compound phrases with several full-sentence contexts collected from source texts. The number of contexts per compound is not fixed, as we extract all contexts that contain the compound from the UD treebanks. So far the contexts are not annotated and are not used in the experiments; however, one possible direction for future work would be compound disambiguation based on the contexts. A few examples are provided in Table 2, Table 3 presents the cross-tabulation of compound pattern and compound compositionality, and examples of the compound contexts are presented in Figure 1.

Measuring MWE compositionality using semantic annotation

In this paper, we explored an algorithm based on a semantic lexicon for automatically measuring the compositionality of MWEs. In our evaluation, the output of this algorithm showed moderate correlation with a manual ranking. We claim that semantic lexical resources provide another approach for automatically measuring MWE compositionality, in addition to the existing statistical algorithms. Although our results are not yet conclusive due to the moderate scale of the test data, our evaluation demonstrates the potential of lexicon-based approaches for the task of compositional analysis. We expect that combining our approach with statistical algorithms will yield further improvement.
