
[PDF] Top 20: Query Expansion with Locally Trained Word Embeddings

Our website has 10,000 documents matching "Query Expansion with Locally Trained Word Embeddings". The top 20 are listed below.

Query Expansion with Locally Trained Word Embeddings

... embedding-based query expansion outperforms our query likelihood baseline across all ... various embeddings in different ... embedding trained on the target corpus significantly ...
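The excerpt points at the paper's core recipe: train word embeddings on the target corpus itself ("locally") and add each query term's nearest neighbours in that embedding space to the query. A minimal sketch with gensim; the toy corpus and every parameter below are illustrative, not the authors' setup:

```python
from gensim.models import Word2Vec

# Stand-in for the target retrieval corpus; "locally trained" means the
# embeddings are fit on this corpus rather than on external text.
target_corpus = [
    "query expansion adds related terms to the original query",
    "embeddings trained on the target corpus reflect local word usage",
    "locally trained word embeddings improve query expansion",
    "the query likelihood baseline ranks documents by term statistics",
]
sentences = [doc.split() for doc in target_corpus]
local_model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, epochs=50)

def expand_query(query_terms, model, k=3):
    """Append each query term's k nearest neighbours in embedding space."""
    expanded = list(query_terms)
    for term in query_terms:
        if term in model.wv:
            expanded += [w for w, _ in model.wv.most_similar(term, topn=k)]
    return expanded

print(expand_query(["query", "expansion"], local_model))
```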

Vector of Locally-Aggregated Word Embeddings (VLAWE): A Novel Document-level Representation

... each word vector that is both present in the document and associated to the respective ... the word embeddings are pre-trained on a very large set of documents, ...
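The construction named in the title can be sketched compactly: cluster pre-trained word vectors once, then represent a document by the concatenated sums of residuals between its words' vectors and their nearest centroid. The toy embeddings and cluster count below are assumptions for illustration; see the paper for the exact formulation:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy pre-trained embeddings; in the paper these come from a very large corpus.
rng = np.random.default_rng(0)
vocab = ["query", "expansion", "word", "embedding", "document", "retrieval"]
emb = {w: rng.standard_normal(50) for w in vocab}

# Cluster the embedding space once, over the whole vocabulary.
kmeans = KMeans(n_clusters=2, n_init=10).fit(np.stack(list(emb.values())))

def vlawe(doc_tokens):
    """Concatenate, per cluster, the summed residuals (vector - centroid)."""
    k, d = kmeans.cluster_centers_.shape
    rep = np.zeros((k, d))
    for w in doc_tokens:
        if w in emb:
            i = int(kmeans.predict(emb[w][None, :])[0])
            rep[i] += emb[w] - kmeans.cluster_centers_[i]
    return rep.reshape(-1)

print(vlawe(["query", "expansion", "document"]).shape)  # (2 * 50,) -> (100,)
```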

Mathematical Information Retrieval based on Type Embeddings and Query Expansion

... these query expansion models are also taken from Cummins et ... the query is expanded using the top s terms in the query as determined by document collection-wide TF-IDF ... a word ...
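The quoted step, following Cummins et al., ranks the query's own terms by collection-wide TF-IDF and keeps the top s. A rough sketch; the toy collection and the smoothed-IDF weighting are assumptions, not the paper's exact formula:

```python
import math
from collections import Counter

# Toy document collection used only to derive collection-wide statistics.
docs = [d.split() for d in [
    "integral of a polynomial function",
    "derivative of a polynomial",
    "integral calculus with functions",
]]
N = len(docs)

def idf(term):
    df = sum(term in d for d in docs)          # document frequency
    return math.log((N + 1) / (df + 1)) + 1    # smoothed IDF (assumed form)

def top_s_terms(query, s=2):
    tf = Counter(query.split())
    scores = {t: tf[t] * idf(t) for t in tf}   # collection-wide TF-IDF
    return sorted(scores, key=scores.get, reverse=True)[:s]

print(top_s_terms("integral of a polynomial function", s=2))
```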

Incorporating Semantic Word Representations into Query Expansion for Microblog Information Retrieval

... state-of-the-art query performance predictors in combination with different retrieval models for microblog ... top-k query processing in a huge microblog dataset for compact indexing and judicious ... train ...

Hyperspherical Query Likelihood Models with Word Embeddings

... Recently, word embeddings, which are continuous vector representations embedding word semantic information, have been utilized for enhancing the previous expansion techniques (Zhang et ...

Exploiting Task-Oriented Resources to Learn Word Embeddings for Clinical Abbreviation Expansion

... candidate expansion of “RF” and its semantics is similar to the “RF” in the intensive care medicine texts, we can determine that it should be the correct expansion of ... apply word ...

Fast Query Expansion on an Accounting Corpus using Sub-Word Embeddings

... Our objective here is to compare sub-word embeddings with word embeddings and understand how the robustness to small character-level perturbations affects the final recall after search. Since ...
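The robustness being tested comes from building a word's vector out of its character n-grams (FastText-style), so a small typo leaves most n-grams, and therefore the vector, largely intact. A self-contained toy illustration; the hashed n-gram table and all sizes are assumptions:

```python
import numpy as np

DIM, BUCKETS = 50, 10_000
rng = np.random.default_rng(0)
ngram_table = rng.standard_normal((BUCKETS, DIM))  # stand-in for learned n-gram vectors

def char_ngrams(word, n=3):
    w = f"<{word}>"                                # boundary markers
    return [w[i:i + n] for i in range(len(w) - n + 1)]

def subword_vector(word):
    idx = [hash(g) % BUCKETS for g in char_ngrams(word)]
    return ngram_table[idx].mean(axis=0)           # average of n-gram vectors

v1, v2 = subword_vector("receivable"), subword_vector("recievable")  # typo
cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
print(f"cosine similarity despite the typo: {cos:.2f}")  # stays high
```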

Learning Neural Word Salience Scores

... from word-level semantic representations have used numerous linear algebraic operators such as vector addition, element-wise multiplication, multiplying by a matrix or a tensor [8, ... on word ...
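The operators the excerpt lists are easy to show in miniature (toy vectors, purely illustrative):

```python
import numpy as np

u, v = np.array([0.2, 0.5, 0.1]), np.array([0.4, 0.1, 0.3])
W = 0.5 * np.eye(3)           # stand-in for a learned matrix

additive       = u + v        # vector addition
multiplicative = u * v        # element-wise multiplication
linear_map     = W @ (u + v)  # multiplying by a matrix
print(additive, multiplicative, linear_map)
```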

UDPipe at SIGMORPHON 2019: Contextualized Embeddings, Regularization with Morphological Categories, Corpora Merging

... In theory, concatenating all corpora of the same language should always be beneficial, considering the universal scheme used for annotation. Nonetheless, the merged model exhibits worse performance in many cases, ...

Towards High Accuracy Named Entity Recognition for Icelandic

... Another reason as to why word embeddings from a large external corpus are so beneficial for our model may be the underlying language. Icelandic, a morphologically rich language, presents special ...

Language Identification and Analysis of Code Switched Social Media Text

... 3). Word-Character LSTM: This model is a replication of the model proposed by Samih et ... has word and character ...nated word and character vectors to obtain the token ...
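The token representation the excerpt describes, a word embedding concatenated with a character-level BiLSTM summary of the same token, can be sketched as follows; all dimensions are illustrative, not the replicated model's actual configuration:

```python
import torch
import torch.nn as nn

class WordCharEncoder(nn.Module):
    def __init__(self, n_words, n_chars, word_dim=100, char_dim=25, char_hidden=25):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_hidden,
                                 bidirectional=True, batch_first=True)

    def forward(self, word_id, char_ids):
        # Final states of both BiLSTM directions summarize the characters.
        _, (h, _) = self.char_lstm(self.char_emb(char_ids))
        char_vec = torch.cat([h[0], h[1]], dim=-1)            # (1, 2*char_hidden)
        # Concatenated word and character vectors form the token vector.
        return torch.cat([self.word_emb(word_id), char_vec], dim=-1)

enc = WordCharEncoder(n_words=1000, n_chars=64)
token = enc(torch.tensor([42]), torch.tensor([[3, 7, 1, 9]]))
print(token.shape)  # torch.Size([1, 150])
```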

ReWE: Regressing Word Embeddings for Regularization of Neural Machine Translation Systems

... regressing word embeddings (ReWE) as a new regularization technique in a system that is jointly trained to predict the next word in the translation (categorical value) and its word ...
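The joint training the excerpt describes can be written as a two-term loss: standard cross-entropy for the categorical next-word prediction, plus a regression of the decoder's continuous output toward the gold word's embedding. The cosine form and the weight lam are assumptions; the paper defines the exact loss:

```python
import torch
import torch.nn.functional as F

def rewe_loss(logits, pred_emb, target_ids, emb_table, lam=0.1):
    ce = F.cross_entropy(logits, target_ids)               # categorical term
    gold = emb_table[target_ids]                           # gold word embeddings
    reg = 1.0 - F.cosine_similarity(pred_emb, gold, dim=-1).mean()
    return ce + lam * reg                                  # ReWE-style joint loss

# Shapes: batch B, vocabulary V, embedding dim d.
B, V, d = 4, 100, 32
logits, pred_emb = torch.randn(B, V), torch.randn(B, d)
emb_table, targets = torch.randn(V, d), torch.randint(0, V, (B,))
print(rewe_loss(logits, pred_emb, targets, emb_table))
```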

Word Alignment Modeling with Context Dependent Deep Neural Network

... with word alignment. As we do not have a large manually word aligned corpus, we use traditional word alignment models such as HMM and IBM model 4 to generate word alignment on a large ...

Natural Language Processing (Almost) from Scratch

... classifiers trained with different tagging conventions (see Section ... then trained on the existing training set, while keeping the ten networks fixed (“joined ...

Enhanced Word Representations for Bridging Anaphora Resolution

... of word representations ... these word embeddings is not suitable for resolving bridging anaphora, which requires the knowledge of associative similarity ... create word embeddings ...

A study of deep learning methods for de-identification of clinical notes in cross-institute settings

... the bidirectional character-level LSTM had an output size of 25; the learning rate was fixed at 0.005; the input layer for the word-level LSTM applied a dropout at probability of 0.5; the stochastic gradient ...

Unsupervised Multilingual Word Embedding with Limited Resources using Neural Language Models

... Table 2 shows the results on the different-domain condition. It shows that our method achieves better results overall than the unsupervised baseline models. The extremely poor performance of Conneau et al. (2018) ...

SWAP at SemEval-2019 Task 3: Emotion detection in conversations through Tweets, CNN and LSTM deep neural networks

... we trained the model again 10 times for 100 epochs, with a batch size of 64, using GoEmb + GTEmb for the data embeddings, with a validation set of 20% of the training data and an early stop when the micro F1 of ...

Adapting Pre-trained Word Embeddings For Use In Medical Coding

... Word embeddings are a recent addition to an NLP researcher’s toolkit. They are dense, real-valued vector representations of words that capture interesting properties among them. Word ...

DataSEARCH at IEST 2018: Multiple Word Embedding based Models for Implicit Emotion Classification of Tweets with Deep Learning

... the word embedding models we used in our imple- ... the word embedding techniques and the dataset it is trained on and its specific features as ... that word embedding in the next sections. ...
