Top 20 Better, Faster, Stronger Sequence Tagging Constituent Parsers

Our search for "Better, Faster, Stronger Sequence Tagging Constituent Parsers" matched 10,000 documents on our website. Below are the top 20 most relevant results.

Better, Faster, Stronger Sequence Tagging Constituent Parsers

... Sequence tagging models for constituent parsing are faster, but less accurate than other types of ... such constituent parsers: (a) high error rates around closing brackets of ...

Joshua 5.0: Sparser, Better, Faster, Server

... Joshua 5.0: Sparser, Better, Faster, Server. Matt Post, Juri Ganitkevitch, Luke Orland, Jonathan Weese, and Yuan Cao. Human Language Technology Center of Excellence; Center ...

Sparser, Better, Faster GPU Parsing

... The Viterbi algorithm is a reasonably effective method for parsing. However, many authors have noted that parsers benefit substantially from minimum Bayes risk decoding (Goodman, 1996; Simaan, 2003; Matsuzaki et ...

Intersecting Multilingual Data for Faster and Better Statistical Translations

... Various filtering techniques, such as (Johnson et al., 2007) and (Chen et al., 2008), have been applied to eliminate a large portion of the translation rules that were judged unlikely to be of value for the current ...

Direct Training for Spiking Neural Networks: Faster, Larger, Better

... significantly better accuracy than the reported works on neuromorphic datasets (N-MNIST and DVS-CIFAR10), and comparable accuracy to existing ANNs and pre-trained SNNs on non-spiking datasets ...

Smarter, Better, Faster, Stronger: The Informationalized Infrastructural Ideal

... Customers simply swipe their membership card, which authenticates the car and subscription via the Operations Center to activate the switch. The rest of the process is automated, similar to going through a car wash, so ...

Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression

... converging faster for quadratic functions, since for large n the convergence at rate 1/n² becomes slower than the one at rate (1 − γµ)ⁿ even for very small ...

Better, Stronger, Faster: Explaining the Variance Between Professional and Amateur Anti-Doping Policies

... Carr (1999) states, "because drugs make it possible to achieve better performances without additional training, they are unethical: sporting achievements should be earned by athletes" (in Miah 2004: 18). The ...

Harder, Better, Faster, Stronger - Elliptic Curve Discrete Logarithm Computations on FPGAs

... ECC Breaker versus Related Work. (i) In the current setup, the interface between ECC Breaker and a desktop is a simple, slow, serial interface. This might be a challenge for related implementations, but not for ECC ...

Sequence Tagging for Verb Conjugation in Romanian

... After comparing multiple discriminative training methods for CRFs, we have not observed significant variation between their results in terms of accuracy. This is not unexpected, given the small size of the dataset. ...

Multi-task Domain Adaptation for Sequence Tagging

... that better generalize for domain ... for sequence tagging problems considering two tasks: Chinese word segmentation and named entity ... works better than disjoint domain adaptation for ...

NiuParser: A Chinese Syntactic and Semantic Parsing Toolkit

... POS tagging, named entity recognition, shallow syntactic parsing (chunking), constituent parsing, dependency parsing, and constituent parse-based semantic role ... word sequence, each word in ...

Named-Entity Tagging and Domain Adaptation for Better Customized Translation

... Customized translation needs to pay special attention to the target-domain terminology, especially the named entities for the domain. Adding linguistic features to neural machine translation (NMT) has been shown to ...

Dependency Parser for Chinese Constituent Parsing

... and constituent parsing over a sentence are strongly related, they should benefit from each ... that constituent parsing may be smoothly altered to fit dependency ... to constituent structure, it ...

An Efficient Implementation of the Head Corner Parser

... In Section 7, I compare the head-corner parser with the other parsers implemented in the Programme for the OVIS application and show that the head-corner parser operates much faster ...

Evaluation of a Sequence Tagging Tool for Biomedical Texts

... NeuroNER (Dernoncourt et al., 2017) targets non-expert users and is based on Lample's Bi-LSTM model. The authors intended to make the tool easy to use by providing automatic format conversion from the brat format ...

A Hierarchical Neural Model for Learning Sequences of Dialogue Acts

... Independent DA classification. In this approach, each utterance is treated as a separate instance, which allows the application of general classification algorithms. Julia et al. (2010) employed a Support Vector ...

Improved Unsupervised POS Induction Using Intrinsic Clustering Quality and a Zipfian Constraint

... Table 3: Comparison of our algorithms with the recent fully unsupervised POS taggers for which results are reported. HK: (Haghighi and Klein, 2006), trained and evaluated with a corpus of 193K tokens and 45 induced tags. ...

Answer Extraction as Sequence Tagging with Tree Edit Distance

... a sequence tagging problem by deploying a fast and compact CRF model with simple features that capture many of the intuitions in prior "deep pipeline" ...

Semi-supervised sequence tagging with bidirectional language models

... set, we decreased the model size to 512 hidden units with a 256 dimension projection and normalized tokens in the same manner as input to the sequence tagging model (lower-cased, with all digits replaced ...
