
Findings of the 2019 Conference on Machine Translation (WMT19)


Figure 1: Statistics for the training sets used in the translation task. The number of words and the number of distinct words (case-insensitive) is based on the provided tokenizer and IndicNLP (https://github.com/anoopkunchukuttan/indic_nlp_library) for Gujarati.
Figure 2: Statistics for the training and test sets used in the translation task. The number of words and the number of distinct words (case-insensitive) is based on the provided tokenizer and IndicNLP (https://github.com/anoopkunchukuttan/indic_nlp_library) for Gujarati.
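The captions above note that the Gujarati word counts rely on IndicNLP's tokenizer. A minimal sketch of how such counts could be reproduced is given below; the corpus file name, the case-folding step, and the overall script are illustrative assumptions rather than the official WMT19 tooling.

    # Sketch: count running words and distinct (case-insensitive) words in a
    # Gujarati corpus file, using the indic-nlp-library tokenizer named in the
    # figure captions. File path and case folding are assumptions for illustration.
    from indicnlp.tokenize import indic_tokenize

    def gujarati_word_stats(path):
        total = 0
        distinct = set()
        with open(path, encoding="utf-8") as f:
            for line in f:
                # trivial_tokenize applies IndicNLP's rule-based tokenization
                tokens = indic_tokenize.trivial_tokenize(line.strip(), lang="gu")
                total += len(tokens)
                distinct.update(tok.lower() for tok in tokens)
        return total, len(distinct)

    if __name__ == "__main__":
        words, types = gujarati_word_stats("train.gu")  # hypothetical corpus file
        print(f"words: {words}, distinct words: {types}")

For the other language pairs the provided tokenizer mentioned in the captions would be used in place of IndicNLP.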
Table 7: Summary of human evaluation configurations; M denotes reference-based/monolingual human evaluation in which the machine translation output was compared to a human-generated reference; B denotes bilingual/source-based evaluation where the human annotators evaluated MT output by reading the source language input only (no reference translation present); configurations comprising official results highlighted in bold.
Table 8: Amount of data collected in the WMT19 manual evaluation campaign (after removal of quality control items)

Related documents

In this paper, we describe our supervised neural machine translation (NMT) systems that we developed for the news translation task for Kazakh ↔ English, Gujarati ↔ English, Chinese

This paper describes a machine translation test set of documents from the auditing domain and its use as one of the “test suites” in the WMT19 News Translation Task for

Measurement of Progress in Machine Translation. Yvette Graham, Timothy Baldwin, Aaron Harwood, Alistair Moffat, Justin Zobel. Department of Computing and Information Systems, The University

We propose a neural machine translation (NMT) approach that, instead of pursuing adequacy and fluency ("human-oriented" quality criteria), aims to generate translations that

This paper proposed a recurrent neural network approach using quality vectors for estimating the quality of machine translation output at sentence level. The ranking variant

This consisted of five translation tasks: Machine Translation of News, Machine Translation of IT domain, Biomedical Translation, Multimodal Machine Translation, and

This paper presents the results of the WMT16 shared tasks, which included five machine translation (MT) tasks (standard news, IT-domain, biomedical, multimodal, pronoun),