A Character level Decoder without Explicit Segmentation for Neural Machine Translation
Related documents
framework for machine translation (Kalchbrenner and Blunsom, 2013; Cho et al., 2014), which employs a recurrent neural network (RNN) encoder to model the source
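The encoder-decoder framework this snippet refers to can be sketched at the character level: a recurrent hidden state is updated from the previously emitted character, and a softmax over the character vocabulary selects the next output. The plain-Python sketch below is a minimal illustration under assumed toy settings (the vocabulary, random weights, and greedy decoding loop are all assumptions for the example, not any paper's actual model):

```python
import math
import random

VOCAB = ["<eos>", "h", "e", "l", "o"]  # tiny illustrative character vocabulary

def softmax(logits):
    # numerically stable softmax over a list of scores
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def step(h, x, Wh, Wx, Wo):
    """One decoder step: h' = tanh(Wh.h + Wx.x), then softmax(Wo.h') over VOCAB."""
    n = len(h)
    h_new = [math.tanh(sum(Wh[i][j] * h[j] for j in range(n)) +
                       sum(Wx[i][k] * x[k] for k in range(len(x))))
             for i in range(n)]
    probs = softmax([sum(Wo[c][i] * h_new[i] for i in range(n))
                     for c in range(len(VOCAB))])
    return h_new, probs

def greedy_decode(context, Wh, Wx, Wo, max_len=10):
    """Greedily emit characters until <eos> or max_len, seeded by a context vector."""
    h, x = context[:], [0.0] * len(VOCAB)
    out = []
    for _ in range(max_len):
        h, probs = step(h, x, Wh, Wx, Wo)
        c = max(range(len(probs)), key=probs.__getitem__)
        if VOCAB[c] == "<eos>":
            break
        out.append(VOCAB[c])
        x = [1.0 if i == c else 0.0 for i in range(len(VOCAB))]  # one-hot feedback
    return "".join(out)

random.seed(0)  # deterministic toy weights
n = 4
rand_mat = lambda r, c: [[random.uniform(-1, 1) for _ in range(c)] for _ in range(r)]
Wh, Wx, Wo = rand_mat(n, n), rand_mat(n, len(VOCAB)), rand_mat(len(VOCAB), n)
print(greedy_decode([0.1, -0.2, 0.3, 0.0], Wh, Wx, Wo))
```

In a real character-level decoder the recurrence is a trained GRU or LSTM and the context comes from the encoder; the shape of the computation per step is the same.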
A Character Aware Encoder for Neural Machine Translation. Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3063–3070,
Nonparametric Word Segmentation for Machine Translation. Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 815–823, Beijing, August
6.1 Translation Task: Large Track NIST. We first report the experiments using our monolingual unigram Dirichlet Process model for word segmentation on the NIST machine translation
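The unigram segmentation idea mentioned in this snippet, scoring a sentence as a product of independent word probabilities, can be illustrated without the Dirichlet Process machinery: given a fixed log-probability table, a Viterbi search recovers the highest-scoring segmentation. The lexicon below is a toy assumption for the example, not the paper's learned model:

```python
def segment(text, word_logprob, max_word_len=12):
    """Viterbi search for the max-probability segmentation under a unigram model."""
    best = [0.0] + [float("-inf")] * len(text)   # best[j]: score of text[:j]
    back = [0] * (len(text) + 1)                 # back[j]: start index of last word
    for j in range(1, len(text) + 1):
        for i in range(max(0, j - max_word_len), j):
            w = text[i:j]
            if w in word_logprob and best[i] + word_logprob[w] > best[j]:
                best[j] = best[i] + word_logprob[w]
                back[j] = i
    words, j = [], len(text)
    while j > 0:                                 # backtrace through best split points
        words.append(text[back[j]:j])
        j = back[j]
    return words[::-1]

# toy lexicon (an assumption for illustration)
lex = {"machine": -2.0, "translation": -2.5, "ma": -4.0, "chine": -4.0}
print(segment("machinetranslation", lex))  # → ['machine', 'translation']
```

A nonparametric model like the paper's would additionally learn `word_logprob` from raw text, but the decoding step it relies on has this shape.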
End-to-end training makes the neural machine translation (NMT) architecture simpler, yet elegant compared to traditional statistical machine translation (SMT). However, little
We did not consider phrase-based statistical machine translation (PBSMT) and hierarchical phrase-based statistical machine translation (HPBSMT), because the OSM approach achieved
We experiment in two different scenarios: 1) a bilingual setting where we train a model on data from a single language pair; and 2) a multilingual setting where the task
Tibetan texts are all pre-processed with word segmentation in traditional Tibetan machine translation (Guan, 2015). In this article, the traditional method of Tibetan