Title: Building a Parallel Corpus and Training Translation Models Between Luganda and English
Authors: Kimera, Richard; Rim, Daniela N.; Choi, Heeyoul
Date: 2023 (accessioned/available 2023-01-27)
Citation: Kimera, R., Rim, D. N., & Choi, H. (2023). Building a Parallel Corpus and Training Translation Models Between Luganda and English. arXiv preprint arXiv:2301.02773.
DOI: https://doi.org/10.48550/arXiv.2301.02773
URI: https://nru.uncst.go.ug/handle/123456789/7349
Language: en
Keywords: Luganda; neural machine translation; Transformer; hyper-parameter
Type: Article

Abstract: Neural machine translation (NMT) has achieved great success with large datasets, so NMT research has largely focused on high-resource languages. This continues to disadvantage low-resource languages such as Luganda, which lack high-quality parallel corpora; even Google Translate did not serve Luganda at the time of writing. In this paper, we build a parallel corpus of 41,070 sentence pairs for Luganda and English, drawing on three different open-source corpora. We then train NMT models with hyper-parameter search on this dataset. Experiments yield a BLEU score of 21.28 from Luganda to English and 17.47 from English to Luganda, and example translations show high translation quality. We believe our model is the first Luganda-English NMT model. The bilingual dataset we built will be made available to the public.
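The abstract reports BLEU scores (21.28 and 17.47), the standard corpus-overlap metric for machine translation. As a rough illustration of what such a score measures, here is a minimal sentence-level BLEU sketch in pure Python (single reference, uniform n-gram weights, add-one smoothing); the paper does not specify its scoring tool, and production evaluation would normally use an established implementation such as sacreBLEU rather than this simplified version.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: modified n-gram precision
    (add-one smoothed) combined with a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # clipped overlap: each candidate n-gram counts at most as often
        # as it appears in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(len(cand) - n + 1, 0)
        if total == 0:
            return 0.0
        precisions.append((overlap + 1) / (total + 1))  # add-one smoothing
    # brevity penalty discourages overly short candidates
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

An exact match scores 1.0, and scores fall toward 0 as n-gram overlap with the reference decreases; reported BLEU is usually this value scaled to 0-100, so 21.28 corresponds to 0.2128 here.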