Computer Engineering and Applications ›› 2022, Vol. 58 ›› Issue (12): 310-316. DOI: 10.3778/j.issn.1002-8331.2011-0275

• Engineering and Applications •


Application of Non-Autoregressive Translation Model in Mongolian and Chinese Translation

ZHAO Xu, SU Yila, RENQING Dao’erji, SHI Bao   

  1. College of Information Engineering, Inner Mongolia University of Technology, Hohhot 010080, China
  • Online:2022-06-15 Published:2022-06-15


Abstract: Most current machine translation models are autoregressive: the decoder cannot generate target tokens in parallel, so the generation rate is low. To address these problems, this paper experiments with translation models based on the Transformer and the non-autoregressive Transformer (NAT), applying knowledge distillation and cross-language word embedding to the Mongolian-Chinese corpus. Experimental results show that the NAT model with knowledge distillation achieves a significant improvement in BLEU and also raises the generation rate. After knowledge distillation and cross-language word embedding, the NAT model markedly reduces the dependence between the source and target languages and improves the BLEU of Mongolian-Chinese machine translation: compared with the Transformer model, BLEU rises by 2.8 and time consumption falls by 19.34 h.
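The distill-then-train pipeline summarized above can be sketched as follows. This is a minimal illustration of sequence-level knowledge distillation and one-step NAT decoding; the lexicon-lookup "teacher" and "student", the function names, and the toy tokens are hypothetical stand-ins, not the paper's actual neural models:

```python
# Toy sketch of sequence-level knowledge distillation for NAT training.
# In the paper the teacher is a trained autoregressive Transformer and the
# student is a NAT model; here both are fixed dictionary lookups.

TEACHER_LEXICON = {"bi": "I", "nom": "book", "unshina": "read"}

def teacher_translate(src_tokens):
    """Stand-in for the autoregressive teacher: emits the target
    sequentially, one position at a time."""
    out = []
    for tok in src_tokens:  # strictly left-to-right generation
        out.append(TEACHER_LEXICON.get(tok, "<unk>"))
    return out

def distill_corpus(src_corpus):
    """Sequence-level distillation: pair each source sentence with the
    teacher's output instead of the human reference, reducing the
    multi-modality the NAT student must fit."""
    return [(src, teacher_translate(src)) for src in src_corpus]

def nat_decode(src_tokens):
    """Stand-in for NAT decoding: every target position is predicted
    independently, so the sentence is emitted in one parallel step."""
    return [TEACHER_LEXICON.get(tok, "<unk>") for tok in src_tokens]

# Build the distilled training set the NAT student would be trained on.
distilled = distill_corpus([["bi", "nom", "unshina"]])
```

The design point is that the student never sees the original references: training on the teacher's smoother outputs is what lets the position-independent NAT decoder close part of the quality gap while keeping its parallel, single-pass generation.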

Key words: Transformer models, non-autoregressive Transformer(NAT), knowledge distillation, cross-language word embedding