LAMPAT: Low-Rank Adaption for Multilingual Paraphrasing Using Adversarial Training

Authors

  • Khoi M. Le, VinAI Research, Vietnam; Ho Chi Minh City University of Technology (HCMUT), VNU-HCM, Ho Chi Minh City, Vietnam
  • Trinh Pham, Ho Chi Minh City University of Technology (HCMUT), VNU-HCM, Ho Chi Minh City, Vietnam
  • Tho Quan, Ho Chi Minh City University of Technology (HCMUT), VNU-HCM, Ho Chi Minh City, Vietnam
  • Anh Tuan Luu, Nanyang Technological University, Singapore

DOI:

https://doi.org/10.1609/aaai.v38i16.29804

Keywords:

NLP: Machine Translation, Multilinguality, Cross-Lingual NLP, NLP: Generation

Abstract

Paraphrases are texts that convey the same meaning while using different words or sentence structures. Paraphrasing can serve as an automatic data augmentation tool for many Natural Language Processing tasks, especially for low-resource languages, where data shortage is a significant problem. To generate paraphrases in multilingual settings, previous studies have leveraged knowledge from the machine translation field, i.e., forming a paraphrase through zero-shot machine translation within the same language. Despite good performance in human evaluation, those methods still require parallel translation datasets, making them inapplicable to languages without parallel corpora. To mitigate that problem, we propose the first unsupervised multilingual paraphrasing model, LAMPAT (Low-rank Adaptation for Multilingual Paraphrasing using Adversarial Training), for which a monolingual dataset is sufficient to generate human-like and diverse sentences. Throughout our experiments, we found that our method not only works well for English but also generalizes to unseen languages. Data and code are available at https://github.com/phkhanhtrinh23/LAMPAT.
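The abstract names the two core components, low-rank adaptation (LoRA) and adversarial training on monolingual data. The short sketch below illustrates one plausible way to combine them: LoRA adapters on a frozen multilingual causal LM, trained with an FGM-style perturbation of the input embeddings under an unsupervised reconstruction objective. The backbone name, hyperparameters, and perturbation scheme are illustrative assumptions, not the authors' released implementation; see the linked repository for the official code.

# A minimal, illustrative sketch (not the authors' released implementation) of
# LoRA fine-tuning combined with an FGM-style adversarial perturbation on input
# embeddings, trained only on monolingual text. The backbone, hyperparameters,
# and perturbation scheme below are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "ai-forever/mGPT"  # assumed GPT-2-style multilingual backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

base_model = AutoModelForCausalLM.from_pretrained(model_name)
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["c_attn"],  # attention projection in GPT-2-style blocks
                      task_type="CAUSAL_LM")
model = get_peft_model(base_model, lora_cfg)  # only the low-rank adapters are trainable
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)

def training_step(batch_texts, epsilon=1e-2):
    """One clean + one adversarial forward/backward pass on a monolingual batch."""
    enc = tokenizer(batch_texts, return_tensors="pt", padding=True, truncation=True)
    input_ids, attention_mask = enc["input_ids"], enc["attention_mask"]
    labels = input_ids.masked_fill(attention_mask == 0, -100)  # ignore padding in the loss

    # Unsupervised objective: reconstruct the sentence from itself, so no parallel data is needed.
    embeds = model.get_input_embeddings()(input_ids).detach().requires_grad_(True)
    clean_loss = model(inputs_embeds=embeds, attention_mask=attention_mask, labels=labels).loss
    clean_loss.backward()  # fills embeds.grad and the adapters' gradients

    # FGM-style attack: perturb embeddings along the normalized loss gradient.
    grad = embeds.grad
    delta = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
    adv_embeds = (embeds + delta).detach()
    adv_loss = model(inputs_embeds=adv_embeds, attention_mask=attention_mask, labels=labels).loss
    adv_loss.backward()  # adds the adversarial gradient to the adapters

    optimizer.step()
    optimizer.zero_grad()
    return clean_loss.item(), adv_loss.item()

In this sketch, only the low-rank adapter weights are updated, so the frozen backbone keeps its multilingual knowledge while the combined clean and adversarial losses encourage outputs that remain faithful under small input perturbations.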

Published

2024-03-24

How to Cite

Le, K. M., Pham, T., Quan, T., & Luu, A. T. (2024). LAMPAT: Low-Rank Adaption for Multilingual Paraphrasing Using Adversarial Training. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 18435-18443. https://doi.org/10.1609/aaai.v38i16.29804

Issue

Vol. 38 No. 16 (2024)

Section

AAAI Technical Track on Natural Language Processing I