Abstract (EN):
To overcome the lack of annotated resources in less-resourced languages, unsupervised language adaptation methods have been explored. Building on multilingual word embeddings, Adversarial Training has been successfully employed across a variety of tasks and languages. With recent neural language models, empirical analysis on the task of natural language inference suggests that more challenging auxiliary tasks for Adversarial Training should be formulated to further improve language adaptation. We propose rethinking such auxiliary tasks for language adaptation. © 2021 ESANN Intelligence and Machine Learning. All rights reserved.
Language:
English
Type (Faculty Evaluation):
Scientific