Abstract (PT):
Abstract (EN):
Unsupervised language adaptation aims to improve the cross-lingual ability of models that are fine-tuned on a specific task and source language, without requiring labeled data in the target language. On the other hand, recent multilingual language models (such as mBERT) have achieved new state-of-the-art results on a variety of tasks and languages when employed in a direct transfer approach. In this work, we explore recently proposed unsupervised language adaptation methods, Adversarial Training and Encoder Alignment, to fine-tune language models on a specific task and language pair, showing that the cross-lingual ability of the models can be further improved. We focus on two conceptually different tasks, Natural Language Inference and Sentiment Analysis, and analyze the performance of the explored models. In particular, Encoder Alignment is the best approach in most of the settings explored in this work, underperforming only in the presence of domain shift between the source and target languages.
Language:
English
Type (Faculty Evaluation):
Scientific
Number of pages:
12