Title: Evaluation of large-scale Multilingual Generative Language Models
DNr: Berzelius-2024-95
Project Type: LiU Berzelius
Principal Investigator: Magnus Boman <magnus.boman@ki.se>
Affiliation: Kungliga Tekniska högskolan
Duration: 2024-03-08 – 2024-10-01
Classification: 10208
Homepage: https://www.kth.se/profile/gogoulou
Keywords:

Abstract

In the previous two projects, we studied the effect of language order and model size on model performance in the specific case of sequential pre-training on one language at a time. In this project, we will perform a systematic evaluation of language model performance, not only in terms of perplexity, as in the previous projects, but also in terms of downstream task performance. The first research question is how the order of pre-training languages and model size affect performance on various downstream tasks in the different languages the model is trained on. In addition, we want to compare the performance of multilingual language models that differ in their training scheme: the first scheme is joint pre-training on multiple languages, the standard approach for training multilingual language models; the second is sequentially pre-training the model on one language at a time, the method studied in our previous work. An additional factor we plan to investigate is language contamination and its effect on model performance.
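As a minimal sketch of the perplexity side of such an evaluation, assuming a Hugging Face causal language model (the gpt2 checkpoint below is a generic stand-in, not one of the project's own multilingual models), per-language perplexity can be computed from the model's mean token-level cross-entropy loss:

    # Illustrative sketch only: evaluates perplexity of a causal LM on a text.
    # "gpt2" is a placeholder checkpoint, not one of the project's models.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder; a multilingual checkpoint would go here
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    def perplexity(text: str) -> float:
        """Return exp of the mean token-level negative log-likelihood."""
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # With labels equal to input_ids, the model returns the mean
            # cross-entropy loss over the (shifted) target tokens.
            out = model(**enc, labels=enc["input_ids"])
        return float(torch.exp(out.loss))

    print(perplexity("A sample sentence in one of the evaluation languages."))

Running the same routine over held-out text in each pre-training language would give the per-language perplexity comparison described above; the downstream-task evaluation requires separate, task-specific harnesses.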