June 5-9, 2023, Paris (France)
Evaluating the Generalization Property of Prefix-based Methods for Data-to-text Generation
Clarine Vongpaseut 1,2,*, Alberto Lumbreras 1, Mike Gartrell 1, Patrick Gallinari 1,3
1: Criteo AI Lab, Paris
2: Département Ingénierie Mathématique et Informatique, École des Ponts ParisTech
3: Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, Centre National de la Recherche Scientifique (UMR 7222)
*: Corresponding author

Fine-tuning is the prevalent paradigm for adapting pre-trained language models to downstream tasks. Lightweight fine-tuning methods, such as prefix-tuning, tune only a small set of parameters, which reduces cost. These methods have been shown to achieve results comparable to full fine-tuning; however, performance can degrade when inputs move farther from the training domain. Moreover, recent work has questioned the efficiency of lightweight fine-tuning techniques depending on the task and the size of the model. In this paper, we evaluate the generalization properties of prefix-based methods for data-to-text generation in a multi-domain setting, as a function of the size of the pre-trained language model. We find that their performance depends heavily on model size.
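To make the setup concrete, below is a minimal sketch of prefix-tuning using the Hugging Face peft library. The base model (t5-small) and the prefix length are illustrative assumptions, not details taken from the paper.

# Minimal prefix-tuning sketch with Hugging Face `peft`.
# The model choice and prefix length below are assumptions for illustration.
from transformers import AutoModelForSeq2SeqLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Prefix-tuning freezes the pre-trained weights and learns only a small
# set of continuous "prefix" vectors prepended to each attention layer.
config = PrefixTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    num_virtual_tokens=20,  # learned prefix length (assumed value)
)
model = get_peft_model(base_model, config)

# Only a tiny fraction of the parameters is trainable (typically well
# under 1%), which is what makes the method "lightweight".
model.print_trainable_parameters()

The resulting model can be trained with a standard sequence-to-sequence loop; only the prefix parameters receive gradient updates.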

