arXiv Preprint
While pretrained language models have exhibited impressive generalization
capabilities, they still behave unpredictably under certain domain shifts. In
particular, a model may learn a reasoning process on in-domain training data
that does not hold for out-of-domain test data. We address the task of
predicting out-of-domain (OOD) performance in a few-shot fashion: given a few
target-domain examples and a set of models with similar training performance,
can we understand how these models will perform on OOD test data? We first benchmark
performance on this task using only model accuracy on the few-shot
examples, then investigate whether incorporating analysis of the models' behavior
via feature attributions can better tackle this problem. Specifically, we
explore a set of "factors" designed to reveal model agreement with certain
pathological heuristics that may indicate worse generalization capabilities. On
textual entailment, paraphrase recognition, and a synthetic classification
task, we show that attribution-based factors can help rank relative model OOD
performance. However, accuracy on a few-shot test set is a surprisingly strong
baseline, particularly when the system designer does not have in-depth prior
knowledge about the domain shift.
Prasann Singhal, Jarad Forristal, Xi Ye, Greg Durrett
2022-10-13
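
As a concrete illustration of the few-shot accuracy baseline described in the abstract, the sketch below ranks a set of candidate models by their accuracy on a handful of labeled target-domain examples. The model interface (a callable returning a predicted label) and the toy heuristics and data are hypothetical stand-ins for illustration, not the paper's released code.

```python
# Illustrative sketch only: rank candidate models by accuracy on a few labeled
# target-domain examples (the few-shot accuracy baseline from the abstract).
# Model interface and toy data below are hypothetical, not from the paper.
from typing import Callable, List, Sequence, Tuple

Example = Tuple[str, int]      # (input text, gold label)
Model = Callable[[str], int]   # maps an input to a predicted label


def few_shot_accuracy(model: Model, examples: Sequence[Example]) -> float:
    """Accuracy of `model` on a small set of labeled OOD examples."""
    correct = sum(model(x) == y for x, y in examples)
    return correct / len(examples)


def rank_models(models: List[Model], examples: Sequence[Example]) -> List[int]:
    """Return model indices sorted from highest to lowest few-shot accuracy,
    i.e., the predicted ordering of OOD performance."""
    scores = [few_shot_accuracy(m, examples) for m in models]
    return sorted(range(len(models)), key=lambda i: scores[i], reverse=True)


if __name__ == "__main__":
    # Toy stand-ins for trained models: a negation-word heuristic and a
    # sentence-length heuristic (hypothetical pathological shortcuts).
    heuristic_a: Model = lambda x: int("not" not in x)
    heuristic_b: Model = lambda x: int(len(x.split()) > 4)
    few_shot = [("the cat sat on the mat", 1), ("the cat did not sit", 0)]
    print(rank_models([heuristic_a, heuristic_b], few_shot))  # e.g. [0, 1]
```

The paper's contribution is to compare this kind of accuracy-only ranking against rankings derived from attribution-based "factors" that measure agreement with pathological heuristics; the sketch covers only the baseline side of that comparison.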