Title: Quantifying the Plausibility of Context Reliance in Neural Machine Translation
Authors: Gabriele Sarti, Grzegorz Chrupała, Malvina Nissim, Arianna Bisazza
Published: 2nd October 2023 (Monday) @ 13:26:43
Link: http://arxiv.org/abs/2310.01188v2
Abstract
Establishing whether language models can use contextual information in a human-plausible way is important to ensure their trustworthiness in real-world settings. However, the questions of when and which parts of the context affect model generations are typically tackled separately, with current plausibility evaluations being practically limited to a handful of artificial benchmarks. To address this, we introduce Plausibility Evaluation of Context Reliance (PECoRe), an end-to-end interpretability framework designed to quantify context usage in language models’ generations. Our approach leverages model internals to (i) contrastively identify context-sensitive target tokens in generated texts and (ii) link them to contextual cues justifying their prediction. We use PECoRe to quantify the plausibility of context-aware machine translation models, comparing model rationales with human annotations across several discourse-level phenomena. Finally, we apply our method to unannotated model translations to identify context-mediated predictions and highlight instances of (im)plausible context usage throughout generation.
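The contrastive identification in step (i) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name, threshold, and toy probabilities are all assumptions. The idea is to score each generated token by how much its probability shifts once the discourse context is supplied, flagging tokens whose contrastive score exceeds a cutoff.

```python
import math

def flag_context_sensitive(p_ctx, p_noctx, threshold=1.0):
    """Hypothetical sketch of step (i): flag target tokens whose
    probability changes notably when context is provided, using a
    per-token contrastive log-ratio score."""
    flagged = []
    for i, (pc, pn) in enumerate(zip(p_ctx, p_noctx)):
        score = math.log(pc) - math.log(pn)  # log p(token | ctx) - log p(token | no ctx)
        if abs(score) >= threshold:
            flagged.append((i, score))
    return flagged

# Toy per-token probabilities for the same generated sentence,
# scored once with and once without the preceding context.
p_with_context = [0.91, 0.40, 0.88, 0.95]
p_without_context = [0.90, 0.05, 0.85, 0.94]

print(flag_context_sensitive(p_with_context, p_without_context))
# Only token 1 shifts strongly, so only it is flagged.
```

Step (ii) would then attribute each flagged token back to specific context tokens (e.g., via gradient- or attention-based saliency), which this sketch omits.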
See also: 🐑🐑 PECoRe @ ICLR 2024 - resources for the paper “Quantifying the Plausibility of Context Reliance in Neural Machine Translation” (Sarti et al., 2024), published at ICLR 2024, put together by Gabriele Sarti.