Title: Improving Statistical Significance in Human Evaluation of Automatic Metrics via Soft Pairwise Accuracy
Authors: Brian Thompson, Nitika Mathur, Daniel Deutsch, Huda Khayrallah
Published: 15 September 2024
Link: http://arxiv.org/abs/2409.09598v2
Abstract
Selecting an automatic metric that best emulates human annotators is often non-trivial, because there is no clear definition of "best emulates." A meta-metric is required to compare the human judgments to the automatic metric scores, and metric rankings depend on the choice of meta-metric. We propose Soft Pairwise Accuracy (SPA), a new meta-metric that builds on Pairwise Accuracy (PA) but incorporates the statistical significance of both the human judgments and the metric scores. We show that SPA is more stable than PA with respect to changes in the number of systems/segments used for evaluation. We also show that PA can only assign a small set of distinct output values to metrics, and this results in many metrics being artificially assigned the exact same PA score. We demonstrate that SPA fixes this issue. Finally, we show that SPA is more discriminative than PA, producing more statistically significant comparisons between metrics. SPA was selected as the official system-level metric for the 2024 WMT Metrics Shared Task.
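The abstract describes SPA only at a high level. As a rough illustration of the idea, the sketch below computes SPA as the mean over system pairs of 1 - |p_human - p_metric|, where each p-value comes from a paired permutation test on segment-level scores. This is an assumption-laden sketch based on the abstract, not the paper's reference implementation; the function names, the specific significance test, and the permutation count are all illustrative choices.

```python
import itertools
import random


def pairwise_p_value(scores_a, scores_b, n_perm=1000, seed=0):
    """One-sided paired permutation test (illustrative choice of test):
    p-value for the hypothesis that system A outscores system B.

    Randomly flips the sign of each segment-level score difference and
    counts how often the permuted mean is at least the observed mean.
    """
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    observed = sum(diffs) / len(diffs)
    count = 0
    for _ in range(n_perm):
        perm_mean = sum(d if rng.random() < 0.5 else -d for d in diffs) / len(diffs)
        if perm_mean >= observed:
            count += 1
    return count / n_perm


def soft_pairwise_accuracy(human, metric, n_perm=1000):
    """SPA sketch: average over ordered system pairs of
    1 - |p_human(a, b) - p_metric(a, b)|.

    `human` and `metric` map system name -> list of segment-level scores
    (same segments, same order). Assumed formulation; see the paper for
    the exact definition.
    """
    systems = sorted(human)
    total, n_pairs = 0.0, 0
    for a, b in itertools.permutations(systems, 2):
        p_h = pairwise_p_value(human[a], human[b], n_perm)
        p_m = pairwise_p_value(metric[a], metric[b], n_perm)
        total += 1.0 - abs(p_h - p_m)
        n_pairs += 1
    return total / n_pairs
```

Under this formulation, a metric whose scores reproduce the human p-values exactly gets SPA = 1, and unlike PA, SPA degrades smoothly as the metric's confidence about each pair drifts away from the human confidence, which is what yields the larger set of distinct output values mentioned above.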