With a Little Help from the Authors: Reproducing Human Evaluation of an MT Error Detector

08/12/2023
by Ondřej Plátek, et al.

This work presents our efforts to reproduce the results of the human evaluation experiment presented in the paper by Vamvas and Sennrich (2022), which evaluated an automatic system for detecting over- and undertranslations (translations containing more or less information than the original) in machine translation (MT) outputs. Despite the high quality of the documentation and code provided by the authors, we discuss some problems we encountered in reproducing the exact experimental setup and offer recommendations for improving reproducibility. Our replicated results generally confirm the conclusions of the original study, but in some cases statistically significant differences were observed, suggesting high variability in human annotation.
