Using Sampling to Estimate and Improve Performance of Automated Scoring Systems with Guarantees

11/17/2021
by Yaman Kumar Singla, et al.

Automated Scoring (AS), the natural language processing task of scoring essays and speeches in an educational testing setting, is growing in popularity and being deployed across contexts from government examinations to companies providing language proficiency services. However, existing systems either forgo human raters entirely, thus harming the reliability of the test, or score every response by both human and machine, thereby increasing costs. We target the spectrum of possible solutions in between, making use of both humans and machines to provide a higher quality test while keeping costs reasonable to democratize access to AS. In this work, we propose a combination of the existing paradigms, intelligently sampling responses to be scored by humans. We propose reward sampling and observe significant gains in accuracy (19.80% increase on average) and quadratic weighted kappa (QWK) (25.60% increase on average) with a relatively small human budget (30% of responses) using our proposed sampling. The accuracy increases observed using standard random and importance sampling baselines are 8.6% and 12.2%, respectively. Furthermore, we demonstrate the system's model-agnostic nature by measuring its performance on a variety of models currently deployed in an AS setting as well as pseudo models. Finally, we propose an algorithm to estimate the accuracy/QWK with statistical guarantees. Our code is available at https://git.io/J1IOy.
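The abstract mentions three concrete pieces that lend themselves to a short sketch: splitting a fixed human budget between raters and the machine, evaluating with quadratic weighted kappa, and attaching a statistical guarantee to an accuracy estimate. The Python sketch below is illustrative only: hybrid_scores implements the uniform random-sampling baseline (not the paper's reward sampling, whose details are not given in the abstract), accuracy_with_guarantee uses a standard Hoeffding bound as one plausible form of guarantee, and all function names and the toy data are hypothetical.

import numpy as np
from sklearn.metrics import cohen_kappa_score

def hybrid_scores(machine_scores, human_scores, budget=0.3, rng=None):
    # Random-sampling baseline: route a `budget` fraction of responses to
    # human raters and keep machine scores for the rest. The paper's reward
    # sampling picks responses more intelligently than this uniform choice.
    if rng is None:
        rng = np.random.default_rng(0)
    scores = np.asarray(machine_scores).copy()
    human = np.asarray(human_scores)
    picked = rng.choice(len(scores), size=int(budget * len(scores)), replace=False)
    scores[picked] = human[picked]
    return scores

def accuracy_with_guarantee(correct, delta=0.05):
    # Hoeffding-style bound: with probability >= 1 - delta, the true
    # accuracy lies within +/- eps of the sample estimate.
    n = len(correct)
    est = float(np.mean(correct))
    eps = float(np.sqrt(np.log(2.0 / delta) / (2.0 * n)))
    return est, eps

# Toy data: integer essay scores on a 0-5 scale (hypothetical).
rng = np.random.default_rng(7)
human = rng.integers(0, 6, size=1000)                            # human rater scores
machine = np.clip(human + rng.integers(-1, 2, size=1000), 0, 5)  # noisy AS model

combined = hybrid_scores(machine, human, budget=0.3, rng=rng)
qwk = cohen_kappa_score(human, combined, weights="quadratic")
est, eps = accuracy_with_guarantee(combined == human, delta=0.05)
print(f"QWK={qwk:.3f}, accuracy={est:.3f} +/- {eps:.3f} (95% guarantee)")

Note that the Hoeffding interval here treats the scored responses as an i.i.d. sample; tighter or differently structured guarantees (e.g., for QWK rather than accuracy) would follow the algorithm the paper proposes.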

