Our Evaluation Metric Needs an Update to Encourage Generalization

07/14/2020
by   Swaroop Mishra, et al.
Models that surpass human performance on several popular benchmarks show significant degradation when exposed to out-of-distribution (OOD) data. Recent research has shown that such models overfit to spurious biases and "hack" datasets instead of learning generalizable features the way humans do. To curb the inflation in reported model performance, and with it the overestimation of AI systems' capabilities, we propose a simple and novel evaluation metric, the WOOD Score, that encourages generalization during evaluation.
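The core idea, evaluating on OOD data so that in-distribution accuracy alone cannot inflate a model's score, can be illustrated with a toy scorer. This is a hypothetical sketch only, not the paper's actual WOOD Score formula; the weighted-average form and the `ood_weight` value below are assumptions made for illustration.

```python
# Hypothetical sketch: combining in-distribution (ID) and out-of-distribution
# (OOD) accuracy into one score that rewards generalization.
# NOTE: this is NOT the paper's WOOD Score definition; the weighting scheme
# here is an illustrative assumption.

def accuracy(preds, labels):
    """Fraction of predictions matching the gold labels."""
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)

def combined_score(id_acc, ood_acc, ood_weight=0.7):
    """Weighted average that emphasizes OOD performance (weight is assumed)."""
    return (1 - ood_weight) * id_acc + ood_weight * ood_acc

# A model that aces the ID split but collapses on OOD data is ranked
# below a model with balanced performance on both splits.
overfit = combined_score(id_acc=0.95, ood_acc=0.40)
balanced = combined_score(id_acc=0.80, ood_acc=0.75)
```

Under this toy weighting, the balanced model outscores the one that overfits to the ID split, which is the behavior an OOD-aware metric is meant to induce.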
