Statistical Methods for Auditing the Quality of Manual Content Reviews
Large technology firms must moderate content on their online platforms for compliance with laws and policies. At the scale of billions of pieces of content per day, a combination of human and machine review is necessary to label content. Subjective judgment and human bias affect both the annotators who label content and the auditors who may be employed to evaluate the quality of those annotations against law and policy. To address this concern, this paper presents a novel application of statistical analysis methods to identify human error and the resulting sources of audit risk.
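The abstract does not name the specific statistical methods the paper applies. As an illustration of the kind of analysis involved in auditing manual reviews, the sketch below computes Cohen's kappa, a standard chance-corrected measure of agreement between two raters, here a hypothetical reviewer and auditor. The labels and names are assumptions for illustration, not the paper's data or method.

```python
"""Cohen's kappa for agreement between a content reviewer and an auditor.

A minimal sketch, assuming binary ok/violation labels; not the paper's method.
"""
from collections import Counter


def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two raters over the same items."""
    if len(labels_a) != len(labels_b) or not labels_a:
        raise ValueError("label sequences must be non-empty and equal length")
    n = len(labels_a)
    # Observed agreement: fraction of items where the raters match.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each rater's marginal label rates.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    if p_e == 1.0:
        return 1.0  # degenerate case: both raters always give the same label
    return (p_o - p_e) / (1 - p_e)


# Hypothetical audit sample: decisions on ten pieces of content.
reviewer = ["ok", "ok", "violation", "ok", "violation",
            "ok", "ok", "violation", "ok", "ok"]
auditor = ["ok", "violation", "violation", "ok", "violation",
           "ok", "ok", "ok", "ok", "ok"]
print(f"kappa = {cohens_kappa(reviewer, auditor):.3f}")  # kappa = 0.474
```

A kappa near 1 indicates strong agreement between reviewer and auditor, while a value near 0 means agreement is no better than chance, which in this setting would flag the review process (or the audit itself) as a source of risk.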