Diverse Misinformation: Impacts of Human Biases on Detection of Deepfakes on Networks
Social media users are not equally susceptible to all misinformation. We call “diverse misinformation” the complex relationships between human biases and the demographics represented in misinformation; its impact on our susceptibility is currently unknown. To investigate how users' biases affect susceptibility, we explore computer-generated videos called deepfakes as a type of diverse misinformation. We chose deepfakes as a case study for three reasons: (1) their classification as misinformation is more objective; (2) we can control the demographics of the persona presented; and (3) deepfakes are a real-world concern with associated harms that must be better understood. Our paper presents a survey (N = 2,000) in which U.S.-based participants are exposed to videos and asked questions about their attributes, without knowing that some of the videos are deepfakes. Our analysis investigates the extent to which different users are duped, and by which perceived demographics of deepfake personas. First, we find that users not explicitly looking for deepfakes are not particularly accurate classifiers. Importantly, accuracy varies significantly by demographics, and participants are generally better at classifying videos that match their own demographics (especially male, white, and young participants). We extrapolate from these results to the population level using an idealized mathematical model of the interplay between diverse misinformation and crowd correction. Our model suggests that a diverse set of contacts might provide “herd correction,” whereby friends can protect each other's blind spots. Altogether, human biases and the attributes of misinformation matter greatly, but having a diverse social group may help reduce susceptibility to misinformation.
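The survey data and the paper's network model are not reproduced here, but the herd-correction intuition can be sketched with a toy agent-based simulation. Everything below is our own illustrative construction, not the authors' model: the two-group network, the contact count `K`, and the accuracy parameters `P_MATCH` and `P_MISMATCH` are all assumptions chosen only to make the mechanism visible.

```python
import random

random.seed(1)

N = 200            # number of agents
K = 7              # contacts per agent (odd, so majority votes are decisive)
P_MATCH = 0.70     # assumed accuracy when the deepfake matches the viewer's demographic
P_MISMATCH = 0.40  # assumed accuracy on mismatched demographics
TRIALS = 100

def make_network(homophily):
    """Two demographic groups; each contact is drawn from the agent's own
    group with probability `homophily`, otherwise from the other group."""
    group = [i % 2 for i in range(N)]
    members = [[i for i in range(N) if group[i] == g] for g in (0, 1)]
    nbrs = []
    for i in range(N):
        own, other = members[group[i]], members[1 - group[i]]
        nbrs.append([random.choice(own if random.random() < homophily else other)
                     for _ in range(K)])
    return group, nbrs

def trial(homophily):
    group, nbrs = make_network(homophily)
    video = random.randrange(2)  # demographic of the deepfake persona
    # Step 1: independent classification, biased toward one's own demographic.
    ok = [random.random() < (P_MATCH if g == video else P_MISMATCH) for g in group]
    # Step 2: crowd correction -- each agent adopts the majority call of its contacts.
    fixed = [sum(ok[j] for j in nbrs[i]) > K // 2 for i in range(N)]
    matched = [fixed[i] for i in range(N) if group[i] == video]
    mismatched = [fixed[i] for i in range(N) if group[i] != video]
    return sum(matched) / len(matched), sum(mismatched) / len(mismatched)

for h in (0.9, 0.5):  # homophilous vs. diverse contact lists
    results = [trial(h) for _ in range(TRIALS)]
    acc_match = sum(r[0] for r in results) / TRIALS
    acc_mismatch = sum(r[1] for r in results) / TRIALS
    print(f"homophily={h}: matched-group accuracy={acc_match:.2f}, "
          f"mismatched-group accuracy={acc_mismatch:.2f}")
```

In this sketch, homophilous contact lists amplify the mismatched group's blind spot (majority voting among neighbors who are mostly wrong drags accuracy below the individual baseline), while diverse contact lists pull both groups toward the same, higher accuracy: the herd-correction intuition described in the abstract.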