Cyber Hate Classification: 'Othering' Language And Paragraph Embedding
Hateful and offensive language (also known as hate speech or cyber hate) posted and widely circulated via the World Wide Web can be considered a key risk factor for individual and societal tension linked to regional instability. Automated Web-based hate speech detection is important for observing and understanding trends in societal tension. In this research, we improve on existing work by proposing different data mining feature extraction methods. While previous work has used lexicons, bags-of-words or probabilistic parsing approaches (e.g. Typed Dependencies), they all suffer from a similar issue: hate speech is often subtle and indirect, and relying on individual words or phrases can lead to a significant number of false negatives. This problem motivated us to conduct new experiments to identify subtle language use, such as references to immigration or job prosperity in a hateful context. We propose a novel 'Othering Lexicon' to identify these subtleties, and we combine our lexicon with embedding learning for feature extraction and subsequent classification using a neural network approach. Our method first explores the context around othering terms in a corpus and identifies context patterns relevant to the othering context. These patterns are used along with the othering pronouns and hate speech terms to build our 'Othering Lexicon'. The embedding algorithm has the useful property that similar words lie closer together in the embedding space, which helps train our classifier on the negative and positive classes. For validation, several experiments were conducted on different types of hate speech, namely religion, disability, race and sexual orientation, with F-measure scores for classifying hateful instances obtained through applying our model of 0.93, 0.95, 0.97 and 0.92 respectively.
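To make the described pipeline concrete, the sketch below shows one plausible way to combine a small othering lexicon with paragraph embeddings and a neural classifier. This is a minimal illustration, not the authors' implementation: it assumes gensim's Doc2Vec for the paragraph embeddings and scikit-learn's MLPClassifier as the neural network, and the example texts and lexicon entries are invented for demonstration only.

```python
# Minimal sketch: othering-lexicon features + paragraph embeddings + neural classifier.
# Assumes gensim and scikit-learn; toy data is illustrative, not from the paper.
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

# Hypothetical mini 'Othering Lexicon': othering pronouns plus context patterns.
OTHERING_LEXICON = {"they", "them", "those people", "send them back", "not welcome"}

# Toy labelled corpus: 1 = hateful (othering context), 0 = non-hateful.
texts = [
    "they come here and take our jobs send them back",
    "those people are not welcome in this country",
    "we enjoyed the festival with friends from many countries",
    "the new community centre welcomes everyone",
]
labels = [1, 1, 0, 0]

def lexicon_features(text):
    """Count othering-lexicon hits in a document (simple substring match)."""
    lowered = text.lower()
    return np.array([sum(term in lowered for term in OTHERING_LEXICON)], dtype=float)

# Train paragraph embeddings (Doc2Vec) on the corpus.
tagged = [TaggedDocument(words=t.split(), tags=[i]) for i, t in enumerate(texts)]
d2v = Doc2Vec(vector_size=50, min_count=1, epochs=40)
d2v.build_vocab(tagged)
d2v.train(tagged, total_examples=d2v.corpus_count, epochs=d2v.epochs)

# Feature vector = paragraph embedding concatenated with lexicon-hit count.
X = np.vstack([
    np.concatenate([d2v.infer_vector(t.split()), lexicon_features(t)])
    for t in texts
])
y = np.array(labels)

# Simple feed-forward neural network classifier over the combined features.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X, y)

# Evaluate on the training toy data (real experiments would use held-out folds).
pred = clf.predict(X)
print("F-measure (toy data):", f1_score(y, pred))
```

In a realistic setting the lexicon would be induced from context patterns around othering terms in a large corpus, the embeddings trained on far more documents, and evaluation done with proper train/test splits per hate speech category (religion, disability, race, sexual orientation).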