Implicit Hate Speech – Non-Negative Stereotypes About Identity Groups
March 25th, 11:45–13:15, in S.1.05
Univ.-Ass. Tina Lommel, Bakk. MA (D!ARC / Computational Linguistics)
Abstract:
Hate speech is not always overtly offensive; it often appears in subtle, implicit forms. One such form is the seemingly neutral or even positive statement about an identity group that nevertheless reinforces stereotypes or conveys discriminatory meaning (e.g., "Women make good cooks"). This presentation explores this form of implicit hate speech and introduces a newly developed dataset that systematically captures such statements. It examines how different linguistic formulations affect the perception of offensiveness, highlighting when seemingly harmless statements may be considered problematic. It also analyzes how the perceived offensiveness of a stereotype shifts with context. Finally, approaches to the automated detection of such linguistic patterns are discussed, along with the challenges involved.
Tina Lommel is a PhD candidate in Computational Linguistics and works at the Digital Age Research Center (D!ARC) at the University of Klagenfurt. Her research focuses on improving the automatic detection of hate speech, including its implicit forms.