How to better track cyber hate: AI to the rescue
The widespread use of social media, sometimes under cover of anonymity, has liberated speech and led to a proliferation of ideas, discussions and opinions on the internet. It has also led to a flood of hateful, sexist, racist and abusive speech. Confronted with this phenomenon, more and more platforms now use automated solutions to combat cyber hate. These solutions rely on algorithms that can themselves introduce biases, sometimes discriminating against certain communities, and they remain far from perfect. In this context, French researchers are developing new, more accurate models to detect hate speech while reducing such biases.
On September 16 this year, internet users launched a movement calling for a one-day boycott of Instagram. Supported by many American celebrities, the “Stop Hate for Profit” day aimed to challenge Facebook, the parent company of the photo and video sharing app, over the proliferation of hate, propaganda and misinformation on its platforms. Back in May 2019, in its bi-annual report on the state of moderation on its network, Facebook announced significant progress in the automated detection of hateful content. According to the company, between January and April 2019, more than 65% of these messages were detected and moderated before users even reported them, compared with 38% over the same period in 2018.
Strongly encouraged to combat online hate content, in particular by France’s “Avia law” (named after Lætitia Avia, member of parliament for Paris), platforms use a range of techniques: keyword detection, user reporting and solutions based on artificial intelligence (AI). Machine learning allows predictive models to be built from corpora of data, and this is where biases can be damaging. “We realized that the automated tools themselves had biases related to gender or user identity and, most importantly, had a disproportionately negative impact on certain minority groups such as African Americans,” explains Marzieh Mozafari, a PhD student at Télécom SudParis. On Twitter, for example, it is difficult for AI-based programs to take into account, all at once, the social context of a tweet, the identity and dialect of its author and its immediate conversational context. Some content is thus removed despite being neither hateful nor offensive.
So how can these biases and erroneous detections be minimized without creating a form of censorship? Researchers at Télécom SudParis have been using a public dataset collected on Twitter that distinguishes tweets written in African-American English (AAE) from those in Standard American English (SAE), as well as two reference databases annotated (as sexist, racist, hateful or offensive) by experts and through crowdsourcing. “In this study, given the scarcity of data, we mainly relied on cutting-edge language processing techniques such as transfer learning and the BERT language model, a pre-trained, unsupervised model,” the researchers explain.
Developed by Google, BERT (Bidirectional Encoder Representations from Transformers) is trained on a vast corpus of text, including, among other things, the entire English-language Wikipedia. “We were able to ‘customize’ BERT [1] for a specific task, fine-tuning it on our hateful and offensive corpus,” explains Reza Farahbakhsh, a researcher in data science at Télécom SudParis. To begin with, the team identified word sequences in their datasets that were strongly correlated with a hateful or offensive category. Their results showed that tweets written in AAE were almost 10 times more likely to be classed as racist, sexist, hateful or offensive than tweets written in SAE. “We therefore used a reweighting mechanism to mitigate biases in the data and the algorithms,” says Marzieh Mozafari. For example, tweets containing “n*gga” and “b*tch” are 35 times more frequent among AAE speakers than among SAE speakers, and these tweets are often wrongly flagged as racist or sexist. Yet such words are common in AAE dialects and are used in everyday conversation; the same words are far more likely to be genuinely hateful or offensive when written in SAE by someone outside that community.
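The intuition behind such a reweighting mechanism can be sketched in a few lines of toy Python. This is a hypothetical illustration, not the authors’ implementation: tokens heavily over-represented in one dialect group yield a down-weight for the training examples they dominate, so that group-specific vocabulary alone cannot drive a hate label.

```python
from collections import Counter

def token_group_ratios(tweets_a, tweets_b, smoothing=1.0):
    """Smoothed frequency ratio of each token between two groups (A vs B)."""
    freq_a = Counter(tok for t in tweets_a for tok in t.split())
    freq_b = Counter(tok for t in tweets_b for tok in t.split())
    vocab = set(freq_a) | set(freq_b)
    n_a = sum(freq_a.values()) + smoothing * len(vocab)
    n_b = sum(freq_b.values()) + smoothing * len(vocab)
    return {tok: ((freq_a[tok] + smoothing) / n_a) /
                 ((freq_b[tok] + smoothing) / n_b) for tok in vocab}

def sample_weight(tweet, ratios, cap=10.0):
    """Down-weight a tweet whose most group-skewed token has ratio r > 1:
    its training weight becomes 1/r (neutral tweets keep weight 1.0)."""
    r = max((min(ratios.get(tok, 1.0), cap) for tok in tweet.split()),
            default=1.0)
    return 1.0 / r if r > 1.0 else 1.0
```

In this sketch a word 35 times more frequent in one group would see its examples contribute correspondingly less to training, reducing the false-positive pressure on that group’s everyday vocabulary.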
“In fact, these biases are also cultural: certain expressions considered hateful or offensive are not so within a certain community or in a certain context. In French, too, we use certain bird names as terms of endearment! Platforms thus face a dilemma: if the aim is to identify all hateful content perfectly, too many false detections could affect users’ ‘natural’ ways of expressing themselves,” explains Noël Crespi, a researcher at Télécom SudParis. After the reweighting mechanism reduced the influence of the most frequent words in the training data, the probability of such false positives dropped sharply. “Finally, we fed these results back into the pre-trained BERT model to refine it further on new datasets,” says the researcher.
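One plausible way such per-example weights enter fine-tuning is by scaling the loss, so down-weighted examples pull less on the model’s gradients. The NumPy sketch below is an assumption about the mechanism, not the paper’s exact training code; it stands in for the real BERT objective with a toy weighted cross-entropy.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def weighted_cross_entropy(logits, labels, weights):
    """Weighted mean cross-entropy: each example's loss is scaled by its
    bias-mitigation weight (1.0 = neutral, < 1.0 = down-weighted)."""
    p = softmax(logits)
    per_example = -np.log(p[np.arange(len(labels)), labels] + 1e-12)
    return float(np.sum(weights * per_example) / np.sum(weights))
```

Under this scheme, a tweet flagged by the reweighting step as dialect-skewed contributes less to the gradient than a neutral one, which is one standard way to bias-correct a classifier during training.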
Can automatic detection be scaled up?
Despite these promising results, many problems remain to be solved in order to detect hate speech better. One is whether these automated tools can be deployed for all the languages spoken on social networks. This issue is the subject of a data science challenge launched for the second consecutive year: HASOC (Hate Speech and Offensive Content Identification in Indo-European Languages), in which a team from IMT Mines Alès is participating. “The challenge comprises three tasks: determining whether or not content is hateful or offensive, classifying such content into one of three categories (hateful, offensive or obscene), and identifying whether the insult is directed at an individual or at a specific group,” explains Sébastien Harispe, a researcher at IMT Mines Alès.
“We are mainly focusing on these three tasks. Using our expertise in natural language processing, we have proposed an analysis method based on supervised machine-learning techniques that take advantage of examples and counter-examples of the classes to be distinguished.” Here, the researchers work with small datasets in English, German and Hindi. The team is studying in particular the role of emojis, some of which directly connote hate expressions. The researchers have also adapted various standard approaches from automatic language processing to obtain classifiers able to exploit such markers efficiently.
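To make the idea of emojis as classification markers concrete, here is a self-contained sketch: a tiny multinomial Naive Bayes classifier whose tokenizer peels emojis off as standalone tokens, so an emoji can carry class evidence on its own. The tokenizer heuristic, the class labels and the choice of Naive Bayes are all hypothetical, for illustration only, and are not the team’s actual method.

```python
import math
import unicodedata
from collections import Counter, defaultdict

def tokenize(text):
    """Whitespace-split, then emit emojis (Unicode category 'So') as
    separate tokens so they become features in their own right."""
    tokens = []
    for word in text.split():
        buf = ""
        for ch in word:
            if unicodedata.category(ch) == "So":  # 'Symbol, other': most emojis
                if buf:
                    tokens.append(buf)
                    buf = ""
                tokens.append(ch)
            else:
                buf += ch
        if buf:
            tokens.append(buf)
    return tokens

class NaiveBayes:
    """Multinomial Naive Bayes over mixed word + emoji tokens."""

    def fit(self, texts, labels):
        self.counts = defaultdict(Counter)   # per-class token counts
        self.class_counts = Counter(labels)  # class priors
        for text, y in zip(texts, labels):
            self.counts[y].update(tokenize(text))
        self.vocab = {t for c in self.counts.values() for t in c}
        return self

    def predict(self, text):
        def log_score(y):
            c, v = self.counts[y], len(self.vocab)
            total = sum(c.values())
            # Laplace-smoothed log-likelihood plus log prior
            return math.log(self.class_counts[y]) + sum(
                math.log((c[t] + 1) / (total + v)) for t in tokenize(text))
        return max(self.class_counts, key=log_score)
```

Trained on a handful of examples where an angry-face emoji co-occurs with offensive text, such a model learns the emoji itself as a marker, which is the kind of signal the researchers describe exploiting.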
They have also measured their classifiers’ ability to capture these markers, notably through their performance. “In English, for example, our model correctly classified content in 78% of cases, whereas only 77% of human annotators initially agreed on the annotation to give the content in the dataset used,” explains Sébastien Harispe. Indeed, in 23% of cases the annotators expressed divergent opinions when confronted with ambiguous content that probably needed to be assessed in light of its context.
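The comparison being made here, model accuracy against inter-annotator agreement, can be computed with two one-liners. A minimal sketch (hypothetical helper names, simple full-agreement rate rather than a chance-corrected statistic such as Cohen’s kappa):

```python
def agreement_rate(annotations):
    """Fraction of items on which all annotators gave the same label.
    `annotations` is a list of per-item label lists, one label per annotator."""
    return sum(len(set(labels)) == 1 for labels in annotations) / len(annotations)

def accuracy(gold, predicted):
    """Fraction of items the classifier labels identically to the gold standard."""
    return sum(g == p for g, p in zip(gold, predicted)) / len(gold)
```

When accuracy approaches the agreement rate, as in the 78% versus 77% figures quoted above, the model is performing at roughly the level of consensus the annotators themselves could reach, which is the “ceiling” argument developed below.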
What can we expect from AI? The researcher believes we face a complex question: what are we willing to accept in the use of this type of technology? “Although remarkable progress has been made in barely a decade of data science, we have to admit that this is a young discipline in which much remains to be developed theoretically and, above all, whose applications we must support to ensure ethical and informed use. Nevertheless, I believe that in the detection of hate speech there is a sort of glass ceiling, created by the difficulty of the task as it is reflected in our current datasets. In this particular respect, there can be no perfect or flawless system if we ourselves cannot be perfect.”
Besides the multilingual challenge, the researchers face other obstacles, such as the availability of data for model training and result evaluation, or the difficulty of assessing the ambiguity of certain content, due for example to variations in writing style. Finally, the very characterization of hate speech, subjective as it is, is itself a challenge. “Our work can provide material for the humanities and social sciences, which are beginning to address these questions: why, when, who, what content? What role does culture play in this phenomenon? The spread of cyber hate is ultimately less a technical problem than a societal one,” says Reza Farahbakhsh.
[1] M. Mozafari, R. Farahbakhsh, N. Crespi, “Hate Speech Detection and Racial Bias Mitigation in Social Media based on BERT model”, PLoS ONE 15(8): e0237861. https://doi.org/10.1371/journal.pone.0237861
Anne-Sophie Boutaud