EU Code of Conduct against illegal hate speech online: results remain positive but progress slows down
Today the European Commission released the results of its sixth evaluation of the Code of Conduct on countering illegal hate speech online. The results show a mixed picture: IT companies reviewed 81% of the notifications within 24 hours and removed an average of 62.5% of flagged content. These figures are lower than the averages recorded in 2019 and 2020. While some companies have improved, results for others have clearly worsened. As in previous monitoring rounds, a main weakness remains insufficient feedback to users' notifications. Finally, a novelty in this year's evaluation is the information provided by IT companies about the measures they have taken to counter hate speech, including actions to automatically detect such content.
Věra Jourová, Vice-President for Values and Transparency, said: “Hate speech online can lead to real harm offline. Violence often starts with words. Our unique Code has brought good results, but the platforms cannot let their guard down and need to address the gaps. And a gentlemen's agreement alone will not suffice here. The Digital Services Act will provide strong regulatory tools to fight against illegal hate speech online.”
Didier Reynders, Commissioner for Justice, added: “The results show that IT companies cannot be complacent: just because the results were very good in recent years, they cannot take their task less seriously. They have to address any downward trend without delay. It is a matter of protecting a democratic space and the fundamental rights of all users. I trust that a swift adoption of the Digital Services Act will also help solve some of the persisting gaps, such as the insufficient transparency and feedback to users.”
The sixth evaluation shows that on average:
IT companies assessed 81% of the notifications in less than 24 hours, which is lower than the 2020 average of 90.4%.
IT companies removed 62.5% of the content notified to them, which is lower than the average of 71% recorded in 2019 and 2020.
Removal rates varied depending on the severity of the hateful content. 69% of content calling for murder or violence against specific groups was removed, while 55% of content using defamatory words or pictures targeting certain groups was removed. By comparison, in 2020 the respective results were 83.5% and 57.8%.
IT companies gave feedback to 60.3% of the notifications received, which is lower than during the previous monitoring exercise (67.1%).
In this monitoring exercise, sexual orientation was the most commonly reported ground of hate speech (18.2%), followed by xenophobia (18%) and anti-gypsyism (12.5%).
For the first time, the IT companies reported detailed information about measures taken to counter hate speech outside the monitoring exercise, including their actions to automatically detect and remove content.
Next steps
The Commission will continue monitoring the implementation of the Code of Conduct. The Commission calls upon IT companies to reinforce their dialogue with trusted flaggers and civil society organisations in order to address the gaps in reviewing notifications and taking action, and to improve their feedback to users. The Digital Services Act (DSA) proposes a comprehensive legal framework for countering illegal content as well as a co-regulatory system that supports initiatives such as the Code of Conduct. The Commission aims to discuss with the IT companies how the Code could evolve, also in light of the upcoming obligations and the collaborative framework in the proposal for a Digital Services Act.