Panel digest: Artificial intelligence and the spread of hate speech and propaganda

16 May 2022
On 29 April 2022, GAAMAC co-organized a panel at the MIGS Forum on Artificial Intelligence (AI) and Human Rights. Three experts gave their views on the challenges and opportunities of AI and on its legislative regulation.

AI is rapidly taking a central role in our day-to-day life. While it is making the world a smarter, faster place, it also brings emerging threats to human rights standards and other disruptive effects. AI-enabled disinformation, the resulting spread of hate speech and propaganda, as well as key normative issues and missing policy remedies, are some of the challenges associated with emerging technologies.

Moderating the panel, GAAMAC Chair Silvia Fernández de Gurmendi introduced the experts from the GAAMAC community, who each highlighted a specific aspect of the topic:

  • Rastislav Šutek, Executive Director at Platform for Peace and Humanity
  • Raashi Saxena, Global Project Coordinator at The Sentinel Project
  • Lord Clement-Jones, Member of the UK Parliament and external expert collaborating with Parliamentarians for Global Action

Speaking first, Rastislav Šutek explained the role AI plays in both spreading and targeting online propaganda. He reminded participants that AI was already used to moderate content on social media platforms. He added that although Russian authorities’ clampdown on the digital space since February 2022 was particularly visible, it was indeed common for States to ask tech companies to remove content and information – often on the grounds of national security.

Regulation, over-regulation, under-regulation?

Historically, legal regulation against hate speech and propaganda has been relatively rare, not least because a balance always has to be struck with freedom of expression. However, there has been a recent revival of interest from States in regulating online content, both in regional organizations like the EU and at the domestic level. The Council of Europe is also currently working on an instrument regulating the use of AI.

Rastislav Šutek ended by highlighting some downsides of AI moderation: some blocked content may be valuable for evidentiary purposes in criminal trials; the lack of cultural context may result in excessive flagging of terms and content; and conversely, military jargon or coded language may elude automated monitoring.

AI is a necessary but imperfect monitoring tool

Next, Raashi Saxena took the floor to present the Hatebase Initiative. As the largest structured repository of multilingual, usage-based hate speech, it attempts to better understand how online behaviour can influence offline violence, and may be used to detect the early stages of genocide.

Given the volume and dissemination speed of online content, AI is a necessary tool for effective monitoring. Human monitoring is easily overwhelmed not just by the volume but also by the nature of the content, leading to mental health issues. Hours-long exposure to hateful content also raises questions about adequate compensation and working conditions.

What AI fails to do, on the other hand, is grasp cultural nuances and associations: the same terms can be considered hateful in one region of the world and not in another. The line with freedom of expression is blurry and, since not all unpopular content amounts to hate speech, human case-by-case arbitration is required.

To reflect socio-cultural specificities as accurately as possible, The Sentinel Project created the Hatebase Citizen’s Linguist Lab. This open-source, collaborative tool can be enriched by anybody worldwide, by adding or scoring terms. Measuring the frequency of their usage also allows The Sentinel Project to understand the social climate of certain areas.

The role of legislators in addressing online hate speech

Last but not least, Lord Clement-Jones spoke of his experience at the UK House of Lords in adopting social media regulation to prevent and address hate speech. His first warning was that legislators were merely “catching up” on the topic, due to the very rapid evolution of technology and AI in particular.

Lord Clement-Jones argued that online and offline hate speech could not be treated separately, as illustrated by the fact that the UK Law Commission’s work on hate crimes and the Government’s Online Safety Bill are coming together for debate this year. Liability and risk assessment should be guaranteed regardless of where the hate speech is disseminated.

A particularly complex area is disinformation or misinformation with a societal impact, in other words when the risk falls not on individuals but on democracy or security. The Online Safety Bill includes compulsory risk assessments and emphasizes that the very design of social platforms must not amplify such content.

Ways forward

Silvia Fernández de Gurmendi concluded the panel by summarizing the main take-aways: the need for a clearer definition of what constitutes harmful content; the importance of maintaining freedom of expression, including at a very localized, context-specific level; and the need for legislators to reflect technological advances in the law.

The AI and Human Rights Forum is organized every year by the Montreal Institute for Genocide and Human Rights Studies (MIGS), also a GAAMAC partner. The 2022 edition was held online on 27, 28 and 29 April.
