Artificial Intelligence, Extreme Speech, and the Challenges of Online Content Moderation

Online extreme speech has emerged as a significant challenge for democratic societies worldwide. Governments, companies, and academic researchers have responded by increasingly turning to Artificial Intelligence (AI) as a tool to detect, decelerate, and remove online extreme speech. In this policy brief, we outline the challenges facing AI-assisted content moderation and explain how the collaborative coding framework proposed by the ERC Proof-of-Concept project “AI4Dignity” addresses some of the pertinent issues in deploying AI for content moderation. The brief provides a short review of state regulations and corporate practices around AI and content moderation, highlights existing challenges, discusses what lessons can be learned from ongoing efforts, and underlines which new areas and questions should be charted as priorities. In the current context, where excitement about AI’s capacities has run up against anxieties about the technology’s development and deployment, this policy brief proposes ways to develop context-sensitive frameworks for AI-assisted content moderation that are centered on human collaboration. We recommend that social media companies and governments institutionalize the practice of reaching out to communities and bringing their feedback to bear on the future development of AI-assisted content moderation. Beyond company practices, collaborative models for identifying extreme speech that are independent of corporate and government spaces need to be fostered and supported.

To cite: Udupa, Sahana, Elonnai Hickok, Antonis Maronikolakis, Hinrich Schuetze, Laura Csuka, Axel Wisiorek, and Leah Nann. 2021. “Artificial Intelligence, Extreme Speech, and the Challenges of Online Content Moderation.” AI4Dignity Project. https://doi.org/10.5282/ubm/epub.76087.
