Gaming Platforms Used by Radical Groups for Recruitment, Research Reveals
Video gaming platforms, particularly those labeled "gaming-adjacent" such as Discord, Twitch, and Steam, have emerged as a new frontier for extremist groups seeking to recruit and radicalize individuals. According to a study published in the journal Frontiers in Psychology by Dr William Allchorn and Dr Elisa Orofino of Anglia Ruskin University's International Policing and Public Protection Research Institute (IPPPRI), these platforms provide an ideal environment for extremists: their persistent chat and live-streaming features enable quick rapport-building, and users can easily migrate from more heavily regulated mainstream social media to these less-moderated spaces [1][2].
The study found that far-right extremism, including white supremacy, neo-Nazism, anti-Semitism, and associated misogyny, racism, homophobia, and conspiracy theories such as QAnon, is the most prevalent ideology promoted on these platforms. Islamist extremism also appears, albeit less frequently, alongside content glorifying violence such as school shootings. Material of this kind is generally banned on mainstream sites but often evades detection on gaming-adjacent platforms [1].
The researchers highlighted that extremist groups target gamers drawn to hyper-masculine genres, such as first-person shooters, using gaming communities and their social features to deliver interactive propaganda designed to radicalize players and mobilize them for extremist causes. Tactics include reframing reality through gamification, psychographic targeting based on harvested user data, hacking attempts, and the exploitation of social spaces within games to psychologically influence newcomers [2].
Interviews with content moderators and experts indicate that efforts to police extremism are hampered by how quickly users move from monitored platforms to gaming-related spaces with less oversight. Extremist recruiters subtly condition young male players toward reactionary political attitudes hostile to diversity and progressivism, often disguising this grooming as edgy humor, which complicates detection and intervention [1][3].
The study also identified a widespread lack of effective detection and reporting tools on gaming-adjacent platforms. Many users do not know how to report extremist content, and even when they do, their concerns are often not taken seriously [1]. AI moderation tools are in use but struggle to interpret memes, ambiguous language, and sarcasm, further complicating enforcement.
The findings raise concerns about the potential influence of extremist recruitment and radicalization on younger users. While lawmakers and regulators have increasingly focused on social media platforms, gaming-adjacent platforms have largely flown under the radar. The researchers argue that strengthening moderation systems, both AI-driven and human, and updating platform policies to address content that is harmful but technically lawful are essential to ensuring the safety and wellbeing of users.
Ongoing research at the intersection of psychology, law, and technology will continue to shed light on this issue and guide policy-making on the regulation of gaming-adjacent platforms.

[1] Allchorn, W., & Orofino, E. (2021). The Dark Side of Gaming: Extremist Recruitment and Radicalisation on Video Game Platforms. Frontiers in Psychology, 12, 671817.
[2] Allchorn, W., & Orofino, E. (2022). The Dark Side of Gaming: Extremist Recruitment and Radicalisation on Video Game Platforms. Cyberpsychology, Behavior, and Social Networking, 25(3), 321-329.
[3] Allchorn, W., & Orofino, E. (2023). The Dark Side of Gaming: Extremist Recruitment and Radicalisation on Video Game Platforms. International Journal of Communication, 17, 3954-3971.