The dystopian rise of videogame censorship

Call of Duty players will soon be subjected to AI-driven speech surveillance.

Thomas Osborne

Topics Free Speech USA World


Online censorship is now coming for gamers. Videogame publisher Activision has announced that it will introduce an invasive new system of online censorship to its hugely popular Call of Duty franchise.

In a blog post last month, Activision set out the details of its new partnership with the artificial-intelligence company Modulate. Their aim is to deliver ‘real-time voice chat moderation, at-scale’ in online gaming matches. Apparently, this is needed to stamp out the allegedly rampant menace of ‘toxic’ behaviour among gamers.

Of course, online gaming is not all sweetness and light. Call of Duty titles allow gamers to shoot and kill each other in online battlefields. Much of its success can be attributed to its online matchmaking system, which also allows players to easily communicate with each other, both within and between matches. And yes, part of the fun of this is that gamers often insult and trash talk each other. While it is true that some players can be vile and bigoted at times, this kind of behaviour is normally met with insults in kind. The worst verbal bruisings tend to be reserved for those eejits who deserve them.

In an attempt to get rid of the most abusive players, Activision is taking some incredibly draconian steps that will limit the free speech of all Call of Duty fans. Modulate’s proactive voice-moderation tool – ToxMod – will scan and supposedly identify an increasingly broad, and endlessly vague, spectrum of ‘toxic speech’. So-called hate speech, discriminatory language and harassment are all listed as forbidden, for instance. It will not only report to Activision which words players use, but also the tone and intent behind them.

The tool is set to go online next month. For now, the AI itself will not have the power to ban players from Call of Duty games. Those decisions will still be taken by human moderators. But a system that makes censorship more ‘efficient’, by flagging up supposedly toxic speech in real time, is still a dystopian prospect.

Attempts to curb ‘toxicity’ in gaming are nothing new. Players have long been able to manually report abuse from other players to moderators. But there has been a noticeable escalation of censorship over the past decade or so. Earlier this year, videogame publisher Ubisoft, best known for the Far Cry franchise, announced a partnership with Northumbria Police in the UK that would unite specialist police officers with moderators to tackle ‘toxic’ gaming culture. And in 2021, Intel was widely ridiculed when it introduced Bleep, a software tool that allows gamers to filter out so-called hate speech, according to their propensity to be offended. Bleep features a sliding scale that lets players choose how much of each category of ‘toxic’ speech they’re open to hearing, including ‘none’, ‘some’, ‘most’ or ‘all’.

Activision’s new AI censorship tool is on an altogether larger scale than these initiatives. Millions of Call of Duty players will soon have their speech monitored when the system goes online. This mass surveillance – and mass censorship – is bound to rob gaming of the spontaneity and escapism that make it worthwhile.

Gamers need to stand up to this attack on their freedoms.

Thomas Osborne is an intern at spiked.

Picture by: Getty

To enquire about republishing spiked’s content, a right to reply or to request a correction, please contact the managing editor, Viv Regan.

