Valorant begins saving voice communications to mitigate abuse

Riot has announced an update to the privacy policy covering all of its online games, allowing it to begin storing voice data. The online multiplayer game Valorant will be the first to test whether the new system actually works. Riot says it will begin moderating voice communications to validate reports of abusive and toxic behavior.

TechCrunch’s report details the voice moderation system Riot plans to implement. Audio data is stored by region and pulled when a report is submitted. According to Riot, the audio is evaluated for violations of the Code of Conduct, and if a violation is found, the player in question is given the opportunity to review it. After that, the recording is deleted. If no violation is found, the audio is deleted as well.

Riot told TechCrunch that the system for monitoring voice communications is still under development and could take the form of voice-to-text transcription or perhaps machine learning. Modulate’s ToxMod software can already “listen” to human speech and recognize specific words, phrases, or common abusive language, and Riot may adopt a similar AI-driven solution for voice moderation.

Valorant executive producer Anna Donlon says abusive behavior is a “major problem” in competitive online games.

“If you don’t know that, you may not be suffering from the in-game abuse that so many are,” she wrote in a tweet today. “I read and listen to the behavior people report. I hear it myself in the game. Stop telling me to ‘just mute.’ Why don’t abusers ‘just mute’ themselves? This is a meaningful step, and one of the many steps we all have to take.”

