Riot and Ubisoft partnering on research to create "more positive gaming communities"
By improving AI-based moderation tools.
Riot Games and Ubisoft have announced a partnership that'll see the companies collaborating on a research project with the aim of creating "more positive gaming communities".
As explained in a blog post on Riot's website, the project - which goes by the name Zero Harm in Comms - is the first step in a cross-industry initiative that'll see the two companies working on a database to collect in-game data.
This anonymised data - consisting of chat logs labelled by behaviour, from neutral to racist and sexist - will be used to better train AI-based preemptive moderation tools, helping them more effectively detect and mitigate disruptive behaviour in-game.
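Neither company has published the database schema or the models involved, but as a purely illustrative sketch, behaviour-labelled chat lines of this kind could feed a basic text classifier along these lines (the labels, example messages and scikit-learn pipeline below are assumptions for illustration, not Riot's or Ubisoft's actual implementation):

```python
# Illustrative sketch only: the label set, example chat lines and model
# are hypothetical stand-ins, not the Zero Harm in Comms pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical anonymised chat logs, each labelled by behaviour category.
chat_logs = [
    ("gg everyone, nice match", "neutral"),
    ("anyone want to push mid?", "neutral"),
    ("uninstall the game, you are useless", "harassment"),
    ("report this idiot right now", "harassment"),
]

texts = [text for text, _ in chat_logs]
labels = [label for _, label in chat_logs]

# A simple TF-IDF + logistic regression pipeline stands in for whatever
# models would actually be trained on the shared database.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# With a real corpus, predictions like this could flag disruptive messages.
print(model.predict(["you are useless, just uninstall"]))  # likely ['harassment']
```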
"With Ubisoft's wide catalog of popular games and Riot's highly competitive titles," the post explains, "the resulting database of this partnership should cover a wide range of players and use cases to better train AI systems to detect and mitigate harmful behaviour."
Riot suggests better AI systems capable of automatically detecting harmful behaviour will become increasingly valuable "as games become more and more popular around the world" and the challenges of online moderation grow.
"We really recognised that this is a bigger problem than one company can solve,” Riot Games' head of tech research Wesley Kerr said in a separate post on Ubisoft's website, "and so how do we come together and start getting a good handhold on the problem we're trying to solve? How can we go after those problems, and then further push the entire industry forward?".
The Zero Harm in Comms project is said to have been in the works for around six months now, and the two companies are planning to share their learnings - as well as any potential next steps - with the broader industry next year.
Today's news follows the publication of Xbox's first-ever Digital Transparency Report earlier this week, in which Microsoft also highlighted the importance of automated moderation tools.