Ubisoft and Riot Games have announced Zero Harm in Comms, a new research project that aims to tone down toxic rhetoric by using artificial intelligence.
Announced yesterday and aimed at reducing the harm caused by toxic teammates and online aggravation, this new project is a technological partnership focused on delivering artificial intelligence-based solutions to prevent harmful player interactions. In practice, that means two of the world's biggest gaming companies will collaborate to create game structures that foster more rewarding social experiences and discourage harmful interactions.
The deliverable outcome, in terms of data or software, isn't quite defined yet:
“Disruptive behavior isn’t a problem that is unique to games – every company that has an online social platform is working to address this challenging space. That is why we’re committed to working with industry partners like Ubisoft who believe in creating safe communities and fostering positive experiences in online spaces,” said Wesley Kerr, Head of Technology Research at Riot Games. “This project is just an example of the wider commitment and work that we’re doing across Riot to develop systems that create healthy, safe, and inclusive interactions with our games.”
That said, it's a pertinent project. Riot is the studio behind League of Legends, an utterly massive MOBA that became synonymous with toxic player behaviour. Riot has been tackling this image for some time now, and while community teams and in-game mechanics try to foster good relations, the scale of the game's player base can make policing toxic teammates problematic. While this announcement doesn't do much for Ubisoft's own problematic corporate image, it's always welcome to see strides being made that help make gaming a safer space for everyone. To find out more about the Zero Harm in Comms project, check out the blog on the Riot website.