Curbing Online Abuse Isn't Impossible. Here's Where We Start

Riot Games, makers of League of Legends, tried to combat harassment by analyzing behavioral profiles for tens of millions of users.

“Fucking dumb bitch,” the message began, then went on to detail the manner in which Jenny Haniver should be sexually assaulted and murdered. Haniver (her gaming name, not her real one) found it in the voicemail of her Xbox account, left by a male competitor in the online combat game Call of Duty: Modern Warfare 3. For Haniver, this was far from an isolated incident. In another match, after an opponent asked if she was menstruating and opined that “girls” played videogames only for attention, he left Haniver a voicemail that said, “I’m gonna impregnate you with triplets and then make you have a very late-term abortion.” For three and a half years, Haniver has kept track of the invective heaped on her in multiplayer games, posting some 200 incidents on her blog so far.

Haniver, of course, is not alone—harassment on the Internet is ubiquitous, particularly for women. In a 2013 Pew Research survey, 23 percent of people ages 18 to 29 reported being stalked or harassed online; advocacy groups report that around 70 percent of the cases they deal with involve female victims, and one study of online gaming found that players with female voices received three times as many negative responses as those with male voices.

Too often, though, we talk about online abuse like we talk about bad weather: We shake our heads, shrug, and assume there’s nothing we can do. The behavior is so prevalent that it’s seen as an inextricable part of online culture. As a widely read article in January’s Pacific Standard noted, “Internet harassment is routinely dismissed as ‘harmless locker-room talk,’ perpetrators as ‘juvenile pranksters,’ and victims as ‘overly sensitive complainers.’” What else, in other words, would you expect from the Internet? But the Internet is now where we socialize, where we work. It’s where we meet our spouses, where we build our reputations. Online harassment isn’t just inconvenient, nor is it something we can walk away from with ease. It’s abhorrent behavior that has real social, professional, and economic costs. And the big social networks where most Americans spend time online—Facebook, YouTube, Twitter, and the rest—aren’t doing nearly enough to address the problem.

The good news, though, is that Internet harassment can be combated and reduced. While the problem is far from solved, a few online communities—especially in the world of multiplayer gaming, which has long struggled with issues of incivility and abuse—have come up with some innovative techniques to deter harassers and sometimes even reform them. If Facebook and the other social networks were to take a page from these approaches, they could make huge strides in turning the Internet into a less toxic place for everyone. But embracing their lessons would also require a whole new way of thinking about online behavior.

Boasting more than 67 million active players each month, the battle-arena game League of Legends is perhaps the most popular videogame in the world. But two years ago its publisher, Riot Games, noticed that a significant number of players had quit the game and cited noxious behavior as the reason. In response, the company assembled a “player behavior team,” bringing together staff members with PhDs in psychology, cognitive science, and neuroscience to study the issue of harassment by building and analyzing behavioral profiles for tens of millions of users.

This process led them to a surprising insight—one that “shaped our entire approach to this problem,” says Jeffrey Lin, Riot’s lead designer of social systems, who spoke about the process at last year’s Game Developers Conference. “If we remove all toxic players from the game, do we solve the player behavior problem? We don’t.” That is, if you think most online abuse is hurled by a small group of maladapted trolls, you’re wrong. Riot found that persistently negative players were only responsible for roughly 13 percent of the game’s bad behavior. The other 87 percent was coming from players whose presence, most of the time, seemed to be generally inoffensive or even positive. These gamers were lashing out only occasionally, in isolated incidents—but their outbursts often snowballed through the community. Banning the worst trolls wouldn’t be enough to clean up League of Legends, Riot’s player behavior team realized. Nothing less than community-wide reforms could succeed.

Some of the reforms Riot came up with were small but remarkably effective. Originally, for example, the game allowed opposing teams to chat with each other during play by default, but that chat often spiraled into abusive taunting. So in one of its earliest experiments, Riot turned the chat function off by default but allowed players to turn it on if they wanted. The impact was immediate. A week before the change, players reported that more than 80 percent of chat between opponents was negative. But a week after switching the default, negative chat had decreased by more than 30 percent while positive chat increased nearly 35 percent. The takeaway? Creating a simple hurdle to abusive behavior makes it much less prevalent.
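It’s worth pausing on how small that hurdle really is. The sketch below is purely illustrative—the class and function names are invented for this article, not taken from Riot’s code—but it captures the shape of the design: cross-team chat defaults to off, and a taunt only reaches an opponent who has deliberately opted in to hear it.

```python
from dataclasses import dataclass


@dataclass
class ChatSettings:
    """Hypothetical per-player chat preferences (illustrative only, not Riot's code)."""

    # The key design decision: chat with opponents is OFF unless a player opts in.
    cross_team_chat_enabled: bool = False

    def enable_cross_team_chat(self) -> None:
        """Players who actually want to talk to opponents can flip the switch."""
        self.cross_team_chat_enabled = True


def deliver_cross_team_message(sender: ChatSettings, recipient: ChatSettings, message: str) -> bool:
    """Deliver opponent chat only when both sides have opted in."""
    if sender.cross_team_chat_enabled and recipient.cross_team_chat_enabled:
        print(message)
        return True
    return False  # the default path: the taunt never reaches its target


# With untouched defaults, the message simply isn't delivered.
alice, bob = ChatSettings(), ChatSettings()
deliver_cross_team_message(alice, bob, "gg ez")  # returns False
```

The abuse isn’t forbidden outright; it just stops being the path of least resistance.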

The team also found that it’s important to enforce the rules in ways that people understand. When Riot’s team started its research, it noticed that the recidivism rate was disturbingly high; in fact, based on the number of reports per day, some banned players were actually getting worse after their bans than they were before. At the time, players were informed of their suspension via emails that didn’t explain why the punishment had been meted out. So Riot decided to try a new system that specifically cited the offense. This led to a very different result: Now when banned players returned to the game, their bad behavior dropped measurably.

All of these tactics helped League of Legends redefine its community norms, the shared beliefs about how people are expected to behave. Norms are what we express when we teach children about “inside” and “outside” voices, and why you can yell profanity at your videogame but not at your boss.

Shouting racial slurs and rape threats at someone in public often has consequences, but on the Internet—at least in communities unable or unwilling to enforce civilized social norms—it almost never does. For example, Twitter does not consider a threat against another user a violation of its terms of service unless that threat is “direct and specific.” As one of the company’s PR representatives, Jim Prosser, explains, “It’s not just that something should happen to you; it’s that something is going to happen to you. Where it will happen, from what, with what. Rather than just ‘I hate you, go die in a fire.’ You have something more specific there.”

The problem, of course, is that telling a woman you want to rape and kill her—or even that you merely hope she gets raped and killed—tends to silence her and drive her offline, even if you fail to specify that you’ll use the candlestick in the conservatory. “I reported a threat that said, ‘I will rape you when I get the chance,’” says Anita Sarkeesian, a media critic who has been attacked repeatedly by cybermobs. “I got a response from Twitter stating, ‘The reported account is currently not in violation of the Twitter Rules at this time.’ They continued to suggest that this tweet doesn’t ‘meet the criteria of an actionable threat.’ So according to Twitter, rape threats are only a problem if women can prove beyond a shadow of a doubt that an attack will occur? That’s ridiculous.”

Really, freedom of speech is beside the point. Facebook and Twitter want to be the locus of communities, but they seem to blanch at the notion that such communities would want to enforce norms—which, of course, are defined by shared values rather than by the outer limits of the law. Social networks could take a strong and meaningful stand against harassment simply by applying the same sort of standards in their online spaces that we already apply in our public and professional lives. That’s not a radical step; indeed, it’s literally a normal one. Wishing rape or other violence on women or using derogatory slurs, even as “jokes,” would never fly in most workplaces or communities, and those who engaged in such vitriol would be reprimanded or asked to leave. Why shouldn’t that be the response in our online lives?

To truly shift social norms, the community, by definition, has to get involved in enforcing them. This could mean making comments of disapproval, upvoting and downvoting, or simply reporting bad behavior. The best online forums are the ones that take seriously their role as communities, including the famously civil MetaFilter, whose moderation is guided by a “don’t be an asshole” principle. On a much larger scale, Microsoft’s Xbox network implemented a community-powered reputation system for its new Xbox One console. Using feedback from players, as well as a variety of other metrics, the system determines whether a user gets rated green (“Good Player”), yellow (“Needs Improvement”), or red (“Avoid Me”).
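Microsoft hasn’t published its formula, but the shape of such a system is easy to sketch. The snippet below is a loose illustration, not the Xbox implementation; the weights, thresholds, and names are assumptions made for this article. The idea is simply that community feedback accumulates into a score, and the score lands a player in one of the three tiers.

```python
from enum import Enum


class ReputationTier(Enum):
    GOOD_PLAYER = "green"         # "Good Player"
    NEEDS_IMPROVEMENT = "yellow"  # "Needs Improvement"
    AVOID_ME = "red"              # "Avoid Me"


def reputation_tier(games_played: int, abuse_reports: int, commendations: int) -> ReputationTier:
    """Hypothetical bucketing; a real system would also weight report credibility, recency, etc."""
    if games_played == 0:
        return ReputationTier.GOOD_PLAYER  # new players start in good standing
    # Net complaints per game, with positive feedback offsetting reports.
    score = (abuse_reports - 0.5 * commendations) / games_played
    if score > 0.25:
        return ReputationTier.AVOID_ME
    if score > 0.05:
        return ReputationTier.NEEDS_IMPROVEMENT
    return ReputationTier.GOOD_PLAYER


# A player reported in a quarter of their games, with no commendations, goes yellow.
print(reputation_tier(games_played=100, abuse_reports=25, commendations=0).value)
```

Whatever the exact math, the judgment comes from the people a player actually plays with, which is precisely what makes it a community norm rather than a decree from above.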

In another initiative by Riot’s player behavior team, League of Legends launched a disciplinary system called the Tribunal, in which a jury of fellow players votes on reported instances of bad behavior. Empowered to issue everything from email warnings to longer-term bans, users have cast tens of millions of votes about the behavior of fellow players. When Riot asked its staff to audit the verdicts, it found that the staff unanimously agreed with users in nearly 80 percent of cases. And this system is not just punishing players; it’s rehabilitating them, elevating more than 280,000 censured gamers to good standing. Riot regularly receives apologies from players who have been through the Tribunal system, saying they hadn’t understood how offensive their behavior was until it was pointed out to them. Others have actually asked to be placed in a Restricted Chat Mode, which limits the number of messages they can send in games, forcing them to choose between communicating with their teammates and harassing others.
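To make the Tribunal concrete, here is a rough, hypothetical sketch of how a jury system like this could turn votes into consequences. The supermajority threshold, the penalty ladder, and the function names are our own assumptions for illustration, not Riot’s published design.

```python
from typing import Optional

# Hypothetical escalation ladder, roughly mirroring the range of penalties
# described above (warnings through longer-term bans). Not Riot's actual system.
PENALTY_LADDER = ["email_warning", "restricted_chat_mode", "short_ban", "long_ban"]


def tribunal_verdict(punish_votes: int, pardon_votes: int, prior_offenses: int,
                     threshold: float = 0.7) -> Optional[str]:
    """Return a penalty if a supermajority of jurors votes to punish, else None."""
    total = punish_votes + pardon_votes
    if total == 0 or punish_votes / total < threshold:
        return None  # pardoned: no action taken
    # Repeat offenders climb the ladder; first-time offenders get a warning.
    step = min(prior_offenses, len(PENALTY_LADDER) - 1)
    return PENALTY_LADDER[step]


# Example: 18 of 20 jurors vote to punish a second-time offender.
print(tribunal_verdict(punish_votes=18, pardon_votes=2, prior_offenses=1))
# -> "restricted_chat_mode"
```

The details matter less than the structure: the community renders the verdict, and the penalties escalate only when the behavior doesn’t change.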

A telling example is Riot’s most famous and high-profile punishment thus far, which targeted a player in the professional League of Legends community (where top performers can pull down six figures a year). After repeated Tribunal punishments, and with a harassment score placing him among the worst 0.7 percent of North American players, Christian Rivera was banned for a year from competitive play. Even more telling was Rivera’s later epiphany: “It took Riot’s interjection for me to realize that I could be a positive influence, not just in League but with everything,” he said in a subsequent interview. “I started to enjoy the game more, this time not at anyone’s expense.”

What would our social networks look like if their guidelines and enforcement reflected real-life community norms? If Riot’s experiments are any guide, few users would deem acceptable the kind of casual abuse that’s driving so many people out of online spaces. Think about how social networks might improve if—as on the gaming sites and in real life—users had more power to reject abusive behavior. Of course, different online spaces will require different solutions, but the outlines are roughly the same: Involve users in the moderation process, set defaults that create hurdles to abuse, give clearer feedback for people who misbehave, and—above all—create a norm in which harassment simply isn’t tolerated.

Ultimately, online abuse isn’t a technological problem; it’s a social problem that just happens to be powered by technology. The best solutions are going to be those that not only defuse the Internet’s power to amplify abuse but also encourage crucial shifts in social norms, placing bad behavior beyond the pale. When people speak up about online harassment, one of the most common responses is “Well, what did you expect from the Internet?” If we truly want to change our online spaces, the answer from all of us has got to be: more.