No Matter What Reddit Does, It's Going to Alienate People

The wisdom of the crowd was supposed to reign at Reddit. But often, the crowd turned into a mob.

User-moderated social platforms promise the best of the Internet: democratic, self-regulating online communities where everyone has an equal voice. But the reality is often far different. These sites frequently showcase the Internet at its worst: harassment and trolling abound, and enforcement of community standards can seem nonexistent.

Few sites embody this dichotomy more than Reddit. The site was founded on the principles of openness and democratic process as a place where the wisdom of the crowd would reign. Too often, however, the crowd turns into a mob. Now that Reddit is cracking down, many in the crowd are angry.

Last month, the site announced an update to its anti-harassment policies to explicitly ban systematic attacks against individual users. Last week, Reddit (in which WIRED parent company Advance Publications owns a stake) took its first actions to enforce those policies. On Wednesday, Reddit banned five forums, shuttering r/fatpeoplehate (which, as its name suggests, gathered comments mocking fat people and had accumulated more than 150,000 subscribers) and four other less trafficked hate forums.

"We want to be open about our involvement: We will ban subreddits that allow their communities to use the subreddit as a platform to harass individuals when moderators don't take action," Reddit wrote in a post announcing the bans. "We're banning behavior, not ideas."

A company spokeswoman further explained to WIRED that members of the five subreddits were harassing people on and off Reddit, following individuals to harass them on other sites including Twitter and Facebook, as well as via email and more.

The response from the Reddit community was swift and furious. "I predict this isn't going to go down well," one commenter wrote in response to the announcement. "What happened to this?" asked another, pointing to a section on Reddit’s Wikipedia page that describes Reddit's free speech ideals. (It reads, in part, "Reddit does not ban communities solely for featuring controversial content. … It is not Reddit’s place to censor its users.")

Some users blamed interim CEO Ellen Pao for the changes, seeking to tie the new standards to Pao's gender discrimination lawsuit against her former employer, venture capital firm Kleiner Perkins Caufield & Byers. Others said banning isn't necessary because offended users could simply avoid looking at content they didn't want to see.

"I’m overweight and frequently offended by FPH on Reddit, so I blocked it," said another Redditor. "It being banned is ridiculous."

Human Versus Machine

In many ways, the reaction wasn't surprising. Over the past 10 years, Reddit has built its reputation as a purely user-driven community, a place where people, not algorithms, determine what rises to the top of the feed. The quality that has long defined the community is its rigid commitment to an ideal of free speech and, in step with that, its hands-off approach to moderating content. Occasional trolls and morally questionable content, Reddit's staunchest advocates would say, are the unfortunate but unavoidable byproduct of upholding those values. And if those values are not to be compromised, the thinking goes, some offenses must be tolerated. By this view, Reddit's sudden switch to a more top-down form of moderation feels like a kind of intolerance.

More broadly, Reddit’s people-centric approach feels rare at a time when tech companies are increasingly acting as our stewards of content. Just last Monday, Apple showed off its new News app, joining the ranks of tech giants who now act as publishers and content curators. Snapchat recently launched a media hub called Discover, while Facebook surfaces news in users' feeds based on its algorithmic determinations of what users are expected to like the most. Google uses its own secret algorithms to determine which sources to surface first in Google News and its broader search results.

Among the big players, only Twitter can reasonably claim that it still embraces a mainly user-driven approach. The metrics that indicate a tweet's popularity—retweets and favorites—are derived from human interactions. Users might look to how many followers a fellow tweeter has, or the presence of a blue "Verified" badge indicating an influential person in a particular area, to decide whose links are worthy of their attention.

And yet on Twitter, as on Reddit, harassment and trolling are rampant. On platforms meant to be moderated by users, it's all too easy to hide behind a cloak of anonymity that enables conduct few people would own up to if they had to show their faces.

"There’s no accountability, and in an anything-goes environment, there are no social norms on how to slow down harassing speech," says Danielle Citron, a law professor at the University of Maryland and the author of Hate Crimes in Cyberspace.

Censorship Versus Standards

Both Reddit and Twitter have sought to balance the need to quell harassment with the desire to maintain user freedom by relying on users themselves to report abuse. But a recent report has shown it’s easy to game Twitter’s systems for policing harassment. And Reddit’s self-reporting systems often seem to fall short.

Citron, at least, thinks these platforms and others have not yet used what she calls their "enormous power" to shield vulnerable communities. Real censorship, she points out, would take the form of a government ban on content across all outlets, which is the opposite of what's happening in the wake of Reddit's new bans. As a form of protest, thousands of Redditors have migrated the offending subreddits to a competing service called Voat, which bills itself as a forum founded in response to perceived Reddit censorship.

But even as some Reddit users cry foul over banning content, others are turning away from the site because of the content itself. In Reddit’s own survey of thousands of its users, the company found that 50 percent of Redditors who wouldn’t recommend the site to their friends said it was because of concerns over exposing them to “unpleasant content and users” and “appearing to support or participate in such content by association.”

And so Reddit seems to be at an impasse. If it allows harassers and trolls to stay, it risks sending the rest of its community fleeing. If it bans some content, it's lambasted for violating its ideals. Its latest attempt to split the difference appears to be the kind of compromise that, in classic democratic fashion, ends up leaving no one happy. But that may be the price Reddit has to pay if it doesn't want to turn its ideals over to the machines.

Correction 3:45 PM EST 06/16/2015: An earlier version of this article incorrectly stated that in Reddit's survey of its users, 50 percent of Redditors wouldn't recommend the site to friends because of "unpleasant content and users." The article has been updated to correct that 50 percent of Redditors surveyed who wouldn’t recommend the site to their friends said it was because of those concerns.