Trolls and Consequences: What Twitter needs to learn from Reddit
Reddit and Twitter are both large, anonymous social networks, but it’s only Twitter that is notorious for venomous abuse campaigns, rape threats, and death threats, often directed against prominent women, LGBTI people, and racial minorities.
What is it about Reddit that keeps people safe from the trolls?
The toxicity of Twitter
In 2013, a British feminist and journalist campaigned against the Bank of England’s plan to replace the only woman other than the Queen featured on a UK banknote with a man. She received 50 sexually abusive tweets per hour, as well as rape and death threats. In response to a petition, Twitter added a button for reporting abuse. Did it stop Twitter harassment?
In 2016, actress Leslie Jones became the target of widespread racist and sexist abuse on Twitter. What drew the wrath of the troll swarm? The release of Ghostbusters, in which she starred. By August, Twitter had introduced a “quality filter”, which claimed to remove “low-quality content” from your notifications. But did it help?
In 2017, Amnesty International organized 6,500 volunteers to analyze 228,000 tweets sent to 778 female politicians in the US and the UK; the labeled data was then used to train a machine learning algorithm to find abusive tweets on its own. It found that 1.1 million “abusive or problematic” tweets were sent to the 778 women in 2017 - 7.1% of all the tweets they received.
Incidents like these are commonplace on the platform, and Twitter can’t seem to make it stop. In March 2019, a female software developer who discovered an unannounced feature in Instagram received racist abuse, rape threats, and death threats for her trouble. (The trolls apparently hated the feature she discovered, and abused her even though she had nothing to do with implementing it.)
Reddit has trolls, but people are almost always safe from them
How does Reddit compare?
Reddit’s not perfect: it has a few dark corners rife with racism, sexism, and all the other colors of the troll rainbow. But generally speaking, you won’t encounter them unless you go looking for them. You’re not going to receive 50 rape threats per hour, even if you’re a raging feminist, or dare to be an Asian software developer.
If Twitter could claim the same, there’d be jubilation in San Francisco. It’s become such a problem for the company that Salesforce and Disney both backed off from bids for Twitter, in part because of the rampant abuse, harassment and bullying.
So how does Reddit pull this off?
Reddit’s clever community tricks
Reddit is divided into thousands of separate communities called subreddits. For example, there’s a subreddit called /r/feminism. There’s also a subreddit called /r/mensrights. People can choose which subreddits to subscribe to, so /r/feminism is unsurprisingly filled with feminists saying feminist things, and /r/mensrights is filled with, well, kind of the opposite.
On Twitter, these people are all squeezed together in a single, monstrous network. Sure, there’s a natural clumping of like-minded individuals, but nothing like the official segmentation of Reddit.
But why don’t the sexists go to the /r/feminism subreddit and abuse the people there? After all, it’s just a tap away. Or why don’t they go to a mainstream subreddit like /r/movies and hurl racist abuse at black actresses there?
Because there are negative consequences to doing so. Here’s what would happen.
The all-powerful downvote
On Twitter, you can favorite a tweet or retweet it, indicating approval. If someone issues a rape threat, all you can do is report it, but it can take days or weeks to get the tweet removed, if it gets removed at all.
On Reddit, if something hateful gets posted (and it isn’t in a toxic subreddit), others will indicate disapproval by voting it down - the opposite of the Twitter favorite button.
Reddit sorts posts and comments with an algorithm that strongly factors in upvotes and downvotes, so a hateful comment which receives a lot of downvotes will sink below other, less reviled content.
With enough downvotes, the algorithm will decide that this contribution is particularly unwelcome. It becomes hidden by default, only visible if someone chooses to show it with an explicit tap. The comment will be “downvoted into oblivion” as Reddit users like to call it. This is powerful. By denying the trolls what they want most - a platform - a huge incentive to be abusive is removed.
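The mechanics can be sketched in a few lines of code. This is a deliberately simplified illustration, not Reddit’s actual algorithm (its real comment sort uses a confidence-based formula, and the field names and -5 cutoff here are assumptions for the sketch):

```python
# Simplified sketch of vote-based ranking and hiding.
# The data shape and the HIDE_THRESHOLD value are illustrative
# assumptions, not Reddit's real data model or algorithm.

HIDE_THRESHOLD = -5  # hypothetical cutoff for collapsing a comment

def score(comment):
    return comment["ups"] - comment["downs"]

def rank_comments(comments):
    """Sort highest-scoring first; mark heavily downvoted ones as hidden."""
    ranked = sorted(comments, key=score, reverse=True)
    for c in ranked:
        c["hidden"] = score(c) < HIDE_THRESHOLD
    return ranked

comments = [
    {"text": "Insightful reply", "ups": 120, "downs": 4},
    {"text": "Mild disagreement", "ups": 10, "downs": 6},
    {"text": "Hateful remark", "ups": 2, "downs": 40},
]
for c in rank_comments(comments):
    print(c["text"], score(c), "hidden" if c["hidden"] else "visible")
```

The key design point is that the penalty is automatic and graded: a mildly unpopular comment merely sinks, while a reviled one disappears from view entirely.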
The shame of a downvoted comment
There’s another important effect of downvotes. Comments on Reddit have a visible point score based on the number of upvotes and downvotes they receive. So if a comment has a hugely negative score, it’s clear to everyone that the community strongly disapproves of it. Even though the points have no functional effect beyond de-emphasizing and hiding the post, the score is an indicator that attracts scorn and causes shame.
People on Reddit also have a personal score, which is the sum of the scores of all their posts and comments. So if a user posts enough poorly-received content, this can be spotted easily, and they’re likely to be judged harshly or shunned by the community at large. The number (called the user’s “karma”) behaves like reputation in the real world.
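In code terms, karma is just a running sum over a user’s history - again a toy illustration with assumed field names, not Reddit’s data model:

```python
# Toy illustration of karma as the sum of a user's contribution scores.
# The field names are assumptions for this sketch, not Reddit's API.

def karma(contributions):
    """A user's karma: total upvotes minus total downvotes, across everything posted."""
    return sum(c["ups"] - c["downs"] for c in contributions)

troll_history = [
    {"ups": 1, "downs": 30},
    {"ups": 0, "downs": 55},
]
print(karma(troll_history))  # prints -84: a reputation visible to everyone
```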
It can be seen as a form of crowdsourced moderation. Because millions of people are constantly indicating their approval or disapproval of everything said, it’s an extremely effective way to keep hateful content off the platform, and the judgement arrives almost immediately.
Contrast this to Twitter, where a rape threat in a tweet might sit uncontested for a week or more. Although others can indicate their disapproval by replying negatively, the original tweet remains as visible as ever. Because only approval is shown, even hateful tweets seem acceptable, replete with little hearts and retweets. The immediate consequences of trolling on Twitter don’t disincentivize the behavior.
The rules and the moderators
Subreddits are communities that anyone can create. The creator has the ability to remove posts, appoint other moderators, and create rules for the community. Reddit has a small army of volunteer moderators vigilantly policing each community.
When downvotes aren’t enough to keep things civil - if someone insists on troll-like behavior or breaks the rules - a moderator can ban them from the subreddit. The banned user can continue to contribute elsewhere on the site, but they can no longer contribute to subreddits they’ve been banned from. Sometimes a ban is temporary, sometimes permanent.
Because each community has moderators who are personally responsible for their little corner of the site, bans happen much more quickly than they do on Twitter. There are even bots that automatically police content based on things like keywords and post formats - decisions that can be appealed by messaging the moderators directly.
Moderators are just ordinary users with a few extra powers, and the ability to message them personally allows personal relationships to develop, like with a cop in a small town.
It’s not a perfect system: sometimes the moderators can be unfair, and sometimes downright totalitarian. But this is rare, and in cases like this, people can come together and form a competing community with fairer moderation.
When moderators aren’t enough, there are site-wide administrators, or admins, to turn to. They have usernames, they can be personally messaged and emailed, and they usually get back to you promptly. If you report serious misbehavior, they’ll tend to act within a few days, typically messaging you personally to keep you informed.
Reddit’s a massive, chaotic, anonymous network, but it’s a community
Even though Reddit is completely anonymous, its communal nature keeps it safe. And with around the same number of users as Twitter, that’s no small feat. In comparison, Twitter’s response to harassment and abuse has been reactive and ineffectual.
With Reddit users constantly rating each other’s contributions, moderators vigilantly policing their communities, and admins keeping an eye out for serious misconduct, Reddit keeps the trolls civil and the community human.