Last week I found myself on the receiving end of a bizarre Twitter meltdown. I had just written a little essay about different ways Americans can frame the “fake news” problem, and one section included a brief reference to Eric Garland, a prolific tweeter and peddler of conspiracy theories. Despite the fact that my piece wasn’t even really about him, he unleashed a series of late-night tweets, lambasting me for journalistic malfeasance and suggesting that I was part of some vast Russian conspiracy.
Lemme just ask this again to “Fast Company:” Grand children of Soviet intelligence agents were intimidating my family throughout the year.
You employ their fans.
Care to explain? Or should I?
PS. Research already done.
— Eric Garland (@ericgarland) December 24, 2017
All in all, this was not a terrible way to spend the Saturday before Christmas: sifting through hundreds of Twitter mentions every minute, trying to understand who said what and what exactly was happening. But I realized midway through the debacle that Twitter’s tools for dealing with this were inadequate at best. I received a few vaguely threatening tweets (nothing too terrible) and some weird posts from Garland about my mother, which amounted to blatant harassment.
I muted dozens of threads started by Garland and his cadre of followers, only to find numerous others spring up like the heads of the Hydra. The next 24 hours were an endless game of whack-a-mole as I tried to quiet my notifications. Even days later, I’m still getting a few every hour.
The Online Mob Effect
If my own experience weren’t enough proof of how woefully short Twitter’s anti-harassment tools fall, two more recent incidents clinch it. First, Vanity Fair published a video on Twitter, part of a series aimed at various powerful figures, in which the magazine’s staff offered ideas for Hillary Clinton’s New Year’s resolutions. The cheeky video included some humorous ribbing, and one writer joked that perhaps Clinton should take up a new hobby, like knitting or improv comedy. That one fleeting moment in the video catalyzed a Twitter firestorm.
The writer who made the knitting comment, Maya Kosoff, has been the target of incessant online harassment for over 24 hours. (Disclosure: Kosoff is a friend and former colleague of mine; she has had a very rough week online.) The harassment began with high-profile Clinton fans calling Kosoff’s knitting suggestion sexist. Then it became an online pile-on with some of the most-followed Clinton sycophants, and their followers, bullying the writer, body-shaming her, and calling for her firing.
People are, of course, allowed to be annoyed by the video. And they can take to Twitter, or any other platform, to say they didn’t like it. But that’s not what’s happening here. Instead, a person who appeared in this silly video (surely produced to fill the holiday content void) has become a sacrificial lamb for a certain group of centrists. And despite the fact that many of these tweets violate Twitter’s rules for conduct, there is no adequate mechanism to stop the barrage.
The outrage machine has been fired up. It’s truly disturbing to see such a furious groupthink attack, waged by people who claim to be progressive, aimed at something so innocuous from a publication that is itself progressive. Meanwhile, Kosoff, unable to stop the relentless badgering, has set her Twitter profile to private, which means only people who follow her can see her tweets. She continues to be brutally trolled online.
Twitter has said it’s trying to bulk up its anti-harassment tools and even revamped its code of conduct. Recently, it began banning known white nationalist accounts, and it promised to make the process of reporting abusive tweets easier. Still, the only thing people in Kosoff’s position can do now is sit and wait until the fire dies down.
Fixes From The Public
That brings us to our second incident: it turns out people have been trying to combat Twitter’s harassment problem on their own. Yesterday, the New York Times published an article by journalist Yair Rosenberg describing a group of Twitter users who built a bot to unmask a kind of harasser known as an “impersonator troll.” These accounts pose as members of a targeted group (Jewish people, black people, women, and so on), wade into heated conversations, and then say offensive things to discredit those advocating for the group.
“In this manner, unsuspecting readers glancing through their feed are given the impression that someone who looks like, say, a religious Jew or Muslim is outlandishly bigoted,” writes Rosenberg. “Thus, an entire community is defamed.”
Rosenberg and a group of tech-savvy collaborators built a Twitter bot that successfully unmasked these trolling accounts. But Twitter ultimately decided to ban the bot. Why? Because the far-right and neo-Nazi accounts the bot was targeting reported it, and Twitter sided with them.
These two examples alone show how inadequate Twitter’s attempts to reckon with its harassment problem have been. The tools in place to protect those being attacked are only marginally helpful, and when someone built an outside-the-box solution to make the platform better, Twitter killed it. As the Vanity Fair incident indicates, the problem only seems to be getting worse, and I can only imagine that users are getting tired of it.
So let’s hope that Twitter, as a company, makes a New Year’s resolution to further ramp up its anti-harassment efforts. It can do that by listening both to those impacted and to those trying to build more robust anti-abuse tools. Either that, or maybe we should hope for a new and better platform to emerge and take its place.