Twitter’s moderation system is in tatters

“Me and other people who tried to get in touch hit dead ends,” Benavidez says. “And when we’ve reached out to those who are presumably still on Twitter, we just don’t get a response.”

Even when researchers do get through to Twitter, responses are slow, sometimes taking more than a day. Jesse Littlewood, vice president of campaigns at the nonprofit Common Cause, says he’s noticed that when his organization reports tweets that clearly violate Twitter’s policies, those posts are now less likely to be taken down.

The volume of content that users and watchdogs may want to report to Twitter is likely to grow. Many of the staff and contractors fired in recent weeks worked on teams such as trust and safety, policy, and civic integrity, all of which helped keep disinformation and hate speech off the platform.

Melissa Ingle was a senior data scientist on Twitter’s civic integrity team until she was fired, along with 4,400 other contractors, on November 12. She wrote and monitored algorithms used to detect and remove political misinformation on Twitter, most recently for the elections in the US and Brazil. Of the 30 people on her team, only 10 remain, and many of the human content moderators, who review tweets and flag those that violate Twitter’s policies, have also been fired. “Machine learning requires constant input, constant nurturing,” she says. “We have to constantly update what we’re looking for because political discourse changes all the time.”

While Ingle’s job didn’t involve interacting with outside activists or researchers, she says members of Twitter’s policy team did. Sometimes information from those outside groups helped shape the terms and content that Ingle and her team trained the algorithms to identify. She now fears that, with so many staff and contractors laid off, there won’t be enough people left to keep the software accurate.

“With the algorithms no longer being updated and the human moderators gone, there aren’t enough people to run the ship,” Ingle says. “My concern is that the filters will get more and more porous, and more and more stuff will get through, as the algorithms become less accurate over time. And there’s no human being left to catch what slips through the cracks.”

Within a day of Musk taking ownership of Twitter, Ingle says, internal data showed that the number of abusive tweets reported by users had increased by 50 percent. That initial spike has eased slightly, she says, but reports of offensive content remain about 40 percent higher than the typical volume before the acquisition.

Rebekah Tromble, director of the Institute for Data, Democracy & Politics at George Washington University, also expects to see Twitter’s defenses against banned content wither. “Twitter has always struggled with this, but a number of talented teams have made real progress on these issues in recent months. Those teams have now been wiped out.”

Those concerns are echoed by a former content moderator who was a Twitter contractor until 2020. The contractor, speaking anonymously to avoid repercussions from his current employer, says that all of the former colleagues he was in contact with who did similar work have been fired. He expects the platform to become a much less pleasant place to be. “It’s going to be awful,” he says. “I actively searched for the worst parts of Twitter: the most racist, most awful, most degenerate parts of the platform. That’s what will be amplified.”
