More Content Moderation Is Not Always Better 

Content moderation is eating the world. Platforms’ rule-sets are exploding, their services are peppered with labels, and tens of thousands of users are given the boot in regular fell swoops. No platform is immune from demands that it step in and impose guardrails on user-generated content. This trend is not new, but the unique circumstances of a global public health emergency and the pressure around the US 2020 election put it into overdrive. Now, as parts of the world start to emerge from the pandemic, and the internet’s troll in chief is relegated to a little-visited blog, the question is whether the past year has been the start of the tumble down the dreaded slippery content moderation slope or a state of exception that will come to an end.

There will surely never be a return to the old days, when platforms such as Facebook and Twitter tried to wash their hands of the bulk of what happened on their sites, trusting that internet users, as a global community, would magically govern themselves. But a slow and steady march toward a future where ever more problems are addressed by trying to erase content from the face of the internet is also a simplistic and ineffective approach to complicated issues. The internet is sitting at a crossroads, and it’s worth being thoughtful about the path we choose for it. More content moderation isn’t always better moderation, and there are trade-offs at every step. Maybe those trade-offs are worth it, but ignoring them doesn’t mean they don’t exist.

A look at how we got here shows why the solutions to social media’s problems aren’t as obvious as they might seem. Misinformation about the pandemic was supposed to be the easy case. In response to the global emergency, the platforms were finally moving fast and cracking down on Covid-19 misinformation in a way that they never had before. As a result, there was about a week in March 2020 when social media platforms, battered by almost unrelenting criticism for the last four years, were good again. “Who knew the techlash was susceptible to a virus?” Steven Levy asked.

Such was the enthusiasm for these actions that there were immediately calls for platforms to do the same thing all the time for all misinformation—not just medical. Initially, platforms insisted that Covid misinformation was different. The likelihood of harm arising from it was higher, they argued. Plus, there were clear authorities they could point to, like the World Health Organization, that could tell them what was right and wrong.


But the line did not hold for long. Platforms have only continued to impose more and more guardrails on what people can say on their services. They stuck labels all over the place during the US 2020 election. They stepped in with unusual swiftness to downrank or block a story from a major media outlet, the New York Post, about Hunter Biden. They deplatformed Holocaust deniers, QAnon believers, and, eventually, the sitting President of the United States himself.

For many, all of this still has not been enough. Calls for platforms to do better and take more content down remain strong and steady. Lawmakers around the world certainly have not decreased their pressure campaigns. There’s hardly a country in the world right now that’s not making moves to regulate social media in one form or another. Just last week, the European Union beefed up its Code of Practice on Disinformation, saying that a stronger code is necessary because “threats posed by disinformation online are fast evolving” and that the ongoing infodemic is “putting people’s life in danger.” US Senators still write to platforms asking them to take down specific profiles. Platforms continue to roll out new rules aimed at limiting the spread of misinformation.

As companies develop ever more types of technology to find and remove content in different ways, an expectation grows that they should use it. Can moderate implies ought to moderate. After all, once a tool has been put into use, it’s hard to put it back in the box. But content moderation is now snowballing, and the collateral damage in its path is too often ignored.
