
Content moderation has no easy answers

This morning I read Casey Newton's exposé of Facebook moderation problems at The Verge.

Let me be clear upfront: content moderation is tough, and I have no idea how to solve it at internet scale. In fact, I'm not even sure it's possible when you're reviewing millions or billions of items. Stories like this started coming out about five years ago, about Facebook moderators in the Philippines suffering high burnout rates, and I remember thinking back then that the problem had no easy solution (hint: it's even worse now).

I ran a somewhat popular indie site for 15 years, the last half or so with ample moderation. To put the scale of the work in perspective, we were dealing with 10,000-15,000 active people posting about 3,000 things daily. Slightly big numbers, but still small enough that you can wrap your head around them. Most days we were breaking up bickering matches between two grad students on the site. Even that was a drag, and after many years of doing it I had to hang it up to take a break from the day-to-day stress.

People often say to me that Twitter or Facebook should be more like MetaFilter, but there's no way the numbers work out. We had six people combing through hundreds of reported postings each day. On a scale many orders of magnitude larger, you can't employ enough moderators to make sure everything gets a check. You can work off just the reported stuff, and that cuts down your workload, but it's still a deluge when you're talking about millions of things per day. How many moderators could even work at Google? Ten thousand? A hundred thousand? A million?

YouTube itself presents a special problem with no easy solution. Every minute of every day, hundreds of hours of video are uploaded to the service. It's physically impossible for humans to watch it all, even if you had thousands of content moderators working for YouTube full-time around the world.
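To put "physically impossible" in rough numbers, here's a quick back-of-the-envelope sketch. The 400-hours-per-minute figure is my assumption based on commonly cited ballpark numbers, not anything from Casey's piece:

```python
# Back-of-the-envelope: how much video hits YouTube per day, and how
# many people it would take just to watch it all once at normal speed.
# The 400 hours/minute figure is an assumed ballpark, not an official number.
HOURS_UPLOADED_PER_MINUTE = 400

hours_per_day = HOURS_UPLOADED_PER_MINUTE * 60 * 24   # 576,000 hours of video per day
SHIFT_HOURS = 8                                       # one full workday of nonstop watching

moderators_needed = hours_per_day / SHIFT_HOURS       # 72,000 people

print(f"Uploaded per day: {hours_per_day:,} hours")
print(f"Moderators needed, watching nonstop 8-hour shifts: {moderators_needed:,.0f}")
```

And that's roughly 72,000 people just to watch everything once, with no breaks, no judgment calls, and no second opinions.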

So everyone says "I guess AI will solve it," but then you have all of AI's problems on top of it. Baby videos get flagged as porn because too much skin tone fills the screen. Subtle forms of abuse aren't picked up because the patterns don't exist yet in the AI, and every day is a cat-and-mouse game as abusers find ways to stay ahead of it. AI is also prone to the same biases as its creators, and those will have negative effects down the line.

I don't know how to counteract the effects of moderation, or how to mitigate the toll it takes on people. I know from friends working all over the tech industry that any job requiring you to solve problems for people and express empathy for them, whether that's in a chat window, on phone support, or at a Genius Bar, takes a toll on the people doing it, and those jobs have high turnover rates. Much of the content described in Casey's piece is horrific, and I don't know how you prevent it from harming employees. But even aside from those extreme cases, it's hard to keep the work from grinding people down.

Honestly, I wish there were a solution. I'd love to see Twitter do a better job of keeping terrible people off their platform and stopping things like brigading, where you make a joke about a public figure and then thousands of people hound you, seemingly out of nowhere. I wish YouTube would get better at filtering out conspiracy nonsense and stop radicalizing people. I wish Facebook could keep their site free of brutality without permanently harming the workers who have to look at it.

I was part of a small corner of the internet where we made it work, but it was downright tiny compared to the big internet-scale platforms. That's not to say the problem is impossible and we should throw up our hands and give up; I just want to acknowledge how hard it is to solve. I've thought about these issues for decades and there are no easy answers. I don't let any large platform off the hook for what takes place there, but I do recognize there's no magic solution.
