I’m preparing a presentation on how to implement automated content moderation on social media. I wanted to talk a bit about how this is done by small forums, and Fediverse instances came up as an obvious focus of study for me. Is it all done by hand by human moderators, or are there tools that can filter out the obvious violations of an instance’s rules? I’m thinking mostly about images: are those filtered for nudity/violence?
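
For concreteness, here’s the kind of automated check I have in mind. This is only a sketch: `classify_image()` is a hypothetical stand-in for whatever real model or service an instance might plug in, and the thresholds are made up.

```python
from dataclasses import dataclass


@dataclass
class ImageScores:
    """Scores a hypothetical image classifier might return (0.0 to 1.0)."""
    nudity: float
    violence: float


def classify_image(path: str) -> ImageScores:
    """Stand-in for a real classifier or moderation API call."""
    raise NotImplementedError("plug in an actual model/service here")


# Thresholds would come from the instance's own rules/config.
NUDITY_THRESHOLD = 0.8
VIOLENCE_THRESHOLD = 0.8


def should_hold_for_review(path: str) -> bool:
    """Return True if an uploaded image should be held for a human moderator."""
    scores = classify_image(path)
    return scores.nudity >= NUDITY_THRESHOLD or scores.violence >= VIOLENCE_THRESHOLD
```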

  • 9point6@lemmy.world · 4 days ago

    I believe there’s an automated filter used by some of the biggest instances to detect and nuke child abuse stuff before a human has to see it

    Everything else is human moderation though AFAIK
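
    For anyone curious, that kind of filter generally works by hash matching rather than by “looking” at the image: each upload is fingerprinted and compared against a database of hashes of known abuse material (PhotoDNA is the well-known example), so nothing has to be viewed by a person. A rough sketch of the idea, using the open `imagehash` library purely for illustration (real deployments use vetted hash lists from bodies like NCMEC/IWF and different hash functions):

    ```python
    import imagehash
    from PIL import Image

    # Illustrative only: a set of perceptual hashes of known-bad images.
    # Real hash lists are distributed by trusted organisations, not
    # hard-coded example strings like this one.
    KNOWN_BAD_HASHES = {imagehash.hex_to_hash("8f373714acfcf4d0")}

    MAX_DISTANCE = 5  # Hamming-distance tolerance for near-duplicates


    def is_known_bad(path: str) -> bool:
        """Hash an upload and check it against the blocklist."""
        upload_hash = imagehash.phash(Image.open(path))
        return any(upload_hash - bad <= MAX_DISTANCE for bad in KNOWN_BAD_HASHES)
    ```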