I’m preparing a presentation on how to implement automated content moderation on social media. I wanted to talk a bit about how this is done by small forums, and Fediverse instances came up as an obvious focus of study for me. Is it all done by hand by human moderators, or are there tools that can filter out obvious violations of an instance’s rules? I’m thinking mostly about images: are those filtered for nudity/violence?
There are some, but they aren’t used very often. I know some instances scan for CSAM and other illegal material using community tools. But automods? No, I don’t think I’ve ever seen one actively operating (and I don’t think they’re even maintained anymore, at least not the ones I know of).
And even if they were, the advanced ones are only useful to whoever hosts the instance.
So at the moment, everything is moderated manually by humans.
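For what it’s worth, those scanning tools generally work by matching uploads against lists of perceptual hashes of known illegal images, rather than classifying content from scratch. Here’s a minimal sketch of the idea in Rust, assuming the `image` and `image_hasher` crates; the file paths and the distance threshold are made up for illustration:

```rust
use image_hasher::{HashAlg, HasherConfig};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build a perceptual hasher; the gradient algorithm is one common choice.
    let hasher = HasherConfig::new().hash_alg(HashAlg::Gradient).to_hasher();

    // Hash the uploaded image ("upload.png" is a placeholder path).
    let upload = image::open("upload.png")?;
    let upload_hash = hasher.hash_image(&upload);

    // In a real tool this would be a shared database of known-bad hashes;
    // here it's a single made-up entry computed from a local file.
    let known_bad = hasher.hash_image(&image::open("known_bad.png")?);

    // A small Hamming distance means the upload is visually similar to
    // known material; the threshold (10) is arbitrary for this sketch.
    if upload_hash.dist(&known_bad) <= 10 {
        println!("flagged for manual review");
    }
    Ok(())
}
```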
That’s why I’m working on a bot with a plugin system. The plugin system will allow users of the bot to implement the moderation logic themselves in one of the supported languages (e.g. Python, JavaScript, Rust), running in a sandboxed environment (Wasm). It’s halfway done, but now I’m unsure whether I want to create a dedicated Fediverse platform for it. One of the biggest reasons to do so would be freedom from the limitations of Lemmy, among other factors.
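To give an idea of what the host side of such a sandbox could look like, here’s a minimal sketch using the `wasmtime` runtime. The ABI shown (an exported `alloc` plus a `moderate(ptr, len)` function returning a verdict code) is purely illustrative, not the bot’s actual interface:

```rust
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    // Compile and instantiate the user's plugin ("plugin.wasm" is a placeholder;
    // empty imports assume the plugin was built without WASI dependencies).
    let engine = Engine::default();
    let module = Module::from_file(&engine, "plugin.wasm")?;
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;

    // Hypothetical plugin ABI: the plugin exports its linear memory,
    // an `alloc(len) -> ptr` function, and `moderate(ptr, len) -> i32`.
    let memory = instance
        .get_memory(&mut store, "memory")
        .ok_or_else(|| anyhow::anyhow!("plugin exports no memory"))?;
    let alloc = instance.get_typed_func::<i32, i32>(&mut store, "alloc")?;
    let moderate = instance.get_typed_func::<(i32, i32), i32>(&mut store, "moderate")?;

    // Copy the post text into the sandbox and let the plugin judge it.
    let post = "some post body to check";
    let ptr = alloc.call(&mut store, post.len() as i32)?;
    memory.write(&mut store, ptr as usize, post.as_bytes())?;

    // Verdict codes are also made up here: 0 = allow, 1 = flag, 2 = remove.
    let verdict = moderate.call(&mut store, (ptr, post.len() as i32))?;
    println!("plugin verdict: {verdict}");
    Ok(())
}
```

The appeal of this design is that a plugin can be compiled from any language with a Wasm target, while the host only ever sees the sandboxed exports.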
The problems with relying on Lemmy right now are:

- The `lemmy-client` crate from the Lemmy devs. Should this ever be discontinued, I would either have to maintain it myself, use another crate, or create one myself. The last two would be painful and require a lot of work.

And so I’ve been researching how I could build a lightweight Fediverse platform specifically for the bot. That would eliminate all the problems mentioned above. Since the platform would be in my hands, I could also implement ways to federate with other platforms and even make use of their unique features. But that’s not easy, so it will take some time. I’m also not great at web dev, so the frontend will be a problem as well.
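For the federation part, at least, the path is fairly well defined: Fediverse platforms talk to each other over ActivityPub, so the minimal core is an HTTP service that serves actor documents and handles signed inbox deliveries. Here’s a toy sketch of just the actor endpoint, using `axum` purely as an illustration (`bot.example` and all the names are placeholders):

```rust
use axum::{http::header, response::IntoResponse, routing::get, Router};
use serde_json::json;

// Serve the bot's ActivityPub actor document. Real federation also needs
// WebFinger discovery, an inbox with HTTP-signature verification, and
// outgoing delivery logic; none of that is shown here.
async fn actor() -> impl IntoResponse {
    let doc = json!({
        "@context": "https://www.w3.org/ns/activitystreams",
        "id": "https://bot.example/actor",
        "type": "Service", // "Service" is the conventional actor type for bots
        "preferredUsername": "modbot",
        "inbox": "https://bot.example/inbox",
        "outbox": "https://bot.example/outbox",
    });
    // ActivityPub clients expect this media type, not plain application/json.
    ([(header::CONTENT_TYPE, "application/activity+json")], doc.to_string())
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/actor", get(actor));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```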