

See, so now you are back to saying that your plan is to make a shitty thing and put the burden on those against it to come up with countermeasures. That’s just lame.
You were implying not just that you wanted to detect bots, but that you wanted to write your own set of bots that would pretend to be humans.
If your plan is only to write bot detection, that’s a whole different thing.
There is a big difference between a bot that provides functionality that is good for the community and a bot that only does things that interest you.
People are asking you not to do this. So, if you want to do it, do it with your own resources. I’m saying that as someone who set up almost 20 different instances (alien.top plus the topic-specific instances) just to have a place to run the mirroring bots.
You want to write software that subverts the expectations of users (who come here expecting to chat with other people) and abuses resources provided by others who did not ask for your help with any sort of LLM detection.
You don’t run tests in an actual production environment. It is unethical and irresponsible.
Feel free to run your experiments on your own servers, with people who are aware that they are being subjected to some type of experiment. Anything else and I will make sure to get as many admins as possible to ban you and your bots from the federation.
I completely understood your analogy, and I certainly understand the fun in tinkering with technology. What you might be missing is that OP seems to be planning to deploy a bunch of bots here and then test how well people can detect them, and that affects other people.
The best way to counter AI is to implement counter-AI measures.
You are jumping to this conclusion with no real indication that it’s actually true. The best we get from any type of arms race is a forced stalemate due to Mutually Assured Destruction. With AI versus “counter” AI, you are offering a medicine that is worse than the disease.
Feel free to go ahead, though. The more polluted you make this environment, the more people will realize that it is not sustainable unless we start charging everyone and/or adopt a very strict Web of Trust.
People do things for fun sometimes.
This is not the same as playing basketball. Unleashing AI bots “just for the fun of it” ends up effectively poisoning the well.
What I am failing to understand is: why?
Is this just for some petty motivation, like “proving” that people can not easily detect the difference between text from an LLM and text from an actual person? If that is the case, can’t you spare yourself all this work and look at the extensive studies that measure exactly this?
Or perhaps it is something more practical, and you’ve already built something that you think is useful and it would require lots of LLM bots to work?
Or is it that you fancy yourself too smart for the rest of us, and you will feel superior by having something that exposes us as fools for thinking we can discern LLMs from “organic” content?
Ok, final message because I’m tired of this:
You are a complete idiot.