• 6 Posts
  • 10 Comments
Joined 2 years ago
Cake day: June 9th, 2023

  • Ok, final message because I’m tired of this:

    • you are openly admitting that you are going to piss in the well by adding a bot that pretends to be a human.
    • you are openly admitting that you are going to do this without providing any form of mitigation.
    • you are going to do this while pushing data to the whole network, with no prior testing on a test instance, not even on your own instance.
    • you think it is fine to leave the onus of “detecting” the bot on everyone else.

    You are a complete idiot.

  • “To implement counter-AI measures; the best way to counter AI is to implement it.”

    You are jumping to this conclusion with no real indication that it’s actually true. The best we get from any kind of arms race is a forced stalemate through Mutually Assured Destruction. With AI vs. “counter” AI, you are proposing a cure that is worse than the disease.

    Feel free to go ahead, though. The more polluted you make this environment, the more people will realize that it is not sustainable unless we start charging everyone and/or adopt a very strict Web of Trust.

  • What I am failing to understand is: why?

    Is this just some petty motivation, like “proving” that people cannot easily tell the difference between text from an LLM and text from an actual person? If that is the case, can’t you spare yourself all this work and look at the extensive studies that measure exactly this?

    Or perhaps it is something more practical: you’ve already built something that you think is useful, and it requires lots of LLM bots to work?

    Or is it that you fancy yourself too smart for the rest of us, and you will feel superior by having something that shows us up as fools for thinking we can discern LLM output from “organic” content?