cross-posted from: https://lemmings.world/post/21993947

Since I suggested that I'm willing to hook my computer up to an LLM and a Mastodon account, I've gotten vocal anti-AI sentiment. I'm wondering whether the fediverse has a plugin to find bots larping as people. As of now I haven't made the bot, and I won't disclose when I do make it.
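
For what it's worth, here's a rough sketch of the kind of hookup I have in mind, assuming the Mastodon.py library; `generate_text()` is just a placeholder for whatever model I end up wiring in:

```python
# Rough sketch only: post LLM-generated text to a Mastodon account.
# Assumes the Mastodon.py library; generate_text() is a placeholder
# for whatever model/backend actually gets used.
from mastodon import Mastodon

def generate_text(prompt: str) -> str:
    # Placeholder for the LLM call (local model, hosted API, etc.)
    raise NotImplementedError("wire up a model here")

mastodon = Mastodon(
    access_token="YOUR_ACCESS_TOKEN",         # app token from the instance settings
    api_base_url="https://mastodon.example",  # hypothetical instance URL
)

status = generate_text("Write a short post about the fediverse.")
mastodon.status_post(status[:500])  # stay under the default 500-character limit
```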

  • hisao@ani.social · ↑8 · 1 day ago

    What I would expect to happen is: their posts quickly start getting many downvotes and comments saying they sound like an AI bot. This, in turn, will make it easy for others to notice and block them individually. Other than that, I’ve never heard of automated solutions to detect LLM posting.

    • PixelPilgrim@lemmings.world (OP) · ↑2 ↓19 · 1 day ago

      Ahhhhh, I doubt average Lemmy users are smart enough to detect LLM content. I've already thought of a few ways to find LLM bots.

      • Docus@lemmy.world · ↑2 · 18 hours ago

        The further I get down this thread, the more you sound like a person I don't want to deal with. And looking at the downvotes, I'm not the only one.

        If you want people blocking you, perhaps followed by communities and instances blocking you as well, carry on.

        • PixelPilgrim@lemmings.world (OP) · ↑1 ↓2 · 18 hours ago

          That's fine if people don't want to deal with me; I'd never interacted with them before this thread (most likely).

      • hisao@ani.social · ↑6 ↓1 · 1 day ago

        Imo their style of writing is very noticeable. You can obscure that by prompting the LLM to deliberately change it, but I think it's still often recognizable, not only in specific wordings but in the higher-level structure of replies as well. At least, that's always been the case for me with ChatGPT. I don't have much experience with other models.
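
        (Even something crude catches a fair amount of the wording side of it. Toy sketch below; the phrase list is hand-picked for illustration, not a real detector, and it says nothing about reply structure.)

        ```python
        # Toy illustration of the "specific wordings" idea: flag posts that
        # reuse stock LLM-style phrases. Hand-picked list, not a real detector.
        TELLTALE_PHRASES = [
            "as an ai language model",
            "i hope this helps",
            "it's important to note",
            "in conclusion",
        ]

        def looks_like_llm(post: str) -> bool:
            text = post.lower()
            # Require at least two hits to cut down on false positives.
            return sum(phrase in text for phrase in TELLTALE_PHRASES) >= 2

        print(looks_like_llm("It's important to note that, in conclusion, ..."))  # True
        ```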

        • Docus@lemmy.world · ↑1 · 18 hours ago

          That’s not entirely true. University assignments are scanned for signs of LLM use, and even with several thousand words per assignment, a not insignificant proportion comes back with an ‘undecided’ verdict.

          • hisao@ani.social · ↑1 · 18 hours ago

            With human post-processing it's definitely more complicated. Bots usually post fully automated content, without human supervision or editing.