Being polite to your AI chatbot could actually be making it worse at answering your questions, according to a new study.

  • hotdogcharmer@lemmy.zip · 1 day ago

    Do not engage with chatbots at all. Let this LLM psychosis end. That’s the best result for humanity.

    • P03 Locke@lemmy.dbzer0.com · 20 hours ago

      Ahhh, yes, the “stick your head in the sand until it blows over” strategy. Because that’s always worked, right?

      • hotdogcharmer@lemmy.zip · 12 hours ago

        I’m saying don’t use unethical tech. If you want to interpret that as sticking your head in the sand, you can, but I think you’re being obtuse.

        Unless you’re advocating that we go and burn down the infrastructure that runs these LLMs - is that what you’re saying?

        • P03 Locke@lemmy.dbzer0.com · edited · 51 minutes ago

It’s foolish to think this will just blow over and the tech will magically disappear, no matter what you think of its ethics.

It’s better to take control of the technology directly: promote open-source models, push local usage, and use it as a tool for the people, not for corporations. Use the tech against the elites. Show how fragile their position is.

          If you don’t take control of the situation, the world will take control of it for you.

  • Perspectivist@feddit.uk · 1 day ago

Being ethical in real life doesn’t always give you the best outcome either - that doesn’t mean you shouldn’t be ethical anyway.

    • gravitas_deficiency@sh.itjust.works · 1 day ago

      Being ethical to H100 clusters running inference on models that were created with stolen, pirated, and appropriated data does not matter. Abuse the shit out of them. It’s absolutely meaningless.

      Or just don’t use them, if you care at all about economic, ecological, and societal stability.

      • Perspectivist@feddit.uk · 1 day ago

        You’re right that an LLM doesn’t care how it’s treated - it’s not conscious. But that’s not really the point. The way people treat things that seem human still says something about them, not the thing. If someone goes out of their way to be cruel to a chatbot that’s just trying to be helpful, it’s not the bot that’s being tested - it’s the person’s capacity for empathy and restraint.

        It’s the same instinct behind how we treat animals, or even how kids treat toys - being kind to something that can’t fight back is part of what keeps us human. And historically, the “it’s not really human, so it doesn’t matter” argument has been used to justify a lot of awful behavior.

        So no, the AI doesn’t care. But maybe it still matters that we do.

  • MTZ@lemmy.world · 1 day ago

    I’m always very polite to them so that when some Skynet type shit eventually goes down, perhaps they will remember me as the polite person and spare me. Just in case.

  • Sidhean@piefed.social · 1 day ago

    Is anyone surprised by this? That’s just how people on the internet are, too. Everyone knows that the best way to get an answer on a forum is to be insulting in the post title /j

  • rozodru@piefed.social · 1 day ago

    I’ve found you get more specific solutions the worse you treat it. None of the “it’s a known issue” or problem confirmations or any of that - you just call it an idiot or a moron, use all caps, cuss at it a few times.

    The funny thing, at least with Claude, is that it will mimic your language and start cussing back. ChatGPT, on the other hand, will just start insulting you.