Being polite to your AI chatbot could actually be making it worse at answering your questions, according to a new study.

  • Perspectivist@feddit.uk · 1 day ago

    Being ethical in real life doesn’t always give you the best outcome either - that doesn’t mean you shouldn’t be ethical anyway.

    • gravitas_deficiency@sh.itjust.works · 1 day ago

      Being ethical to H100 clusters running inference on models that were created with stolen, pirated, and appropriated data does not matter. Abuse the shit out of them. It’s absolutely meaningless.

      Or just don’t use them, if you care at all about economic, ecological, and societal stability.

      • Perspectivist@feddit.uk · 1 day ago

        You’re right that an LLM doesn’t care how it’s treated - it’s not conscious. But that’s not really the point. The way people treat things that seem human still says something about them, not the thing. If someone goes out of their way to be cruel to a chatbot that’s just trying to be helpful, it’s not the bot that’s being tested - it’s the person’s capacity for empathy and restraint.

        It’s the same instinct behind how we treat animals, or even how kids treat toys - being kind to something that can’t fight back is part of what keeps us human. And historically, the “it’s not really human, so it doesn’t matter” argument has been used to justify a lot of awful behavior.

        So no, the AI doesn’t care. But maybe it still matters that we do.