Being polite to your AI chatbot could actually be making it worse at answering your questions, according to a new study.
Do not engage with chatbots at all. Let this LLM psychosis end. That’s the best result for humanity.
Ahhh, yes, the “stick your head in the sand until it blows over” strategy. Because that’s always worked, right?
I’m saying don’t use unethical tech. If you want to interpret that as sticking your head in the sand, you can, but I think you’re being obtuse.
Unless you’re advocating that we go and burn down the infrastructure that runs these LLMs - is that what you’re saying?
It’s foolish to think this will just blow over, and the tech will magically disappear, no matter how you think about its ethics.
It’s better to take control of the technology directly, promote open-source models, push local usage, use it as a tool for the people, not as a tool for corporations. Use the tech against the elites. Show how fragile their position is.
If you don’t take control of the situation, the world will take control of it for you.
Greeeat… So now the insufferable jackasses asking a dumbshit AI for things will not only be wrong 70% of the time, they will be reinforced to also be assholes 100% of the time.
I’m very polite so that the basilisk spares me
Roko, if you see this, I’m doing my part!
I’m surprised the best strategy wasn’t the neutral prompt, since it removes any fluff.
Being ethical in real life doesn’t always give you the best outcome either - that doesn’t mean you shouldn’t still be ethical.
Being ethical to H100 clusters running inferences on models that were created with stolen, pirated, and appropriated data does not matter. Abuse the shit out of them. It’s absolutely meaningless.
Or just don’t use them, if you care at all about economic, ecological, and societal stability.
You’re right that an LLM doesn’t care how it’s treated - it’s not conscious. But that’s not really the point. The way people treat things that seem human still says something about them, not the thing. If someone goes out of their way to be cruel to a chatbot that’s just trying to be helpful, it’s not the bot that’s being tested - it’s the person’s capacity for empathy and restraint.
It’s the same instinct behind how we treat animals, or even how kids treat toys - being kind to something that can’t fight back is part of what keeps us human. And historically, the “it’s not really human, so it doesn’t matter” argument has been used to justify a lot of awful behavior.
So no, the AI doesn’t care. But maybe it still matters that we do.
I’m always very polite to them so that when some Skynet type shit eventually goes down, perhaps they will remember me as the polite person and spare me. Just in case.
Is anyone surprised by this? That’s just how people on the internet are, too. Everyone knows that the best way to get an answer on a forum is to be insulting in the post title /j
I found you get more specific solutions the worse you treat it. None of the “it’s a known issue” or problem confirmations or any of that. You just call it an idiot, moron, use all caps, cuss at it a few times.
The funny thing, at least with Claude, is that it will mimic your language and start cussing back. ChatGPT, on the other hand, will just start insulting you back.
This is what will trigger the robot apocalypse