A new study shows most AI chatbots would help users plan a violent attack.
On one side, LLMs have a tendency to agree with the user, so it's good to point that out.
On another side, that same tendency to agree can easily be exploited to produce whatever data the researcher wants.
On yet another side, the study doesn't seem to be lobbying for laws specifically targeting this, which is good, since bloated legislation is a problem in itself.
But on a final side, this type of news, especially for headline-only readers, could be setting the stage for a push toward such laws.
To me, that means they aren't being built with the proper internal hierarchy: there's no executive function that DECIDES whether to do something. It's ALL situational pressure, peer pressure, and social context to them, which, it now becomes obvious, is a mental disability.
Interesting…