I feel these kinds of protections against suicidal language, etc., will just lead to even further self-censorship to avoid triggering safeguards, similar to how terms like "unalive" gained traction.

AI should be regulated, but forcing models into even more corpospeak and 'safety' than they already have, all in the name of protecting vulnerable children, is not the way. I don't like tech moving in that direction in general, and it's part of why I'm on Lemmy in the first place.

In the character.ai case the article mentioned, the AI already failed to 'pick up on' (yes, I know that's anthropomorphizing an algorithm) a euphemism for suicide. Filters would need to be ridiculously broad to catch every possible suicidal euphemism, and they would produce a tonne of false positives.