

It took a lot of scrolling to find an intelligent comment on the article about how outputting words isn’t necessarily intelligence.
Appreciate you doing the good work I’m too exhausted with Lemmy to do.
(And for those who want more research in line with what the user above is talking about, I strongly encourage checking out the Othello-GPT line of research and its replications, starting with this write-up from the original study authors here.)
AI also inherits this tendency toward overconfidence from the human-written text it was trained on.
So you get overconfident human + overconfident AI, a feedback loop that ends up even more confident in BS than a human alone.
AI can routinely be confidently incorrect. People who don’t realize this, and who don’t question outputs that align with their confirmation biases, are especially likely to end up misled.