• PieMePlenty@lemmy.world

    I don’t get why they’d be called hallucinations, though. What LMs do is predict the next word(s). If they haven’t trained on enough data, the prediction confidence will be low. Their whole output is a hallucination based on speculation. If they genuinely don’t know what words come next, they’ll start spewing nonsense. Though I guess that would only happen if they were forced to generate text indefinitely… at some point they’d stop making (human) sense.
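
    To illustrate what I mean by next-word prediction (a toy sketch, not any real model’s code): a sampler always emits *some* token, whether the distribution is sharply peaked or basically a coin flip.

    ```python
    import random

    # Toy next-token sampler: it always returns a token, even when the
    # model's probability distribution is nearly flat (low confidence).
    def sample_next_token(probs: dict[str, float]) -> str:
        tokens = list(probs.keys())
        weights = list(probs.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Confident prediction: one continuation clearly dominates.
    confident = {"mat": 0.92, "roof": 0.05, "moon": 0.03}

    # Low-confidence prediction: nearly uniform, so the output is
    # effectively a guess -- but a token still comes out either way.
    uncertain = {"mat": 0.34, "roof": 0.33, "moon": 0.33}

    print("The cat sat on the", sample_next_token(confident))
    print("The cat sat on the", sample_next_token(uncertain))
    ```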

    LMs aren’t smart, they don’t think, and they’re not really AI. There are no errors and no hallucinations; this is by design.