• Ultraviolet@lemmy.world
    18 days ago

    “Hallucination” is an anthropomorphized term for what’s happening. The actual cause is much simpler: the model makes no semantic distinction between true and false statements. Both are equally plausible as far as a language model is concerned, as long as the output is structured like an answer to the question being asked.
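
    A minimal sketch of that point, assuming the Hugging Face transformers library and the small GPT-2 checkpoint (illustrative choices, not anything from this thread): a causal language model only scores how likely a token sequence is, and nothing in that score encodes whether the statement is true.

    ```python
    # Toy illustration: a causal language model scores fluency, not truth.
    # Assumes `pip install torch transformers`; GPT-2 is used only because it is small.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def avg_logprob(text: str) -> float:
        """Average per-token log-probability the model assigns to `text`."""
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # For causal LMs, passing labels=input_ids returns the mean
            # next-token cross-entropy, i.e. the negative average log-probability.
            loss = model(**enc, labels=enc["input_ids"]).loss
        return -loss.item()

    true_stmt = "The capital of France is Paris."
    false_stmt = "The capital of France is Lyon."

    # Both sentences are fluent answers to the same question. Their scores
    # differ only through the statistics of the training text; there is no
    # separate true/false signal anywhere in the model.
    print(true_stmt, avg_logprob(true_stmt))
    print(false_stmt, avg_logprob(false_stmt))
    ```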

    • htrayl@lemmy.world
      18 days ago

      That’s also pretty true for people, unfortunately. People are deeply incapable of differentiating fact from fiction.

      • kaffiene@lemmy.world
        18 days ago

        No, that’s not it at all. People know that they don’t know some things; LLMs do not.

        • sugar_in_your_tea@sh.itjust.works
          17 days ago

          Exactly. The LLM isn’t “thinking”; it’s just matching inputs to outputs with some randomness thrown in (see the toy sampling sketch below). If your training data is high quality, the answers will usually be appropriate given the inputs. If your data is poor, it will output surprising things more often.

          It’s really cool technology given how much we get for how little effort we put in, but it’s not “thinking” in any sense of the word. If you want it to “think,” you’ll need to put in a lot more effort.
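
          A toy sketch of that “matching inputs to outputs with some randomness” step, using made-up numbers and plain softmax-with-temperature sampling rather than any particular model’s internals:

          ```python
          # Toy next-token step: turn scores into probabilities, then sample.
          # The vocabulary and logits here are invented for illustration.
          import math
          import random

          vocab = ["Paris", "Lyon", "banana"]
          logits = [4.0, 2.5, -3.0]   # higher = more plausible continuation
          temperature = 0.8           # <1 sharpens, >1 flattens the distribution

          # Softmax with temperature: p_i is proportional to exp(logit_i / T).
          scaled = [l / temperature for l in logits]
          m = max(scaled)
          exps = [math.exp(s - m) for s in scaled]
          total = sum(exps)
          probs = [e / total for e in exps]

          # The "randomness thrown in": the next token is drawn from probs, so a
          # fluent-but-wrong token can come out whenever its probability is nonzero.
          next_token = random.choices(vocab, weights=probs, k=1)[0]
          print(list(zip(vocab, [round(p, 3) for p in probs])), "->", next_token)
          ```

          Better training data shifts probability mass toward appropriate continuations, which is the sense in which output quality tracks data quality.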

          • ricdeh@lemmy.world
            17 days ago

            Your brain is also “just” matching inputs to outputs, using complex statistics, a huge number of interconnections, and clever mixed digital-analog ionic circuitry.