Small rant: basically, the title. If, instead of answering every question, it said when it doesn’t know the answer, it would be trustworthy.

  • mozz@mbin.grits.dev · 2 days ago

    This wasn’t an intentional feature; they’re actually trying to add this as an ability through fine-tuning. It’s one area that highlights the difference between imitating the text it’s been seeing and actually understanding what it’s saying: since its training data overwhelmingly takes the form “(ask a question) (response to question)” rather than “(ask a question) (say you don’t know, the end)”, it tries to be a good imitator and do the same, coming up with plausible nonsense even when it doesn’t know the answer.

    • kromem@lemmy.world · 2 days ago

      Part of the problem is that fine-tuning is very shallow. A contributing factor to it claiming to be right when it isn’t is that the pretraining data is full of people online claiming to be right when they aren’t.

      • mozz@mbin.grits.dev · 2 days ago

        Yeah. It’s fairly weird to me that it’s so common to take the raw output of the LLM and send it straight to the user, and to try to use fine-tuning to make that raw output look the way you want.

        To me it seems obvious that something like having the LLM emit a little JSON block with a field for “how sure are you that this is actually true” would be more flexible, simpler, cheaper, and would work better.
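
        As a minimal sketch of what I mean (the schema, the prompt wording, and the ask_llm helper are all made up for illustration, not any particular API):

        ```python
        import json

        def ask_llm(prompt: str) -> str:
            """Hypothetical call into whatever LLM API you're using."""
            raise NotImplementedError

        SYSTEM = (
            "Answer the user's question as a single JSON object with two fields: "
            '"answer" (a string) and "confidence" (a number from 0 to 1 for how '
            "sure you are that the answer is actually true). Output only the JSON."
        )

        def ask_with_confidence(question: str, threshold: float = 0.5) -> str:
            raw = ask_llm(SYSTEM + "\n\n" + question)
            try:
                obj = json.loads(raw)
                answer, confidence = obj["answer"], float(obj["confidence"])
            except (ValueError, KeyError, TypeError):
                return "Sorry, I couldn't produce a well-formed answer."
            if confidence < threshold:
                # Surface the uncertainty instead of plausible nonsense.
                return "I'm not sure about this one."
            return answer
        ```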

        But what do I know

        • Cosmicomical@lemmy.world · 11 hours ago (edited)

          Good luck getting it to reply consistently with a JSON object.

          Edit: maybe I’m shit at prompting, but for me it’s almost impossible to even get it to just shut up and consistently reply yes or no to my questions.
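
          The only thing that sort of works for me is validating the reply and re-asking, something like this sketch (ask_llm here is a made-up stand-in for whatever API call you’re using):

          ```python
          def ask_llm(prompt: str) -> str:
              """Hypothetical call into whatever LLM API you're using."""
              raise NotImplementedError

          def ask_yes_no(question: str, max_tries: int = 3):
              """Re-ask until the model replies with exactly "yes" or "no"."""
              prompt = question + '\n\nReply with exactly one word: "yes" or "no".'
              for _ in range(max_tries):
                  reply = ask_llm(prompt).strip().lower().strip('."!')
                  if reply in ("yes", "no"):
                      return reply
              return None  # it never complied; the caller decides what to do
          ```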

    • FaceDeer@fedia.io · 2 days ago

      And sometimes that’s exactly what I want, too. I use LLMs like ChatGPT when brainstorming and fleshing out fictional scenarios for tabletop roleplaying games, for example, and in those situations coming up with plausible nonsense is precisely the job at hand. I wouldn’t want to go “ChatGPT, I need a description of what the interior of a wizard’s tower is like” and get the response “I don’t know what the interior of a wizard’s tower is like.”

      • mozz@mbin.grits.dev · 2 days ago

        At one point I messed around with a lore generator that would chop up sections of “The Dungeon Alphabet” and “Fire on the Velvet Horizon” along with some other stuff, feed random sections of them into the LLM for inspiration, and then ask it to lay out a little map, and it pretty reliably came up with all kinds of badass stuff.
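
        The core of that kind of generator is dead simple, something like this sketch (the paths, chunk sizes, and the ask_llm helper are all made up for illustration):

        ```python
        import random
        from pathlib import Path

        def ask_llm(prompt: str) -> str:
            """Hypothetical call into whatever LLM API you're using."""
            raise NotImplementedError

        def random_excerpts(paths, n_excerpts: int = 3, chunk_chars: int = 800):
            """Pull a few random chunks out of the source texts for inspiration."""
            excerpts = []
            for _ in range(n_excerpts):
                text = Path(random.choice(paths)).read_text(encoding="utf-8")
                start = random.randrange(max(1, len(text) - chunk_chars))
                excerpts.append(text[start:start + chunk_chars])
            return excerpts

        def generate_lore(paths) -> str:
            prompt = (
                "Here are some excerpts from RPG sourcebooks, for inspiration:\n\n"
                + "\n---\n".join(random_excerpts(paths))
                + "\n\nInvent a new location loosely inspired by these and lay "
                "out a little map of it in ASCII."
            )
            return ask_llm(prompt)
        ```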