Small rant: basically, the title. If, instead of answering every question, it said when it doesn’t know the answer, it would have been trustworthy.

  • mozz@mbin.grits.dev
    4 days ago

    Yeah. It’s fairly weird to me that it’s so common to take the raw output of the LLM and send it straight to the user, and to try to use fine-tuning to make that raw output look the way you want.

    To me it seems obvious that having the LLM emit a little JSON block with a field for “how sure are you that this is actually true” or something is more flexible, simpler, cheaper, and works better (see the sketch below).

    But what do I know
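
    A minimal sketch of the idea, assuming the OpenAI Python client with JSON mode; the model name and the “confidence” field are illustrative choices, not anything the API guarantees:

    ```python
    import json

    from openai import OpenAI  # pip install openai

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        # JSON mode constrains the reply to a single valid JSON object
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    'Reply as JSON with two fields: "answer" (string) and '
                    '"confidence" (a number from 0.0 to 1.0 estimating how '
                    "likely the answer is to actually be true)."
                ),
            },
            {"role": "user", "content": "What year did the Eiffel Tower open?"},
        ],
    )

    result = json.loads(resp.choices[0].message.content)
    if result["confidence"] < 0.5:
        print("I don't know.")  # surface the uncertainty instead of guessing
    else:
        print(result["answer"])
    ```

    The 0.5 threshold and the self-reported score are only as good as the model’s calibration, but the point stands: the application layer, not the raw text, decides what the user sees.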

    • Cosmicomical@lemmy.world
      3 days ago

      Good luck getting it to reply consistently with a JSON object

      Edit: maybe I’m shit at prompting, but for me it’s almost impossible to even get it to just shut up and consistently reply yes or no to my questions
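
      For what it’s worth, the constraint can be pushed into the decoder instead of the prompt. A sketch assuming the OpenAI Python client’s structured-output response_format, which restricts the reply to one of the enum values; the schema name and model are illustrative:

      ```python
      import json

      from openai import OpenAI  # pip install openai

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # illustrative model choice
          # The schema limits the reply to {"answer": "yes"} or {"answer": "no"}
          response_format={
              "type": "json_schema",
              "json_schema": {
                  "name": "yes_no",  # hypothetical schema name
                  "strict": True,
                  "schema": {
                      "type": "object",
                      "properties": {
                          "answer": {"type": "string", "enum": ["yes", "no"]}
                      },
                      "required": ["answer"],
                      "additionalProperties": False,
                  },
              },
          },
          messages=[
              {"role": "user", "content": "Is the Eiffel Tower in Paris? Answer yes or no."},
          ],
      )

      print(json.loads(resp.choices[0].message.content)["answer"])  # "yes" or "no"
      ```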