You know how Google’s new AI Overviews feature is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to make sure the cheese won’t slide off (pssst… please don’t do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of AI large language models (LLMs), which are what drive AI Overviews, and this feature “is still an unsolved problem.”

  • joe_archer@lemmy.world

    It is probably the most telling demonstration of the terrible state of our current society that one of the largest corporations on earth, which got where it is today by providing accurate information, is now happy to knowingly provide incorrect, and even dangerous, information in its own name, and not give a flying fuck about it.

    • Hackworth@lemmy.world

      Wikipedia got where it is today by providing accurate information. Google results have always been full of inaccurate information. Sorting through the links for respectable sources just became second nature, then we learned to scroll past ads to start sorting through links. The real issue with misinformation from an AI is that people treat it like it should be some infallible Oracle - a point of view only half-discouraged by marketing with a few warnings about hallucinations. LLMs are amazing, they’re just not infallible. Just like you’d check a Wikipedia source if it seemed suspect, you shouldn’t trust LLM outputs uncritically. /shrug

  • givesomefucks@lemmy.world

    They keep saying it’s impossible, when the truth is it’s just expensive.

    That’s why they won’t do it.

    You could train the AI only on good sources (scientific literature, not social media) and then pay experts to talk with it for long periods of time, giving feedback directly to the model (rough sketch of that pipeline below).

    Essentially, if you want a smart AI you need to send it to college, not drop it off at the mall unsupervised for 22 years and hope for the best when you pick it back up.
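
    Something like this, hypothetically (every name, source type, and threshold below is invented for illustration): curate the corpus down to vetted sources, then turn paid expert feedback into preference data of the sort RLHF-style fine-tuning consumes.

    ```python
    # Hypothetical sketch: curated data plus expert feedback as preference pairs.
    from dataclasses import dataclass


    @dataclass
    class Document:
        text: str
        source_type: str  # e.g. "peer_reviewed_journal", "social_media"


    # Assumption: an allowlist of vetted source types (the "good sources").
    TRUSTED_SOURCES = {"peer_reviewed_journal", "textbook"}


    def curate(corpus: list[Document]) -> list[Document]:
        """Keep only documents from vetted sources before any training happens."""
        return [doc for doc in corpus if doc.source_type in TRUSTED_SOURCES]


    @dataclass
    class ExpertFeedback:
        prompt: str
        model_answer: str
        expert_answer: str
        rating: int  # 1-5, assigned by a paid domain expert


    def to_preference_pairs(feedback: list[ExpertFeedback]) -> list[tuple[str, str, str]]:
        """Turn low-rated answers into (prompt, preferred, rejected) triples,
        the shape that preference-based fine-tuning typically consumes."""
        return [
            (fb.prompt, fb.expert_answer, fb.model_answer)
            for fb in feedback
            if fb.rating <= 2  # the expert judged the model's answer poor
        ]
    ```

    None of this is how Google actually does it; the point is just that “good sources plus expert feedback” maps onto curation and preference data, and both get very expensive at scale.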

    • redfellow@sopuli.xyz

      The truth is, this is the perfect type of comment that makes an LLM hallucinate: sounds right, very confident, but completely full of bullshit. You can’t just throw money at every problem and get it solved fast. This is an inherent flaw that can only be solved by something other than an LLM and prompt voodoo.

      They will always spout nonsense. No way around it, for now. A probabilistic neural network has zero, will always have zero, and cannot have anything but zero concept of fact - only a statistically probable result for a given prompt (toy example below).

      It’s a politician.
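
      To make “statistically probable result for a given prompt” concrete, here’s a toy example (the vocabulary and numbers are made up; a real model has tens of thousands of tokens and billions of parameters, but the mechanism is the same: score, softmax, sample):

      ```python
      # Toy illustration: a language model turns a prompt into a probability
      # distribution over next tokens and samples from it. Nothing checks facts.
      import numpy as np

      rng = np.random.default_rng(0)

      # Made-up vocabulary and made-up scores the "model" assigns after the
      # prompt "The capital of Australia is"; purely for illustration.
      vocab = ["Sydney", "Canberra", "Melbourne", "Vienna"]
      logits = np.array([2.1, 1.9, 0.7, -3.0])

      # Softmax: turn scores into probabilities.
      probs = np.exp(logits - logits.max())
      probs /= probs.sum()

      # Sampling picks tokens in proportion to probability, not correctness.
      # Here the wrong answer ("Sydney") is slightly *more* probable than the
      # right one ("Canberra"), so it will often be stated confidently.
      for _ in range(5):
          print(rng.choice(vocab, p=probs))
      ```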

    • Zarxrax@lemmy.world

      In addition to the other comment, I’ll add that just because you train the AI on good and correct sources of information doesn’t necessarily mean it will give you a correct answer all the time. It’s more likely, but not guaranteed.

      • RidcullyTheBrown@lemmy.world

        Yes, thank you! I think this should be written in capitals somewhere so that people understand it quicker. The answers are not right or wrong on purpose; LLMs don’t have any way of distinguishing between the two.

    • Excrubulent@slrpnk.net

      No, he’s right that it’s unsolved. Humans aren’t great at reliably telling truth from fiction either. If you’ve ever been in a highly active comment section, you’ll notice certain “hallucinations” developing, usually because someone came along sounding confident and everyone just believed them.

      We don’t even know how to get actual people to do this, so how does a fancy Markov chain do it? It can’t. I don’t think you solve this problem without AGI, and that’s something AI evangelists don’t want to think about, because then the conversation changes significantly. They’re in this for the hype bubble, not the ethical implications.

      • dustyData@lemmy.world

        We do know. It’s called critical thinking education. This is why we send people to college. Of course there are highly educated morons, but we are hedging our bets. This is why the dismantling or co-opting of education is the first thing every single authoritarian does: it makes it easier to manipulate the masses.

  • masquenox@lemmy.world

    Since when has feeding us misinformation been a problem for capitalist parasites like Pichai?

    Misinformation is literally the first line of defense for them.

  • Baggie@lemmy.zip

    God I’m fucking sick of this loss-leading speculative investment bullshit. It’s hit some bizarre zenith that has infected everybody in the tech world, yet nobody has any actual intention of being practical about making money or about the functionality of the product. I feel like we should just can the whole damned thing and start again.

    • steventrouble@programming.dev

      Legitimately, yes. I say this as an ML-adjacent engineer. Neural networks need to be rewritten from the ground up with support for confidence intervals.

      The SAT added a “guessing penalty” to stop people from answering questions they didn’t know the answer to. That’s exactly what we need to do when training ML models.
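
      As a rough illustration of what that could look like as a loss term (a hypothetical sketch of the idea, not an established recipe or anything any production model actually uses): give the model an extra “abstain” output, make confident wrong answers pay full cross-entropy, and charge only a small flat cost for abstaining, mirroring the SAT’s old scoring.

      ```python
      # Hypothetical sketch: an SAT-style guessing penalty as a training loss.
      import torch
      import torch.nn.functional as F


      def guessing_penalty_loss(logits: torch.Tensor,
                                targets: torch.Tensor,
                                abstain_index: int = -1,
                                abstain_cost: float = 0.25) -> torch.Tensor:
          """
          logits:  (batch, num_classes), where one column is an "I don't know" option.
          targets: (batch,) indices of the correct class (never the abstain column).
          Confident wrong answers pay full cross-entropy; probability mass put on
          the abstain column is charged only a small fixed cost instead.
          """
          log_probs = F.log_softmax(logits, dim=-1)
          p_abstain = log_probs[:, abstain_index].exp()          # P(abstain) per example
          ce = F.nll_loss(log_probs, targets, reduction="none")  # -log P(correct)
          # The more the model abstains, the less cross-entropy it pays,
          # but it always pays the flat abstain_cost for doing so.
          return ((1.0 - p_abstain) * ce + p_abstain * abstain_cost).mean()


      # Example: 3 real classes plus 1 abstain column (the last one).
      logits = torch.tensor([[2.0, 0.1, -1.0, 0.0],   # confident and correct
                             [-1.0, 2.0, 0.1, 0.0],   # confident and wrong
                             [0.0, 0.0, 0.0, 3.0]])   # mostly abstains
      targets = torch.tensor([0, 0, 0])
      print(guessing_penalty_loss(logits, targets))
      ```

      Whether something like this scales up to next-token prediction in a full LLM is an open question, but that’s the shape of the idea.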

    • systemglitch@lemmy.world

      Huh. That made me stop and realize how long I’ve been around. Wikipedia still feels like a new addition to society to me, even though I’ve been using it for around 20 years now.

      And what you said is something I’ve cautioned my daughter about; I first said it to her about ten years ago.

  • Mad_Punda.de@feddit.de

    these hallucinations are an “inherent feature” of AI large language models (LLMs), which are what drive AI Overviews, and this feature “is still an unsolved problem”.

    Then what made you think it’s a good idea to include that in your product now?!