• millie@beehaw.org
    29 days ago

    I think when people think of the danger of AI, they think of something like Skynet or the Matrix. It either hijacks technology or builds it itself and destroys everything.

    But what seems much more likely, given what we’ve seen already, is corporations pushing AI that they know isn’t really capable of what they say it is and everyone going along with it because of money and technological ignorance.

    You can already see the warning signs. Cars that run pedestrians over, search engines that tell people to eat glue, customer-support AIs that have no idea what they’re talking about, endless fake reviews and articles. It’s already hurt people, but so far only on a small scale.

    But the profitability of pushing AI early, especially if you’re just pumping and dumping a company for quarterly profits, is massive. The more that gets normalized, the greater the chance one of them gets put in charge of something important, or becomes a barrier to something important.

    That’s what’s scary about it. It isn’t AI itself, it’s AI as a vector for corporate recklessness.

    • coffeetest@beehaw.org
      28 days ago

      Calling LLMs “AI” is one of the most genius marketing moves I have ever seen. It’s also the reason for the problems you mention.

      I am guessing that a lot of people are just thinking, “Well AI is just not that smart… yet! It will learn more and get smarter and then, ah ha! Skynet!” It is a fundamental misunderstanding of what LLMs are doing. It may be a partial emulation of intelligence. Like humans, it uses its prior memory and experiences (data) to guess what an answer to a new question would look like. But unlike human intelligence, it doesn’t have any idea what it is saying actually means.
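
      To make that point concrete: the “guessing from prior data” idea can be shown with a deliberately tiny sketch (my own toy example, not from the comment above). A real LLM predicts the next token with a neural network over billions of parameters, but even a bigram counter captures the shape of the mechanism, and why there is no understanding involved:

      ```python
      from collections import Counter, defaultdict

      # Toy corpus standing in for "prior memory and experiences (data)".
      corpus = "the cat sat on the mat the cat ate the fish".split()

      # Count which word tends to follow which -- pure statistics.
      next_words = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          next_words[prev][nxt] += 1

      def guess_next(word):
          # Return the most frequent follower. There is no notion here of
          # what "cat" or "mat" means -- only of what usually comes next.
          return next_words[word].most_common(1)[0][0]

      print(guess_next("the"))  # -> "cat" ("cat" follows "the" twice, others once)
      ```

      Scale the counts up to a transformer trained on the internet and you get fluent, often correct-looking text, but the mechanism is still “what plausibly comes next,” not “what is true.”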