• Smorty [she/her]@lemmy.blahaj.zone · 21 hours ago

      ?

      we didn’t see him do the hand move until after many models had already been trained, so they don’t have that background.

      LLMs can do cool stuff, it’s just being used in awful and boring ways by BigEvilCo™️.

        • Smorty [she/her]@lemmy.blahaj.zone · 17 minutes ago

          FTFY

          what does that mean? i said that we didn’t see him do the move before many models finished training. so these models literally cannot know that this happened.

          • Norah (pup/it/she)@lemmy.blahaj.zone · 13 minutes ago

            Are you incapable of using a search engine? It’s the top result. “Fixed That For You”. What I was calling out is you calling it a “hand movement”, which is a right-wing dogwhistle, and one you’ve repeated again in your follow-up comment to me. It was very clearly a nazi salute. He did it twice. Call it what it was.

      • T00l_shed@lemmy.world · 19 hours ago

        It’s really bad for the environment, and it’s also trained on material it shouldn’t be, such as copyrighted work.

      • _stranger_@lemmy.world · 21 hours ago

        Consider this:

        The LLM is a giant dark box of logic no one really understands.

        This response is obvious and blatantly censored in some way. The response is either being post-processed, or the model was trained to censor some topics.

        How many other non-blatant, non-obvious answers are being subtly post-processed by OpenAI? Subtly censored, or trained, to benefit one (paying?) party over another.

        The more people start to trust AIs, the less trustworthy they become.

        • RandomVideos@programming.dev · 18 hours ago

          I think it’s made to not give any political answers. If you ask it for a yes-or-no answer to “is communism better than capitalism?”, it will say “it depends”.

        • Smorty [she/her]@lemmy.blahaj.zone · 21 hours ago

          that’s why u gotta not use some company’s offering!

          yes, centralized AI bad, no shid.

          PLENTY of good uncensored models on huggingface.

          recently Dolphin 3 looks interesting.

            • MeatsOfRage@lemmy.world · 20 hours ago

            Exactly. This is the result of human interference; AI doesn’t inherently have this level of censorship built in, it has to be censored after the fact. Imagine a Lemmy admin went nuts and started censoring everything on their instance, and your response was that the whole fediverse is bad, despite you having the ability to host it yourself without that admin’s control (like AI).

            AI definitely has issues, but don’t make it a scapegoat when we should be calling out the people who are actively working in nefarious ways.

        • Smorty [she/her]@lemmy.blahaj.zone · 3 hours ago

          fair, if u wanna see it that way, ai is bad… just like many other technologies which are being used to do bad stuff.

          yes, ai used for bad is bad. yes, guns used for bad are bad. yes, computers used for bad are bad.

          guns are specifically made to hurt and kill people, so that’s kinda a different thing, but ai is not like that. it was not made to kill or hurt people. currently, it is made to “assist the user”. and if the owners of the LLMs (large language models) are pro-elon, they might train in the idea that he is okay actually.

          but we can do that too! many people finetune open models to respond in “uncensored” ways, so that there is no gate between what they can and can’t say.