• prosp3kt@lemmy.dbzer0.com · 16 days ago

    I kindly ask for a replacement. To be honest, GPT-4o is great for writing, coding, translation, and some other things. It’s rough to use if you don’t have a good technical background. I get this, Snowden, but there isn’t a replacement…

    • nao@sh.itjust.works · 17 days ago

      Being able to read replies on twitter reminded me why it doesn’t matter if you can’t read them

      • doodle967@lemdro.id (OP) · 17 days ago

        Yes, the answers are garbage most of the time, but floods are sometimes necessary.

      • macniel@feddit.de · 17 days ago

        I find this behaviour super strange. Why can’t I see replies to a Twitter post on my smartphone, regardless of whether I use the desktop or mobile version of the page?

        • TheSlad@sh.itjust.works · 17 days ago

          Are you logged in? Elon took away lurkers’ ability to view replies or look at profiles a while ago. Without an account, all you can do is see tweets you were given a direct link to.

  • 乇ㄥ乇¢ㄒ尺ㄖ@infosec.pub · 17 days ago

    I never did. I think we need to start seeing the bigger picture: there’s a global agenda at play here with big tech, various EU governments, and the WEF. It’s like their movement now, and they’re all on the same page… The other day I was holding a coworker’s Samsung phone and saw a basically useless pre-installed app; its name was “Global causes”, I think…

  • TheOubliette@lemmy.ml · 17 days ago

    Any online service into which you enter information has the capability to save that information for its own purposes. This includes all the people entering personal or identifying or really any information into “AI” products.

    Given that it’s not even particularly useful, I recommend just not using “AI” if you’re not sure how to protect yourself.

  • archchan@lemmy.ml · 16 days ago

    Amazon had also appointed a former NSA director. Actually, it was Snowden’s director.

  • jarfil@lemmy.world · 17 days ago

    Snowden is wrong, though. There are two reasons:

    1. Sell ChatGPT to @NSAGov so they can scan messages better.
    2. Make @NSAGov dependent on whatever ChatGPT tells them to do.

    The AI that ends up enslaving humanity will start by convincing the people in charge of turning it off that turning it off would be a really bad idea.

    • Land_Strider@lemmy.world · 16 days ago

      Is it hard to see that fleeing to Russia had the core benefit of almost certainly not being extradited to the U.S., or at least a lower probability of it than in any other country?

    • Possibly linux@lemmy.zip · 16 days ago

      Yes, but “pledging allegiance to Putin” is required to become a citizen. He was no longer safe anywhere else.

      He can’t go anywhere the US has influence over, which is pretty much everywhere except US enemies.

        • ArcaneSlime@lemmy.dbzer0.com · 16 days ago

          Well, actually, he was on his way to South America, IIRC, when the US revoked his passport, stranding him in Russia. So, technically, those were the American government’s choices.

        • Possibly linux@lemmy.zip · 16 days ago

          True, but that is part of becoming a whistleblower. Someone had to leak proof of mass surveillance so that we could do something about it.

      • Tja@programming.dev · 17 days ago

        My dude, IKEA has an in-house AI model. Every insurance company has one. Subway (the sandwich shop) has one.

        Saying that the NSA “supposedly” has an AI model that can search through data is like saying they “maybe” have a coffee machine.

  • earmuff@lemmy.dbzer0.com · 17 days ago

    So what alternatives to ChatGPT exist? I’m currently a premium ChatGPT user and would like to switch to another service. I don’t care that much about privacy, but I will obviously not use OpenAI products anymore.

      • earmuff@lemmy.dbzer0.com · 17 days ago

        I understand the science behind these LLMs, and yes, for my use cases it has been very useful. I use it to cope with emotional difficulties: depression, anxiety, loss. I know it doesn’t help me the same way a professional would, but it helps me get another perspective on situations, which then helps me understand myself and others better.

        • TheOubliette@lemmy.ml · 17 days ago

          Oh that’s totally valid. Sometimes we just need to talk and receive the validation we deserve. I’m sorry we don’t have a society where you have people you can talk to like this instead.

          I haven’t personally used any of the offline open source models but if I were you that’s where I’d start looking. If they can be run inside a virtual machine, you can even use a firewall to ensure it never leaks info.

          • Ilandar@aussie.zone · 17 days ago

            Totally valid? Getting mental health advice from an AI chatbot is one of the least valid use cases. Speak to a real human @earmuff@lemmy.dbzer0.com, preferably someone close to you or who is professionally trained to assist with your condition. There are billions of English speakers in the world, so don’t pretend we live in a society where there’s “no one to talk to”.

            • TheOubliette@lemmy.ml · 17 days ago

              They have already stated that they think they should be speaking to someone but are clearly having a hard time. If a chatbot is helping them right now I’m not going to lecture them about “pretending”. I recommend the approach of a polite and empathetic nudging when someone is or may be in crisis.

              • Ilandar@aussie.zone · 17 days ago

                You literally just encouraged them to continue using a chatbot for mental health support. You didn’t nudge them anywhere.

                • TheOubliette@lemmy.ml · 17 days ago

                  I was going to let them reply first. You are being rude and dismissive of them, however. Please show your fellow humans a bit more empathy.

            • earmuff@lemmy.dbzer0.com · 17 days ago (edited)

              I think you need to chill. Please don’t be triggered by me having an option that makes me feel better at the end of the day.

              Instead of assuming, you could also just ask. I am using ChatGPT as a complement to a mental health professional. ChatGPT is there 24/7 and helps me with difficult situations immediately; the mental health professional is then there to work through the problem therapeutically.

              Both help me.

      • uriel238@lemmy.blahaj.zone · 17 days ago

        LLMs are less magical than upper management wants them to be, which is to say they won’t replace the creative staff that makes art and copy and movie scripts, but they are useful as a tool for those creatives to do their thing. The scary thing was not that LLMs can take tons of examples and create a Simpsons version of Cortana, but that our business leaders are super eager to replace their work staff with the slightest promise of automation.

        But yes, LLMs do figure in advances in science and engineering, including research on treatments for Alzheimer’s and diabetes. So it’s not just a parlor trick; rather, it’s one with useful applications different from those originally sold to us.

        The power problem remains, though: LLMs consume a lot of electricity.

        • TheOubliette@lemmy.ml · 17 days ago

          I’m unaware of any substantial research on Alzheimer’s or diabetes that has been done using LLMs. As generative models, they’re basically just souped-up Markov chains. I think the best you could hope for is something like a meta-study, and probably a bit worse than the usual kind.
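          To make the “souped-up Markov chains” framing concrete, here’s a toy bigram chain in Python. This is purely illustrative (the corpus and function names are made up for this sketch); real LLMs condition on far longer context through learned weights rather than raw counts, but the generate-the-next-token loop is the same shape.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Build a bigram "language model": map each word to the list of
# words observed to follow it (duplicates encode probability).
chain = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    chain[word].append(nxt)

def generate(start, length, seed=0):
    """Sample a short sequence by repeatedly picking an observed successor."""
    rng = random.Random(seed)  # seeded for reproducibility
    words = [start]
    for _ in range(length - 1):
        options = chain.get(words[-1])
        if not options:  # dead end: this word was never followed by anything
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

          Frequent continuations dominate the sampled output, which is exactly the point made below: whatever occurs most in the training data gets the highest probability.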

          • earmuff@lemmy.dbzer0.com · 17 days ago

            I agree. The things that occur most often in the training data will have the highest weights/probabilities in the Markov chain, so it is useless for finding the one tiny relation that humans would not see.

  • SpaceCowboy@lemmy.ca · 17 days ago

    Hmm… seems Vladimir Putin doesn’t like ChatGPT enough to have his sock puppet write some negative comments about it.