• Squizzy@lemmy.world
    6 months ago

For the first time in years I thought about buying a new phone: the S23 Ultra. The previous versions had been improving significantly, but the price was a factor. Then I got a promotion and figured I'd splurge on the S24 Ultra, but it was all about AI, so I just stayed where I am… my current phone does everything anyway.

  • OfficerBribe@lemm.ee
    6 months ago

They just don’t get it. Once everyone is using an AI toilet and an AI toothbrush, they’ll sing a different tune.

  • paddirn@lemmy.world
    6 months ago

It seems more like a niche thing that’s useful for generating rough drafts or lists of ideas, but the results are hardly usable on their own and still require additional work to finesse them. In a lot of ways, it reminds me of my days working on a production line with welding robots. Supposedly those robots could do hundreds or thousands of parts without making a mistake… but that was never the case, and people always needed to double-check the robots’ work (different tech, not “AI”, just programmed movements, but a similar-ish idea). By default, I just don’t trust anything branded as “AI”. It still requires a human to look over what it’s done; it’s just doing a monotonous task faster than a person could, and you still can’t trust what it gives you.

  • jubilationtcornpone@sh.itjust.works
    6 months ago

    I think there is potential for using AI as a knowledge base. If it saves me hours of having to scour the internet for answers on how to do certain things, I could see a lot of value in that.

The problem is that generative AI can’t determine fact from fiction, even though it has enough information to do so. For instance, I’ll ask ChatGPT how to do something and it will very confidently spit out a wrong answer nine times out of ten. If I tell it that the approach didn’t work, it will respond with “Sorry about that. You can’t do [x] with [y] because [z].” The reasons are often correct, but ChatGPT isn’t “intelligent” enough to ascertain, from data it already has, that an approach will fail before suggesting it.

    It will then proceed to suggest a variation of the same failed approach several more times. Every once in a while it will eventually pivot towards a workable suggestion.

So basically, this generation of AI is just Cliff Clavin from Cheers: able to string together coherent sentences of mostly bullshit.

    • Persen@lemmy.world
      6 months ago

AI is just an excuse to lay off your employees in favor of an objectively less reliable computer program, one that somehow statistically beats us at logic.

      • markon@lemmy.world
        6 months ago

I’ve used LLMs a lot over the past couple of years. Pro tip: use them a lot and learn the models. They look much more intelligent as you, the user, get better. Obviously if you prompt “Write me a shell script to calculate the meaning of life, make my coffee, and scratch my nuts before 9 AM” it will be a grave disappointment.

If you first design a ball-fondling/scratching robot, and use multiple instances of LLMs to help you plan it out, then you may be impressed.

I think one of the biggest problems is that most people interacting with LLMs forget they are running on computers, and that they are digital and not like us. You can’t make the assumptions you can with humans. Even with humans, making those assumptions usually just gets you something you didn’t want, because you weren’t clear enough. We are horrible at giving instructions, and this is something I hope AI will help us learn to do better, because bad instructions or incomplete information can’t lead to anything real. Computers are logic machines. If you tell a computer to go ride a bike, at best it’ll do all the work to embody itself in a robot, buy a bike, and ride it. But wait: you don’t even know it did it, because you never told it to record the ride…

A very few of us are pretty good at giving computers clear instructions some of the time. I’ve also found that just forcing models to reason in context is powerful. You have to know to tell it something like “use a drill-down, tree-style approach to problem solving; use reflection and discussion to explore and find the optimal solution.” It might still give you bad results, which is why you have to experiment. It’s a lot of fun if you let your thoughts run wild, and right now it takes a lot of creative thinking to get the most out of these models. They should all be 110% open source and free for all. BTW, Gemini 1.5, Claude, and Llama 3.1 are all great, and Llama you can run locally or on a rented GPU VM. OpenAI I’m on the fence about, but given who’s involved over there, I wouldn’t say I trust them, especially since they’re pursuing regulatory capture.
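The “force models to reason in context” tip above can be sketched as a reusable prompt scaffold. Everything here is a hypothetical illustration — the scaffold wording, the `build_prompt` helper, and the example task/paths are made up, not any vendor’s API — but it shows the idea of wrapping a task in explicit reasoning instructions instead of assuming the model will infer what you want:

```python
# Hypothetical sketch: a prompt scaffold that front-loads explicit
# reasoning instructions. The wording and helper are illustrative only;
# feed the result to whatever model you actually run (Llama 3.1 locally,
# Claude, Gemini 1.5, ...).

REASONING_SCAFFOLD = (
    "Use a drill-down, tree-style approach to problem solving: break the "
    "problem into sub-problems, explore each branch, then use reflection "
    "and discussion to pick the best answer before responding."
)

def build_prompt(task: str, context: str = "") -> str:
    """Wrap a task in the reasoning scaffold, with optional context."""
    parts = [REASONING_SCAFFOLD]
    if context:
        parts.append(f"Context:\n{context}")
    parts.append(f"Task:\n{task}")
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Write a shell script that archives logs older than 30 days.",
    context="Logs live in /var/log/myapp (a made-up path); GNU find is available.",
)
print(prompt)
```

The point isn’t this exact wording — it’s that the scaffold is explicit and repeatable, so you can experiment with different reasoning instructions and compare results instead of re-typing them ad hoc.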

        • markon@lemmy.world
          6 months ago

Asking the chat models to have a self-discussion and to use/simulate metacognition really seems to help. Play around with it. Often I’m deep in a chat and I learn from its mistakes, and it kind of learns from my mistakes and feedback. It’s all about working with it, not against it. At this point LLMs are just feed-forward neural networks trained on supercomputer clusters. We really don’t know what they’re fully capable of, because it’s so hard to quantify, especially when you don’t know exactly what has been learned.

Q-learning in language is also an interesting methodology I’ve been playing with. With an image generator, for example, you can just add “(Q-learning quality)” to the prompt and you may get more interesting, higher-quality results, which is itself very interesting to me.

  • SocialMediaRefugee@lemmy.world
    6 months ago

I hate the feeling that, when it comes to support, they keep dumping real humans who can communicate and respond to issues outside of a rigid framework. AI is also only as good as its data and design. It feels like someone built a self-driving car, stuck it on a freshly paved and painted highway, and decided it was good to go. Then you take it on an old rural road and end up hitting a tree.

  • cordlesslamp@lemmy.today
    6 months ago

    To be honest, I lost all interest in the new AMD CPUs because they fucking named the thing “AI” (with zero real-world application).

    I’m in the market for a new PC next month and I’m gonna get the 7800X3D for my VR gaming needs.

  • NotMyOldRedditName@lemmy.world
    6 months ago

I’m actively turned off because they suck up my data to use it.

I love the idea of local-only AI and would use those products; I do play with local LLM and image-generation tools.

  • Kronusdark@lemmy.world
    edit-2
    6 months ago

I find the tech interesting, but the rush to commercialize it was a bad idea. It’s not ready yet; it’s total uncanny valley.

  • SocialMediaRefugee@lemmy.world
    6 months ago

Listen up, you kids: this old fart saw this same crap in the 70s when LCDs became common and LCD clocks became the norm. Everyone felt that EVERYTHING needed an LCD clock stuck in it: lamps, radios, blocks of cheese, etc. A similar thing happened in the internet boom/bust of the late 90s, when everyone needed a website, even gas stations. Now AI is the media and business darling, so they’re trying to stick AI in everything, partly to justify pissing away so much money on it. I can’t even do a simple search on FB because it tries to force me to use the damn Meta AI instead.

I occasionally use ChatGPT to find info on error-code handling and coding snippets, but I feel like I’m in some sort of “can you phrase it exactly right?” contest. Anything with even the slightest vagueness returns useless garbage.

  • yamanii@lemmy.world
    6 months ago

I really fucking hated the Android update where holding the power button summons Gemini before actually giving you the shutdown menu.

  • Chaotic Entropy@feddit.uk
    6 months ago

    Who knew that new technologies that are great for businesses’ bottom lines wouldn’t also be great for consumer satisfaction.

    Say it ain’t so.

  • mm_maybe@sh.itjust.works
    6 months ago

    <greentext>

    Be me

Early adopter of LLMs ever since a random tryout of Replika blew my mind and I set out to figure out what the hell was generating its responses

    Learn to fine-tune GPT-2 models and have a blast running 30+ subreddit parody bots on r/SubSimGPT2Interactive, including some that generate weird surreal imagery from post titles using VQGAN+CLIP

    Have nagging concerns about the industry that produced these toys, start following Timnit Gebru

Begin to sense that something is going wrong when DALL-E 2 comes out, clearly targeted at eliminating creative jobs in the bland corporate-illustration market. Later, become more disturbed by Stable Diffusion making this, and many much worse things, possible at massive scale

    Try to do something about it by developing one of the first “AI Art” detection tools, intended for use by moderators of subreddits where such content is unwelcome. Get all of my accounts banned from Reddit immediately thereafter

Am dismayed by the viral release of ChatGPT, essentially the same thing as DALL-E 2 but for text

Grudgingly attempt to see what the fuss is about and install GitHub Copilot in VS Code. Waste hours of my time debugging code suggestions that turn out to be wrong in subtle, hard-to-spot ways. Switch to using Bing Copilot for “how-to” questions, because at least it cites sources and lets me click through to the Stack Exchange post where a human provided the explanation I need. Admit the thing can be moderately useful and not just a fun dadaist shitposting machine. Have major FOMO about never capitalizing on my early-adopter status in any money-making way

Get pissed off by Microsoft’s plans to shove Copilot into every nook and cranny of Windows and Office; casually turn on the Olympics and get bombarded by ads for Gemini and whatever the fuck it is Meta is selling

Start looking for an alternative to Edge, despite it being the best-performing web browser by many metrics, and despite my history with “AI” and OK-ish experience with Copilot. Horrified to find that Mozilla and Brave are doing the exact same thing

    Install Vivaldi, then realize that the Internet it provides access to is dead and enshittified anyway

    Daydream about never touching a computer again despite my livelihood depending on it

    </greentext>

  • Lightor@lemmy.world
    6 months ago

The irony is companies are being forced to implement it. Like our board has told us we must have “AI in our product”. It’s literally a solution looking for a problem that doesn’t exist.