• hakunawazo@lemmy.world
    link
    fedilink
    arrow-up
    0
    ·
    5 months ago

We need to strike back with an AI customer bot that alerts us once we can finally talk or chat with a human again, after all the automated solutions have been exhausted.

  • Sam_Bass@lemmy.world

AI is what we make it. That being said, there has not been proper filtering of input for AI's learning pool. The shotgun approach may be easiest and fastest, but it is not bestest.

    • FatCrab@lemmy.one

The creation, curation, and maintenance of training data is a big industry in and of itself that has been around for years. Likewise, feature engineering is an entire sub-discipline of data science and engineering unto itself. I think you might be making the mistake of assuming ChatGPT = AI.

    • brbposting@sh.itjust.works

      The button to take a photo of a plant/animal?

      “Observe”

      Hold up gang, I need to observe this species right quick

      Instant scientist cred

  • marcos@lemmy.world

    AI search is great.

The more “searchey” and less “generativey”, the better. Which goes against the direction every provider is going, but it’s still great.

    • Pantsofmagic@lemmy.world

I like using Perplexity because the ads aren’t in your face and it’s pretty good at providing concise answers… And it doesn’t fuck with my news feed every time I look up some random thing.

  • bloodfart@lemmy.ml

Do not use AI for plant identification if it actually matters what the plant is.

    Just so ppl see this:

    DO NOT EVER USE AI FOR PLANT IDENTIFICATION IN CASES WHERE THERE ARE CONSEQUENCES TO FAILURE.

    For walking along and seeing what something is, that’s fine. No big deal if it tells you something’s a turkey oak when it’s actually a pin oak.

If you’re gonna eat it, or think it might be toxic or poisonous to you, or want to find out what your pet or livestock ate, or could in any way suffer consequences from misidentification: do not rely on AI.

    • merc@sh.itjust.works

      You could say the same about a plant identification book.

It’s not so much that AI for plant identification is bad, it’s that the higher the stakes, the more confident you need to be. Personally, I’m not going foraging for mushrooms with either an AI-based plant app or a book. Destroying angel mushrooms look pretty similar to common edible mushrooms, and the key differences can disappear depending on the circumstances. If you accidentally eat a destroying angel mushroom, the symptoms might not appear for 5 to 24 hours, and by then it’s too late. Your liver and kidneys are already destroyed.

      But, I think you could design an app to be at least as good as a book. I don’t know if normal apps do this, but if I made a plant identification app, I’d have the app identify the plant, and then provide a checklist for the user to use to confirm it for themselves. If you did that, it would be just like having a friend just suggest checking out a certain page in a plant identification book.
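That identify-then-confirm flow could be sketched roughly like this (a hypothetical example; the species names and checklist features are made up for illustration, not taken from any real app):

```python
# Hypothetical sketch of "the app identifies the plant, then the user
# confirms via a checklist" - not any real app's code.

FEATURE_CHECKLISTS = {
    "pin oak": [
        "Leaves have 5-7 deeply cut, bristle-tipped lobes",
        "Lower branches angle distinctly downward",
    ],
    "turkey oak": [
        "Leaves have 3-5 shallow lobes",
        "Acorn cups have spreading, recurved scales",
    ],
}

def confirm_identification(candidate: str, answers: list[bool]) -> str:
    """Accept the model's candidate only if the user confirms every feature."""
    checklist = FEATURE_CHECKLISTS[candidate]
    if len(answers) != len(checklist):
        raise ValueError("one yes/no answer per checklist item required")
    if all(answers):
        return f"Consistent with {candidate} (still verify before any risky use)"
    return "Mismatch: do not rely on this identification"
```

The point of the design is that the model only suggests the page to turn to; the human still does the actual verification against the physical plant.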

      • Classy@sh.itjust.works

If you’re using the book correctly, you couldn’t say the same thing. Using a flora book to identify a plant requires learning about morphology, and having that alone already puts you significantly closer to accurately identifying most things. If a dichotomous key tells you that the terminating leaflet is sessile vs. not sessile, and you’re actually looking for that on the physical plant, your quality of observation is so much better than just photographing a plant and throwing it up on iNaturalist.

        • Bytemeister@lemmy.world

          Not to mention, the book is probably going to list look-alike plants, and mention if they are toxic. AI is just going to go “It’s this thing”.

        • Iceman@lemmy.world

You can easily say the same thing. Use the image identification to get a name for the plant, then Google it to read about checks like whether the leaflet is sessile or not.

      • bloodfart@lemmy.ml

The difference is this: a reference guide intended for plant identification, written and edited by experts in the field to help a person understand the plants around them, is expressly and intentionally created with that goal in mind, and at multiple points had knowledgeable, skilled people looking over its answers. The AI is complex Mad Libs.

        I get that it’s bad to gamble with your life when the stakes are high, but we’re talking about the difference between putting it on red and putting it on 36.

        One has a much, much higher potential for catastrophe.

      • medgremlin@midwest.social

The problem with AI is that it’s garbage in, garbage out. There are some AI-generated books on Amazon now for mushroom identification, and they contain some pretty serious errors. If you find a book written by an actual mycologist that has been well curated and referenced, that’s going to be an actually reliable resource.

        • Sconrad122@lemmy.world

Are you assuming that AI in this case is some form of generative AI? I would not ask ChatGPT if a mushroom is poisonous. But I would consider using a convolutional neural network (CNN)-based plant identification app. At that point you are depending on the quality of the CNN’s training data set and the rigor put into validating the trained model, which is at least somewhat comparable to depending on a plant identification book being sufficiently accurate and thorough. Depending on the accuracy of a story that genAI makes up based on Reddit threads is a much less advisable venture.
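One simple form of the rigor being described is making a validated classifier abstain when it isn’t confident, instead of always naming a species. A minimal sketch of that idea (illustrative only; the labels and the 0.9 threshold are made-up examples, and a real system would tune the threshold on a held-out validation set):

```python
# Sketch: a classifier wrapper that refuses to answer below a
# confidence threshold, rather than guessing.
import math

def softmax(logits: list[float]) -> list[float]:
    """Convert raw model scores into probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits: list[float], labels: list[str], threshold: float = 0.9) -> str:
    """Return the top label only if its probability clears the threshold."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return "uncertain: consult an expert"
    return labels[best]
```

For high-stakes uses like edibility, the abstain branch is the whole point: a model that says “I don’t know” is far safer than one that always produces a name.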

          • medgremlin@midwest.social

The books on Amazon are vomited out of ChatGPT. If there’s a university-curated and trained image recognition AI, that’s more likely to be reliable, provided the input has been properly vetted and sanitized.

    • masterspace@lemmy.ca

Like, I get what you’re saying, but this is also hysterical to the point that people are going to ignore you.

Don’t ever use AI if there are consequences? Like, I can’t use an AI image search to get a rough idea of what a plant might be, as a jumping-off point for more thorough research? Don’t rely solely on AI, sure, but it can be part of the process.

    • Fizz@lemmy.nz

Forgo identification and eat the plant based on vibes, like our ancestors did.

  • blady_blah@lemmy.world

I am totally looking forward to AI customer support. The current model of a person reading a scripted response is painful and fucking awful and only rarely leads to a good resolution. I would LOVE an AI support line where I could just describe the problem, it gives me answers, and it only asks relevant follow-up questions. I can’t wait.

    • Prunebutt@slrpnk.net

      They’re already deployed and they’re less than helpful, because LLMs are bullshitting machines.

      • blady_blah@lemmy.world

I already use LLMs to problem-solve issues I’m having, and they’re typically better than me punching questions into Google. I admit that I’ve once had an LLM hallucinate while it was trying to solve a problem for me, but the vast majority of the time it has been quite helpful. That’s been my experience, at least. YMMV.

        If you think LLMs suck, I’m guessing you haven’t actually used telephone tech support in the past 10 years. That’s a version of hell I wish on very few people.

        • SpaceNoodle@lemmy.world

          If all you want is something trivial that’s been done by enough people beforehand, it’s no surprise that something approaching correct gets parroted back at you.

          • blady_blah@lemmy.world

            That’s 99% of what I’m looking for. If I’m figuring something out by myself, I’m not looking it up on the internet.

I’m an engineer and I’ve found LLMs great for helping me understand an issue. When you read something online, you have to translate what the author is saying into your own thinking, and I’ve found LLMs are much better at re-framing information to match my inner dialog. I often find them much more useful than Google searches when trying to find information.

        • Prunebutt@slrpnk.net

          If you think LLMs suck, I’m guessing you haven’t actually used telephone tech support in the past 10 years. That’s a version of hell I wish on very few people.

          I’m specifically claiming that they’re bullshit machines. i.e. they’re generating synthetic text without context or understanding. My experience with search engines and telephone support is way better than what any LLM fed me.

There have already been cases where phone operators were replaced with LLMs which gave dangerous advice to anorexic patients.

          • blady_blah@lemmy.world

            I understand their limitations, but you’re overselling the negative. They’re fucking awesome for what they can do, but they have drawbacks that you must be aware of. Just as it’s lame to be an AI fanboi, it’s equally lame to be an AI luddite.

            • Prunebutt@slrpnk.net

It’s funny you bring up Luddites, since they actually had the right idea about technology like LLMs. They were highly skilled textile workers who opposed the introduction of dangerous mechanical looms that produced low-quality goods but were so easy to use that a child could work them (because the owners wanted to employ children). They only got their bad name as backward anti-technology lunatics afterwards. They were actually concerned about low-quality technology being deployed to weaken workers’ rights, cheapen products, and make bosses even richer. That’s actually the main issue I have with what’s happening with AI.

There’s a book by Brian Merchant called “Blood in the Machine” on the topic, if you’re interested. He’s also been on a bunch of podcasts, if you’re not a big reader.

              I’m referring to “bullshit” in the way argued in this paper:

              Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs.

              The technology is neat. I’ll give you that. But it’s incredibly overhyped.

    • Spedwell@lemmy.world

The script doesn’t go away when you replace a helpdesk operator with ChatGPT. You just get a script-reading interface without empathy and with a severely hindered ability to process novel issues outside its protocol.

      The humans you speak to could do exactly what you’re asking for, if the business did not handcuff them to a script.

      • Pennomi@lemmy.world

        We’re in that awkward part of AI where all the degenerates are using it in unethical ways, and it will take time for legislation and human culture to catch up. The early internet was a wild place too.

      • AVincentInSpace@pawb.social

        I still don’t see how AI-generated porn is any different from photoshopping someone’s face on to someone else’s naked body.

        • SkyeStarfall@lemmy.blahaj.zone

It’s less effort and typically more realistic (in the sense that it looks more real, not that it is).

But it’s unethical either way: don’t make non-consensual porn.

  • esc27@lemmy.world

I’m still hoping for good customer support AI. If I’m going to be connected to someone who barely speaks English and is required to follow a prewritten script, or, worse, plays prerecorded messages to fake being fluent, I might as well talk to an AI, especially if it means shorter hold times.

    AI is a bad replacement for good customer service, but it could be an improvement over bad customer service.

    • brbposting@sh.itjust.works

Glad you posted this, b/c I now have a follow-up to a previous comment where I shared this from Klarna (amongst other tidbits):

So Klarna automated L1 support, did a good job of it, and saved money. Apparently they could’ve done it earlier without LLMs and saved even more money.

      Have you ever wanted L1 support? :)

Guess even if not, it could still give reps more time to handle your queries if they’re not telling people to click “forgot my password” when they write in saying “hey, I forgot my password”.

      • pingveno@lemmy.world

I just gave the chatbot that was put in place at the IT department where I work a poke. It answered my question perfectly: “How do I print from my laptop to the library?” And it’s not like the chatbot is the only route for support, but it does divert a lot of routine questions from our help desk so they can focus on questions that require a human touch: either people for whom a chatbot is not a good format, or non-routine questions.

  • drail@fedia.io

    I am a physicist. I am good at math, okay at programming, and not the best at using programming to accomplish the math. Using AI to help turn the math in my brain into functional code is a godsend in terms of speed, as it will usually save me a ton of time even if the code it returns isn’t 100% correct on the first attempt. I can usually take it the rest of the way after the basis is created. It is also great when used to check spelling/punctuation/grammar (so using it like the glorified spellcheck it is) and formatting markup languages like LaTeX.

    I just wish everyone would use it to make their lives easier, not make other people’s lives harder, which seems to be the way it is heading.

    • webhead@lemmy.world

Yeah, I’ve been using it to help my novice ass code stuff for my website, and it’s been incredible. There’s some stuff I thought I was probably never gonna get around to that I rocketed through in an AFTERNOON. That’s what I want AI for. Not shitty customer service.

    • Domi@lemmy.secnd.me

      Also works well for the opposite use case.

I’m a good programmer but bad at math, and I can never remember which algorithms to use, so I just ask it how to solve problem X or calculate Y, and it gives me a list of algorithms that would make sense.

    • hoshikarakitaridia@lemmy.world

      With all the hot takes and dichotomies out there, it would be nice if we could have a nuanced discussion about what we actually want from AI right now.

      Not all applications are good and not all are bad. The ideas that you have for AI are so interesting, I wish we could just collect those. Would be way more helpful than all the AI shills or haters rn.

      • brbposting@sh.itjust.works

        nuanced discussion about what we actually want from AI right now.

        👆

So on Bluesky, the non-free almost-Twitter Twitter replacement, which is about as anti-AI as X-Twitter is pro-AI: you see extreme anti-AI sentiment with zero room for any application of the tech, and I have to wonder if defining the tech is part of the problem.

        They do want Gmail to filter spam, right?

        They don’t hate plant ID apps, do they?

        I’m guessing they mean “I don’t need ChatGPT, which was enabled by theft, and I don’t want chatbots in other apps either.”

But the way they talk, they effectively come out saying “don’t filter spam!” At least arguably: it’s not as if every expert in the field would use the exact same definition, but I still doubt the average absolutist is fully aware of what their message may come across as.

  • _stranger_@lemmy.world

I’ve learned that training a model to search your company’s unmaintainable, unorganized, and continuously growing documentation storage is a godsend.

    • masterspace@lemmy.ca

      AI search is legitimately useful.

For something like Salesforce development, you’ve got the answer spread across their old framework docs, their new framework docs, their config settings reference page, and a couple of Stack Overflow questions.

      Copilot / Bing search has legitimately been incredibly helpful at synthesizing answers from those and providing sources so that I can verify, do more research, and ask follow up questions.

  • jaemo@sh.itjust.works

    Don’t want AI bots? Stop calling 911 from the drive-thru when your fucking burger doesn’t have enough ketchup on it you goddamn mouth breathing idiots. That’s why they can’t get to the heart attack victim, so that’s why you’re going to get a bot.

    The sooner you see this as a reaction to a stimulus you are the source of, the sooner it goes away.

    • turmacar@lemmy.world
      link
      fedilink
      arrow-up
      0
      ·
      edit-2
      5 months ago

      “Don’t want automated looms? Stop buying clothes. Buy material and make your own, as your forefathers did. Surely your neighbors will be open to your message of time and effort instead of ease.”

      Stop assuming the tragedy of the commons can be avoided by scolding the people talking about wanting to avoid it.