Whenever AI is mentioned, a lot of people in the Linux space immediately react negatively. Creators like TheLinuxExperiment on YouTube always feel the need to add a disclaimer that “some people think AI is problematic”, or something along those lines, whenever an AI topic comes up. I get that AI has many problems, but at the same time its potential is immense, especially as an assistant on personal computers (just look at what “Apple Intelligence” seems to be capable of). GNOME and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete. Using an AI-less desktop may be akin to hand-copying books after the printing-press revolution. If you see specific problems, it is better to point them out and try to think of solutions than to reject the technology as a whole.

TLDR: a lot of Luddite sentiment around AI in the Linux community.

  • Rozaŭtuno@lemmy.blahaj.zone · 17 days ago

    I get that AI has many problems but at the same time the potential it has is immense, especially as an assistant on personal computers

    [Citation needed]

    Gnome and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete.

    And this mentality is exactly what AI sceptics criticise. The whole reason the AI arms race is going on is that every company/organisation seems convinced that sci-fi-like AI is right around the corner, and that the first one to get it will capture 100% of the market in their walled garden while everyone else fades into obscurity. They’re all so obsessed with this that they don’t see a problem with putting a virtual dumbass that is constantly wrong in charge.

  • Ramin Honary@lemmy.ml · 18 days ago

    No, it is because people in the Linux community are usually a bit more tech-savvy than average and are aware that OpenAI/Microsoft is very likely breaking the law in how they collect data for training their AI.

    We have seen that companies like OpenAI completely disregard the rights of the people who created the data they use in their for-profit LLMs (like what they did to Scarlett Johansson), including the right to control whether their code, documentation, or artwork is used in for-profit ventures. That goes especially for Creative Commons “Share Alike” licensed documentation, and for GPL-licensed code, which may only be reused if the code that incorporates it is also made public, which OpenAI and Microsoft do not do.

    So OpenAI has deliberately conflated LLM technology with artificial general intelligence (AGI) in order to hype their products, and now their possibly illegal actions are being associated with all AI. The anger toward AI is not directed at the technology itself; it is directed at companies like OpenAI, who have tried to make their shitty brand synonymous with the technology.

    And I haven’t even yet mentioned:

    • how people are getting fired by companies who are replacing them with AI
    • or how it has been used to target civilians in war zones
    • or how deep fakes are being used to scam vulnerable people.

    The technology could be used for good, especially in the Linux community, but lately there has been a surge of unethical (and sometimes outright criminal) uses of AI by some of the world’s wealthiest companies.

  • groucho@lemmy.sdf.org · 16 days ago

    As someone whose employer is strongly pushing them to use AI assistants in coding: no. At best, it’s like being tied to a shitty intern that copies code off Stack Overflow and then blows me up on Slack when it magically doesn’t work. I still don’t understand why everyone is so excited about them. The only tasks they can handle competently are tasks I can easily do on my own (and with a lot less re-typing).

    Sure, they’ll improve over the years, but Altman et al. are complaining that they’re running out of training data. And even with an unlimited body of training data for future models, we’ll still end up with something about as intelligent as a kid who has been locked in a windowless room with books their whole life and can either parrot opinions they’ve read or make shit up and hope you believe it. I think we’ll get a series of incompetent products with an increasing ability to make wrong shit up on the fly until the C-suite moves on to the next shiny bullshit.

    That’s not to say we’re not capable of creating a generally-intelligent system on par with or exceeding human intelligence, but I really don’t think LLMs will allow for that.

    tl;dr: a lot of woo in the tech community that the Linux community isn’t as on board with.

  • rah@feddit.uk · 18 days ago

    free software communities

    TheLinuxExperiment on YouTube

    LOL

  • Kericake🥕 (They(/It))@pawb.social · 17 days ago

    [Sarcastic ‘translation’] tl;dr: A lot of people who are relatively well-placed to understand how much technology is involved even in downvoting this post are downvoting this post because they’re afraid of technology!

    Just more fad-worshipping foolishness, drooling over a buzzword and upset that others call it what it is. I want it to be over but I’m sure whatever comes next will be just as infuriating. Oh no, now our cursors all have to change according to built-in (to the cursor, somehow, for some reason) software that tracks our sleep patterns! All of our cursors will be obsolete (?!??) unless they can scalably synergize with the business logic core to our something or other 😴

    • Sekki@lemmy.ml · 18 days ago

      Using “AI” has been beneficial, for example, to generate image descriptions automatically, which were then used as alternative text on a website. This increased accessibility, AND users could run full-text search on the descriptions to find images faster. The same goes for things like classification of images, video and audio. I know of some applications in agriculture where object detection and classification are used to optimize the use of fertilizer and pesticides, reducing costs and environmental impact. There are of course many more examples like these, but the point should be clear.
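
      A rough sketch of that alt-text pipeline in Python. The `caption_image` stub stands in for whatever local captioning model you would actually plug in (BLIP, LLaVA, etc.); every name here is invented for illustration:

```python
from html import escape

def caption_image(path):
    # Placeholder: a real implementation would run an image-captioning model.
    return f"Photo stored at {path}"

def img_tag_with_alt(path):
    """Build an <img> tag whose alt text is the generated description."""
    alt = caption_image(path)
    return f'<img src="{escape(path)}" alt="{escape(alt)}">', alt

def build_search_index(paths):
    """Map each image path to its description, enabling full-text search."""
    return {p: caption_image(p).lower() for p in paths}

def search(index, query):
    # Case-insensitive substring match over the generated descriptions.
    return [p for p, text in index.items() if query.lower() in text]
```

      The accessibility and search benefits come from the same generated text, which is why the comment above describes them together.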

  • chronicledmonocle@lemmy.world · 17 days ago

    I’m not against AI. I’m against the hordes of privacy-disrespecting data collection, the fact that everybody is irresponsibly rushing to slap AI into everything, even when it doesn’t make sense, because line go up, and the fact that nobody is taking the limitations of things like Large Language Models seriously.

    The current AI craze is like the NFT craze in a lot of ways, but more useful and not going to just disappear. In a year or three the crazed C-level idiots chasing the next magic dragon will settle down, the technology will settle into the places where it’s actually useful, and investors will stop throwing cash at any mention of AI with zero skepticism.

    It’s not Luddite to be skeptical of the hot new craze. It’s prudent as long as you don’t let yourself slip into regressive thinking.

    • Handles@leminal.space · 17 days ago

      Completely agree and I’ll do you one better:

      What is being sold as AI doesn’t hold a candle to actual artificial intelligence: these are error-prone statistical engines incapable of delivering more than the illusion of intelligence. The only reason they were launched to the public is that corporations were anxious not to be the last to market, whether their product was ready or not.

      I’m happy to be a Luddite if it means having the capacity for critical thought to Just Not Use Imperfect Crapware™.

  • Antiochus@lemmy.one · 17 days ago

    You’re getting a lot of flack in these comments, but you are absolutely right. All the concerns people have raised about “AI” and the recent wave of machine learning tech are (mostly) valid, but that doesn’t mean AI isn’t incredibly effective in certain use cases. Rather than hating on the technology or ignoring it, the FOSS community should try to find ways of implementing AI that mitigate the problems, while continuing to educate users about the limitations of LLMs, etc.

    • crispy_kilt@feddit.de · 16 days ago

      It’s spelled flak, not flack. It’s from the German word Flugabwehrkanone, which literally means aerial defense cannon.

      • Antiochus@lemmy.one · 15 days ago

        Oh, that’s very interesting. I knew about flak in the military context, but never realized it was the same word used in the idiom. The idiom actually makes a lot more sense now.

  • luciferofastora@lemmy.zip · 17 days ago

    The first problem, as with many things AI, is nailing down just what you mean with AI.

    The second problem, as with many things Linux, is the question of shipping these things with the Desktop Environment / OS by default, given that not everybody wants or needs that and for those that don’t, it’s just useless bloat.

    The third problem, as with many things FOSS or AI, is transparency, particularly around training. Would I have to train the models myself? If yes: how would I acquire training data with sufficient quantity, quality and transparent control of sources? If no: what control do I have over the source material used by the pre-trained model I get?

    The fourth problem is privacy. The tradeoff for a universal assistant is universal access, which requires universal trust. Even if it can only fetch information (read files, query the web), the automated web searches could expose private data to whatever search engine or websites it uses. Particularly in the wake of Recall, the idea of saying “Oh actually we want to do the same as Microsoft” would harm Linux adoption more than it would help.

    The fifth problem is control. The more control you hand to machines, the more control their developers have. At that point it isn’t just about trusting the machines, it’s about trusting the developers. To build something the caliber of a full AI assistant, you’d need a ridiculous amount of volunteer effort, particularly given the splintering that always comes with such projects and the friction it creates. Alternatively, you’d need corporate contributions, and those always come with an expectation of profit. Hence we’re back to trust: do you trust a corporation big enough to make a difference to contribute to such an endeavour without any avenue for abuse? I don’t.


    Linux has survived long enough despite not keeping up with every mainstream development. In fact, what drove me to Linux was precisely that it doesn’t do everything Microsoft does. The idea of volunteers (by and large unorganised) trying to match the sheer development power of a megacorp (with a strict hierarchy for who calls the shots) to produce such an assistant is ridiculous enough, but the suggestion that DEs should ship with it already integrated? Hell no.

    One useful application of “AI” (machine learning) I could see: evaluating logs to detect recurring errors and cross-referencing them with other logs to see if there are correlations, which might help with troubleshooting.
    That doesn’t need to be an integrated desktop assistant, it can just be a regular app.
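
    That log-clustering idea can be sketched with nothing but the Python standard library; the normalization rules and function names below are invented for illustration:

```python
import re
from collections import Counter, defaultdict

def normalize(line):
    """Collapse variable parts (timestamps, hex addresses, numbers)
    so repeated errors group under one template."""
    line = re.sub(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}\S*", "<TS>", line)
    line = re.sub(r"0x[0-9a-fA-F]+", "<ADDR>", line)
    line = re.sub(r"\b\d+\b", "<NUM>", line)
    return line

def recurring_errors(log_lines, min_count=2):
    """Count normalized error templates; keep those that repeat."""
    counts = Counter(normalize(l) for l in log_lines if "error" in l.lower())
    return {tmpl: n for tmpl, n in counts.items() if n >= min_count}

def cross_reference(logs):
    """Map each error template to every log it appears in; a template
    seen in multiple logs hints at a correlated cause."""
    seen = defaultdict(set)
    for name, lines in logs.items():
        for tmpl in recurring_errors(lines, min_count=1):
            seen[tmpl].add(name)
    return {t: sorted(names) for t, names in seen.items() if len(names) > 1}
```

    As the comment says, nothing here needs desktop integration; it works fine as a standalone tool fed a few log files.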

    Really, that applies to every possible AI tool. Make it an app, if you care enough. People can install it for themselves if they want. But for the love of the Machine God, don’t let the hype blind you to the issues.

    • Womble@lemmy.world · 17 days ago

      It doesn’t though. Local models would be at the core of FOSS AI, and they don’t require you to trust anyone with your data.

      • technocrit@lemmy.dbzer0.com · 17 days ago

        local models would be at the core of FOSS AI, and they dont require you to trust anyone with your data.

        Would? You’re slipping between imaginary and apparently declarative statements. Very typical of “AI” hype.

        • Womble@lemmy.world · 17 days ago

          Local models WOULD form the basis of FOSS AI. That is supposition on my part, but entirely supportable, given that there is already an open-source model movement focused on producing local models, and open-source software is generally privacy-focused.

          Local models ARE inherently private, since no information leaves the device they run on.
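
          As an illustration of that “nothing leaves the device” property, here is a hypothetical guard that refuses any inference endpoint that is not bound to the loopback interface (the port shown is Ollama’s default, 11434; the helper itself is made up):

```python
import ipaddress
from urllib.parse import urlparse

def is_local_endpoint(url):
    """True only if the URL's host is localhost or a loopback address,
    i.e. requests to it cannot leave this machine."""
    host = urlparse(url).hostname
    if host == "localhost":
        return True
    try:
        return ipaddress.ip_address(host).is_loopback
    except (ValueError, TypeError):
        return False  # a DNS name or malformed URL: not provably local

def require_local(url):
    """Raise before any request is made to a non-local endpoint."""
    if not is_local_endpoint(url):
        raise ValueError(f"refusing non-local inference endpoint: {url}")
    return url
```

          A FOSS assistant could run every model call through a check like this, making the privacy claim enforceable rather than a matter of trust.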

          I know you don’t want to engage with arguments and would rather wail at the latest daemon for internet points, but you can have more than one statement in a sentence without being incoherent.

  • HubertManne@moist.catsweat.com · 18 days ago

    As I mentioned in another comment, we already have an analogue of an AI-less desktop: GUI-less installs. They are generally called the server version of a distro and are used in datacenters, but I’m 100% sure there are individuals out there running laptops with no GUI. I’m fine with FOSS AI, and there are LLMs licensed as such. That being said, they are still problematic, since training requires large amounts of data that companies are not exactly stringent about how they collect.

  • umami_wasabi@lemmy.ml · 17 days ago

    Gnome and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete.

    I don’t get it. How would Linux become obsolete if it doesn’t have native AI toolsets in its desktop environments? It’s not like the Linux desktop has an 80% market share. People who run Linux desktops as daily drivers are still a niche, and most people don’t even know Linux exists. They grew up with Microsoft and Apple shoving ads down their throats, and that’s all they know. If I need AI, I will find ways to integrate it into my workflow, not have it forced on me because a dev thinks I need it.

    And if you really need something like MS’s Recall, there is a FOSS version of it.

    • SuperSpruce@lemmy.zip · 17 days ago

      Is OpenRecall secure as well? One of my biggest problems with MS Recall is that it stores all your personal info in plain text.

      • callcc@lemmy.world · 17 days ago

        A FLOSS project’s success is not necessarily marked by its market share, but often by the absolute benefit it gives to its users. A project with one happy user and developer can be a success.

  • Sims@lemmy.ml · 16 days ago

    I agree. However, I think it is related to Capitalism and all the sociopathic corporations out there. It’s almost impossible to imagine that anything good will come from the Blue Church controlling even more tech. Capitalism has always used any opportunity to enslave/extort people, and that continues with AI under their control.

    However, I was also disappointed when I found out how negative ‘my’ crowd was. I wanted to create an open-source low-end AGI to secure poor people a decent life without being attacked by Capitalism every day/hour/second, to create abundance, communities and production, and in general to help build a social sub-society in the midst of the insane Blue Church and its propagandized believers.

    It is perfectly doable to fight the Capitalist religion with homegrown AI based on what we know and have today. But nobody can do it alone, and if no-one is willing to fight the f*ckers with AI, then it takes time…

    I definitely intend to build a revolution-AGI to kill off the Capitalist religion and save exploited poor people. No matter what happens, there will be at least one AGI trained on revolution, anti-capitalism and building something much better than this effing blue nightmare. The world’s first aggressive ‘Commie-bot’, ha! 😍

  • daniyeg@lemmy.ml · 18 days ago

    Personally I’m fine with machine learning; what I don’t like is “AI”, a new marketing buzzword that justifies every shitty corporate exec decision and insane company valuation.