One big difference that I’ve noticed between Windows and Linux is that Windows does a much better job ensuring that the system stays responsive even under heavy load.

For instance, I often need to compile Rust code. Anyone who writes Rust knows that the Rust compiler is very good at using all your cores and all the CPU time it can get its hands on (which is good, you want it to compile as fast as possible after all). But that means that while my Rust code is compiling, all my CPU cores are maxed out at 100% usage.

When this happens on Windows, I’ve never really noticed. I can use my web browser or my code editor just fine while the code compiles, so I’ve never really thought about it.

However, on Linux, when all my cores reach 100%, I start to notice it. It seems like every window I have open starts to lag and stutter as the programs struggle to get the little bit of CPU time that’s left. My web browser lags with whole seconds of no response, and my editor behaves the same way. Even my KDE Plasma desktop environment starts lagging.

I suppose Windows must be doing something clever to somehow prioritize user-facing GUI applications even in the face of extreme CPU starvation, while Linux doesn’t seem to do a similar thing (or doesn’t do it as well).

Is this an inherent problem of Linux at the moment or can I do something to improve this? I’m on Kubuntu 24.04 if it matters. Also, I don’t believe it is a memory or I/O problem as my memory is sitting at around 60% usage when it happens with 0% swap usage, while my CPU sits at basically 100% on all cores. I’ve also tried disabling swap and it doesn’t seem to make a difference.

EDIT: Tried nice -n +19, still lags my other programs.

EDIT 2: Tried installing the Liquorix kernel, which is supposedly better for this kinda thing. I dunno if it’s placebo but stuff feels a bit snappier now? My mouse feels more responsive. Again, dunno if it’s placebo. But anyways, I tried compiling again and it still lags my other stuff.

  • prof@infosec.pub · 12 days ago

    Ha, that’s funny. When I run some Visual Studio builds on Windows it completely freezes at times.

    Never have that issue on EOS with KDE.

    • UnculturedSwine@lemmy.world · 11 days ago

      I distro hop occasionally but always find myself coming back to Pop!_OS. There are so many quality-of-life improvements that seem small but make all the difference.

  • chakli@lemmy.world · 12 days ago

    I always did make -j$(nproc --ignore=1) to avoid this while building C++ code. But the problem seems to be less severe when there are a lot of cores.
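
    For cargo specifically, a rough equivalent is to leave one thread free via the --jobs flag (just a sketch, adjust to taste):

        cargo build --jobs $(($(nproc) - 1))    # use all hardware threads except one

    Setting build.jobs in ~/.cargo/config.toml does the same thing persistently.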

  • crispy_kilt@feddit.de · 12 days ago

    nice -n 5 cargo build

    nice is a program that sets the priority the CPU scheduler gives a process. The default is 0. It goes from -20, which is the highest priority, to +19, which is the lowest.

    This way other programs will get CPU time before cargo/rustc.
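
    A minimal sketch of how that looks in practice (the PID is a placeholder):

        nice -n 10 cargo build         # start the build at a lower priority
        renice -n 10 -p <PID>          # or lower an already-running process
        ps -o pid,ni,comm -p <PID>     # check the niceness that was applied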

    • SorteKanin@feddit.dk (OP) · 12 days ago

      It’s more of a workaround than a solution. I don’t want to have to do this for every intensive program I run. The desktop should just be responsive without any configuration.

      • boredsquirrel@slrpnk.net · 12 days ago

        Yes, this is a bad solution. No program should have that privilege; it needs to be an allowlist, not a blocklist.

      • vxx@lemmy.world · 12 days ago

        You could give your compiler a lower priority instead of upping everything else.

        • SorteKanin@feddit.dk (OP) · 12 days ago

          I’d still need to lower the priority of my C++ compiler or whatever else intensive stuff I’d be running. I would like a general solution, not a patch just for running my Rust compiler.

          • crispy_kilt@feddit.de · 12 days ago

            How do you expect the system to know what program is important to you and which isn’t?

            The Windows solution is to switch tasks very often and to do a lot of accounting to ensure fair distribution. This results in a small but significant performance degradation. If you want your system to perform worse overall you can achieve this by setting the default process time slice value very low - don’t come back complaining if your builds suddenly take 10-20% longer though.

            The correct solution is for you to tell the system what’s important and what is not so it can do what you want properly.

            You might like to configure and use the auto nice daemon: https://and.sourceforge.net/
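
            On a systemd distro with cgroup v2, another way to tell the system that the build is less important (a sketch, unrelated to the and daemon itself) is to run it in its own scope with a low CPU weight:

                systemd-run --user --scope -p CPUWeight=20 cargo build

            The default CPUWeight is 100, so a value of 20 means the build only gets a small share of the CPU while the browser or desktop wants to run, but the whole machine when the system is otherwise idle.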

            • FizzyOrange@programming.dev · 11 days ago

              How do you expect the system to know what program is important to you and which isn’t?

              Hmm

              The Windows solution is to switch tasks very often and to do a lot of accounting to ensure fair distribution.

              Sounds like you have a good idea already!

      • crispy_kilt@feddit.de · 12 days ago

        No. This will wreak havoc. At most -1, but I’d advise against even that. Just spawn the lesser-prioritised programs with a positive value.

          • crispy_kilt@feddit.de · 12 days ago

            Critical operating system tasks run at -19. If they don’t get priority it will create all kinds of problems. Audio often runs below 0 as well, at perhaps -2, so music doesn’t stutter under load. Stuff like that.
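
            You can check this on your own machine (output differs per system):

                ps -eo ni,pid,comm --sort=ni | head

            Anything with a negative value in the NI column runs above the default priority.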

              • crispy_kilt@feddit.de · 12 days ago

                Default is 0. Also, processes inherit the priority of their parent.

                This is another reason why starting the desktop environment as a whole with a different prio won’t work: the compiler is started as a child of the editor or shell, which is a child of the DE, so it will also have the changed prio.
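
                You can see the inheritance directly; running nice with no arguments prints the current niceness:

                    nice -n 5 bash -c 'nice'    # prints 5: the child shell inherits the parent's niceness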

  • Rentlar@lemmy.ca · 12 days ago

    Yeah, I think the philosophy of Linux is to not assume what you are going to be using it for. Why should Linux know where your priorities are better than you?

    Some people want to run rustc, ffmpeg or whatever other intensive program and don’t mind getting a coffee while it runs, or it’s running on a non-user-facing server anyway; they just want the job to finish as soon as technically possible. Mind you, your case is not an “average use case” either; not everyone is a developer running compilation tasks.

    So you’ve got a point that the defaults could be improved for desktop users who are software developers, or somehow made more easily configurable. As suggested downthread, try the nice command, an optimized scheduler or kernel, or pick a distribution equipped with that kind of kernel by default. The beauty of Linux is that there are many ways to solve a problem, and with varying levels of effort you can get things pretty much exactly where you want them, rather than settling for some crowd-pleasing default.

    • BearOfaTime@lemm.ee · 12 days ago

      There’s a setting in Windows to change the priority management; most people never see it.

      By default it’s configured for user responsiveness, but you can set it for service responsiveness.

      This is nothing like the process priority management in Linux, though; it’s one setting, and frankly I’ve never seen it make any difference. At least with Linux you can configure all sorts of priority management, on the fly no less.

      Even with a server, you’d still want the UI to have priority. God knows when you do have to remote in, it’s because you gotta fix something, and odds are the server is gonna be misbehavin’ already.

      • Rentlar@lemmy.ca · 12 days ago

        Even with a server, you’d still want the UI to have priority. God knows when you do have to remote in, it’s because you gotta fix something, and odds are the server is gonna be misbehavin’ already.

        That’s a fair point.

        I still contend that regularly running processes that hog every available CPU cycle they can get their hands on was not a common enough desktop use case to warrant changing the defaults. It should be up to the user to configure it to their needs. That said, a toggle switch like the hidden Windows setting you described would be nice.

    • SorteKanin@feddit.dk (OP) · 12 days ago

      Why should Linux know where your priorities are better than you?

      Because a responsive desktop is basic good UX that should never ever be questioned. That should at least be the default and if you don’t want your desktop to have special priority, then you can configure it yourself.

      pick a distribution equipped with that kind of kernel by default.

      I’m running Kubuntu, an official variant of Ubuntu which is very much a “just works” kind of distribution - yet this doesn’t just work.

      • dbx12@programming.dev · 12 days ago

        What if I know it will compile for several minutes so I leave it alone to go office chair jousting? It would be fair to lock up the UI in this case.

        • SorteKanin@feddit.dk (OP) · 12 days ago

          Sure, it could lock up the UI if there is no input for a while I suppose. But if there is still input, then it should be responsive.

          I believe it can achieve both.

  • eberhardt@discuss.tchncs.de · 11 days ago

    Actually, I’ve experienced the opposite. I find Windows lagging much more often than Linux when compiling something, especially since Linux switched to the EEVDF scheduler. The most important factor that influences lag on both systems seems to be the power profile though. If I set my power profile to save battery, the system lags from time to time but if I set it to performance it basically never happens (on GNOME you can change that in the quick menu, not sure about KDE). It might be that your Windows is simply tuned more towards performance by default at the cost of higher power consumption.
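
    On KDE the profile can usually be switched from the battery/brightness widget, or from a terminal if power-profiles-daemon is installed (an assumption; Kubuntu may ship something else):

        powerprofilesctl list               # show the available profiles
        powerprofilesctl set performance    # switch away from power-saver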

  • cbazero@programming.dev · 12 days ago

    If you compile on Windows Server, the same problem happens. The server is basically gone. So there seems to be some special scheduler configuration in the Windows client OS.

    • SorteKanin@feddit.dk (OP) · 12 days ago

      I wonder if Linux should also provide server and desktop variants like Windows does, with different scheduler settings and such. The use cases are quite different after all, it’s kinda weird they use the same settings.

      • eyeon@lemmy.world · 11 days ago

        It’s typically up to the distribution to configure things like that, and many Linux distributions do come in both server and desktop/workstation variants, like Ubuntu Desktop vs Ubuntu Server, or RHEL Server vs RHEL Workstation.

        I can’t say how well they tune these things as I haven’t run them personally, but they do exist.

  • agilob@programming.dev · 11 days ago

    EDIT: Tried nice -n +19, still lags my other programs.

    Yea, this is the wrong way of doing things. You should have better results with CPU pinning. Increasing the priority of YOUR threads, which interact all the time with disk I/O, memory caches and display I/O, is the wrong end of the stick. You still need to display compilation progress and warnings and access I/O.
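
    A sketch of what CPU pinning looks like with taskset (assuming an 8-thread machine; adjust the core list to yours):

        taskset -c 0-6 cargo build    # the build may only use cores 0-6, core 7 stays free for the desktop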

    There’s no way of knowing why your system is so slow without profiling it first. Taking any advice from here or elsewhere without telling us first what your machine does is missing the point. You need to find out what the problem is and report it at the source.

    • SorteKanin@feddit.dk (OP) · 12 days ago

      If that is the case, Linux will never be a viable desktop OS alternative.

      Either that needs to change or distributions targeting the desktop need to do it. Maybe we need desktop and server variants of Linux. It kinda makes sense as these use cases are quite different.

      • kenkenken@sh.itjust.works · 12 days ago

        “Desktop” Linux has existed in this state for decades. Who cares? Maybe we won’t have consumer desktops as a niche for much longer. Existing users are fine with that. Don’t tell me you’re waiting for Linux to become “a viable desktop OS alternative” in the next few years.

        It’s also not about “desktop and server variants”. Desktop Linux is either conservative or under-resourced. The conservatives will tell you that you are wrong and there is no issue, and they are the major Linux zealots. For the other side, someone needs to write code and do system design, and there are not many people for that. So it’s better not to expect a solution anytime soon, unless you are planning to work on it yourself.

        • SorteKanin@feddit.dk (OP) · 12 days ago

          “Desktop” Linux has existed in this state for decades. Who cares?

          I mean, I’d like to think a lot of people care? I think a lot of people in this community would love if Linux was more widespread and less niche.

          Maybe we won’t have consumer desktops as a niche for much longer. Existing users are fine with that.

          “Existing users” are not fine with that (I am also an existing user). But even if they were, that is not an attitude that will make Linux into a Windows/macOS competitor.

          Don’t tell me you’re waiting for Linux to become “a viable desktop OS alternative” in the next few years.

          We need a viable desktop alternative now, more than ever before. Microsoft is tightening the noose on Windows 11 and introducing more and more enshittification. Apple also announced AI partnerships recently. We need alternatives.

          It is not good for society for operating systems to be boiled down to two mega-corporate choices. An OS is not something that can be easily made - this is not a space that a competitor can quickly enter and shake things up. If we don’t push MS/Apple off the throne, Linux will stay niche forever and society will suffer.

          • kenkenken@sh.itjust.works · 12 days ago

            Society will suffer anyway. It doesn’t make solutions magically appear. You only said why you want it, but not how to do it. To transform GNU/Linux distros into a viable desktop OS is not an easy task, especially when people don’t have a consensus about what it should be.

            • SorteKanin@feddit.dk (OP) · 12 days ago

              Of course - I have actually lately been wondering whether Linux is suffering from its “decentralisation”. There are so many distributions, all with their own structure and teams behind them. On the one hand, this is great; more choice is almost universally good.

              However, on the other hand, it leads to a much more fractured movement. Imagine instead of there being 100 or whatever distros, there were maybe just like… 5 or 10 or something. I feel like it’d be easier to rally under fewer flags to consolidate effort and avoid double work. But it’s just a thought I’ve had lately.

      • Miaou@jlai.lu · 11 days ago

        Linux is already a popular and viable desktop OS - for its target audience.

        The downvote comes from you implying people cannot dev in Linux when it’s the platform of choice for this workload.

        Now surely the user experience could be polished, but advanced users are at this point used to the workflow, and basic ones will stick to Windows out of inertia no matter what. Therefore the incentive for improving this kind of thing is extremely low.

        • SorteKanin@feddit.dk (OP) · 11 days ago

          That might be the case, but it makes me sad. It implies that Linux is only targeting technical people who are willing to tinker with all these things themselves.

          I would personally want Linux to be broader than that. I’d want it to be the option for everyone - free computing shouldn’t be limited to technical people, it should be provided to all.

  • Valmond@lemmy.world · 11 days ago

    My work windoz machine clogged up quite a lot recompiling large projects (GBs of C/C++ code), so I set it to use 19 of 20 “cores”. Worked okay-ish but was not some snappy experience IMO (64 GB RAM & SSD).

  • Possibly linux@lemmy.zip · 12 days ago

    It really depends on your desktop. For instance, GNOME handles high CPU load very well in my experience.

    I would run your compiler in a podman container with a CPU cap.
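
    Something along these lines (a sketch; the image and mount paths are placeholders):

        podman run --rm -it --cpus=6 -v "$PWD":/src -w /src docker.io/library/rust:latest cargo build

    The --cpus=6 cap limits the container to the equivalent of six CPUs, so a couple of cores always stay free for the desktop.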

  • boredsquirrel@slrpnk.net · 12 days ago

    I experience the exact same thing.

    The key is that you need to allowlist processes in your OOM killer. There are different implementations, like systemd-oomd or earlyoom.

    oomd freezes and doesn’t kill, and I suppose distros do a bad job of allowlisting the desktop etc. in there.

    • SorteKanin@feddit.dk (OP) · 12 days ago

      As I mention at the end, this situation has nothing to do with running out of memory. It’s purely CPU starvation.

    • Possibly linux@lemmy.zip · 11 days ago

      Maybe it is distro-specific.

      On Fedora Workstation it does its job well. I sometimes run too many VMs at once and it hangs for a second before killing a VM.