Yeah! Jobs that bros can reciprocate! There should be a word for that…
Cinnamon, GNOME, Xfce? Many flavors of Mint
Been living with my parents for almost two years again. Thought about renovating a connex as a living space over breakfast just yesterday. Might snack on my last marbles tonight.
The preferred alternative is a healthy relationship after enough therapy, the latter being a [pay]wall for some
We didn’t get this far without feeling that way. It’s only natural
What a treat! I just got done setting up a second venv within the sd folder, one called amd-venv and the other nvidia-venv. Copied the webui.sh and webui-user.sh scripts and made separate flavors of those as well to point to the respective venvs. Now if I just had my Nvidia drivers working I could probably set my power supply on fire running them in parallel.
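For anyone wanting to copy the layout, here is a rough sketch of just the venv-creation half using only the Python standard library. The folder names are the ones from my setup; the parent path is a placeholder, and you still have to point each copied webui-user.sh at its own venv yourself.

```python
# Minimal sketch: create the two side-by-side venvs described above.
# Only the directory names come from my setup; the parent path is a placeholder.
from pathlib import Path
import venv

SD_ROOT = Path("stable-diffusion-webui")  # assumed install location

for name in ("amd-venv", "nvidia-venv"):
    env_dir = SD_ROOT / name
    if not env_dir.exists():
        # with_pip=True so each env can pull its own ROCm or CUDA torch wheels
        venv.create(env_dir, with_pip=True)
        print(f"created {env_dir}")
```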
I had that concern as well with it being a new card. It performs fine in gaming as well as in every glmark benchmark so far. I have it chalked up to AMD support being in experimental status on Linux/SD. Any other stress tests you recommend while I’m in the return window!? lol
I might take the docker route for the ease of troubleshooting if nothing else. So very sick of hard system freezes/crashes while kludging through the troubleshooting process. Any words of wisdom?
Since only one of us is feeling helpful, here is a 6 minute video for the rest of us to enjoy https://www.youtube.com/watch?v=lRBsmnBE9ZA
I started reading into the ONNX business here https://rocm.blogs.amd.com/artificial-intelligence/stable-diffusion-onnx-runtime/README.html Didn’t take long to see that was beyond me. Has anyone distilled an easy-to-use model converter/conversion process? One I saw required an HF token for the process, yeesh
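The closest thing I’ve found so far is the Hugging Face optimum wrapper, which supposedly hides most of the hand-rolled steps from that blog post. Untested on my end, so treat it as a sketch: the model id is just the usual example, and some repos still gate downloads behind an HF account/token.

```python
# Untested sketch: export a Stable Diffusion checkpoint to ONNX via the
# Hugging Face optimum package instead of the manual ROCm blog steps.
# The model id is a placeholder; some repos gate downloads behind an HF token.
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # placeholder checkpoint
pipe = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True)
pipe.save_pretrained("./sd15-onnx")  # keep a reusable ONNX copy for later runs
```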
How bad are your crashes? Mine will either freeze the system entirely or crash the current lightdm session, sometimes recovering, sometimes freezing anyway. Needs a power cycle to recover. What is the DE you speak of? openbox?
Well, I finally got the Nvidia card working to some extent. On the recommended driver it only works with --lowvram; --medvram maxes out VRAM too easily on this driver/CUDA version for whatever reason. Does anyone know the current best Nvidia driver for SD on Linux? Perhaps 470, the other one offered by the LM driver manager…?
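In case it helps anyone compare notes, this is the quick sanity check I run from inside the venv to see which CUDA build and how much VRAM torch actually sees. Plain torch calls, nothing webui-specific.

```python
# Quick check of what the active venv's torch build actually sees.
# Prints the CUDA runtime torch was built against and the VRAM the driver exposes.
import torch

print("torch:", torch.__version__)
print("built against CUDA:", torch.version.cuda)
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("device:", props.name)
    print("VRAM:", round(props.total_memory / 1024**3, 1), "GiB")
else:
    print("CUDA not available -- driver/toolkit mismatch?")
```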
Is that perhaps the setting of Letterkenny?
So what was the conspiracy theory around tpm requirements, bitlocker and copilot? Some new privacy nightmare?
Are we going to war or is the author bad at writing?
I’m putting ten in the bathtub and naming it Steve.
Because some smart TVs will up and brick themselves by irreparably filling their storage with various updates, to the point of no longer being able to install or even update anything on the TV whatsoever (THANK YOU, Samsung)
Interesting. Uses a Pi, motors, a laser pointer, and timing components to control the interference and flip bits in transistors. Speedrunners just got a new game changer.
I’ll definitely be keeping my Nvidia card for AI/ML/CUDA purposes. It’ll live in a dual-boot box for Windows gaming when necessary (Bigscreen Beyond, for now). I am curious to see what 16GB of AMD VRAM will let me get up to anyway.
Personally, I hear it has been a mixed bag. Hopefully time has refined this… Old stories about digging up old LOS images, applying bare-minimum patches, and releasing them under the /e/ branding with no consideration for security/hardening. Buuut that was info from a GrapheneOS vs /e/OS forum, or something. Do your research, you know what sub you’re in.