• CeeBee@lemmy.world
    6 months ago

    It’s getting there. In the next few years, as hardware gets better and models get more efficient, we’ll be able to run these systems entirely locally.

    I’m already doing it, but I have some higher end hardware.

      • CeeBee@lemmy.world
        6 months ago

        Stable Diffusion’s SDXL Turbo model running in Automatic1111 for image generation.

        Ollama with Ollama-webui for an LLM. I like the Solar:7b model. It’s lightweight, fast, and gives really good results.

        I run it on some beefy hardware, but that’s not strictly necessary.
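
        For anyone wanting to script the Ollama side of a setup like this: Ollama serves an HTTP API on `localhost:11434`, so a few lines of Python can query whatever model you’ve pulled. A minimal sketch, assuming a local `ollama serve` is running; the model name and prompt are placeholders:

        ```python
        import json
        import urllib.request

        # Ollama's default local endpoint for one-shot generation
        OLLAMA_URL = "http://localhost:11434/api/generate"

        def build_request(model: str, prompt: str) -> urllib.request.Request:
            """Build a non-streaming /api/generate request for a local Ollama server."""
            payload = json.dumps(
                {"model": model, "prompt": prompt, "stream": False}
            ).encode()
            return urllib.request.Request(
                OLLAMA_URL,
                data=payload,
                headers={"Content-Type": "application/json"},
            )

        def ask(model: str, prompt: str) -> str:
            """Send the prompt and return the model's full response text."""
            with urllib.request.urlopen(build_request(model, prompt)) as resp:
                return json.loads(resp.read())["response"]

        # Example (requires `ollama serve` running and the model already pulled):
        # print(ask("solar", "Why is local inference appealing?"))
        ```

        Same idea works for any model tag Ollama knows about; Ollama-webui is just a friendlier front end over this API.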