• QuadratureSurfer@lemmy.world · 1 month ago

    I’m just glad to hear that they’re working on a way for us to run these models locally rather than forcing a connection to their servers…

    Even if I’d rather run my own models, at the very least this incentivizes Intel and AMD to start implementing NPUs (or maybe we’ll actually see plans for consumer-grade GPUs with more than 24GB of VRAM?).

    • suburban_hillbilly@lemmy.ml · 1 month ago

      Bet you a tenner that within a couple of years they start using these systems as distributed processing for their in-house AI training to subsidize costs.