doodle967@lemdro.id to Privacy@lemmy.ml · English · 18 days ago
Snowden: "They've gone full mask-off: do not ever trust OpenAI or its products" (twitter.com)
178 comments
utopiah@lemmy.ml · 17 days ago
Check my notes https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence but as others suggested a good way to start is probably https://github.com/ollama/ollama/ and if you need a GUI https://gpt4all.io
irreticent@lemmy.world · 17 days ago
I'm not the person who asked, but still thanks for the information. I might give this a try soon.
classic@fedia.io · 17 days ago
Ditto, thanks to everyone for their suggestions.
Knock_Knock_Lemmy_In@lemmy.world · 17 days ago
"You should have at least 16 GB of RAM available to run the 13B models." Is this GPU RAM or CPU RAM?
KillingTimeItself@lemmy.dbzer0.com · 17 days ago
Likely GPU RAM. There are techniques for offloading part of the model to system RAM, but generally it's all held in VRAM. This requirement will probably fade as NPUs become common, though.
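For anyone wanting to try the ollama suggestion above, getting started is a couple of shell commands. This is a minimal sketch for Linux; the install script is the one ollama's README documents, and `llama2:13b` is just an example model name, chosen to match the 13B / 16 GB figure quoted in the thread:

```shell
# Install ollama on Linux (see https://ollama.com for macOS/Windows installers)
curl -fsSL https://ollama.com/install.sh | sh

# Pull and chat with a 13B model; expect to need roughly 16 GB of RAM/VRAM
ollama run llama2:13b

# Smaller variants (e.g. 7B) fit on machines with around 8 GB
ollama run llama2:7b
```

If you'd rather have a GUI, GPT4All (https://gpt4all.io) ships a desktop app that downloads and runs models without any command-line setup.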