Instructions here: https://github.com/ghobs91/Self-GPT
If you’ve ever wanted a ChatGPT-style assistant but fully self-hosted and open source, Self-GPT is a handy script that bundles Open WebUI (chat interface front end) with Ollama (LLM backend).
- Privacy & Control: Unlike ChatGPT, everything runs locally, so your data stays with you—great for those concerned about data privacy.
- Cost: Once set up, self-hosting avoids monthly subscription fees. You’ll need decent hardware (ideally a GPU), but there’s a range of model sizes to fit different setups.
- Flexibility: Open WebUI and Ollama support multiple models and let you switch between them easily, so you’re not locked into one provider.
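For anyone curious what the self-hosted stack looks like, here's a rough Docker Compose sketch of the usual Open WebUI + Ollama pairing — the image tags and port mapping are assumptions on my part, so check the Self-GPT repo for what the script actually generates:

```yaml
# Hypothetical compose sketch, not the exact file Self-GPT produces.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama   # model files persist here

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"            # chat UI on http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama:
```

Switching models is then just a matter of pulling another one (e.g. `ollama pull mistral`) and picking it from the model dropdown in the UI.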
Wish I could accelerate these models with an Intel Arc card; unfortunately, Ollama seems to support only Nvidia.
They support AMD as well.
https://ollama.com/blog/amd-preview
Also check out this thread:
https://github.com/ollama/ollama/issues/1590
Seems like you can run llama.cpp directly on Intel Arc through Vulkan, but there are still some hurdles for Ollama.
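For reference, the Vulkan route looks roughly like this. A sketch only — the CMake flag and binary names have changed between llama.cpp versions, and the Vulkan dev-package names vary by distro, so double-check against the llama.cpp build docs:

```shell
# Prerequisite: Vulkan headers/loader (Debian/Ubuntu-ish package name,
# may differ on your distro):
#   sudo apt install libvulkan-dev

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Build with the Vulkan backend enabled
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release

# Point it at a GGUF model; -ngl offloads layers to the GPU, and the
# Arc card should appear as a Vulkan device in the startup log.
./build/bin/llama-cli -m model.gguf -ngl 99 -p "Hello"
```

No idea what tokens/sec you'd actually get on Arc with this, though.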
Interesting, I see that is pretty new. Some of the documentation must be out of date because it definitely said Nvidia only somewhere when I tested it about a month ago. Thanks for giving me hope!
And AMD
You should be able to get llama.cpp to run on Arc, but I'm not sure what performance you'll get. It may not be worth it.