We believe the benefits of AI are too great to miss, and the risks too serious to ignore. Whether we like it or not, AI is here to stay, but the current iterations of AI reflect a failure to learn from the past. That’s why we built Lumo — a private AI assistant that only works for you, not the other way around. With no logs kept and every chat encrypted, Lumo keeps your conversations confidential and your data fully under your control — never shared, sold, or stolen.
You can start using Lumo today for free, even if you don’t have a Proton Account. Just go to lumo.proton.me and type in a query.
You can run models on AMD GPUs though
Really?
When I was looking into Ollama, I could have sworn it only supported Nvidia GPUs or the CPU. Can you point me to the docs for making it work on AMD? I'm running Bazzite, if that matters.
Ollama only ships with some of llama.cpp's backends, for reasons that aren't clear. llama.cpp itself lists HIP (ROCm) and Vulkan backends that work on AMD GPUs:
https://github.com/ggml-org/llama.cpp?tab=readme-ov-file#supported-backends
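FWIW, if you end up going through llama.cpp directly instead of Ollama, the Python bindings (llama-cpp-python) will use an AMD card once they're built against the HIP (ROCm) or Vulkan backend. Rough sketch below; the model path is just a placeholder and the CMake flag names are from memory, so double-check them against the README linked above:

```python
# Sketch: run a local GGUF model on an AMD GPU via llama-cpp-python.
# Assumes the package was built with a GPU backend that supports AMD, e.g.:
#   CMAKE_ARGS="-DGGML_HIP=ON"    pip install llama-cpp-python   # ROCm/HIP
#   CMAKE_ARGS="-DGGML_VULKAN=ON" pip install llama-cpp-python   # Vulkan
# (flag names taken from the llama.cpp build docs; verify before relying on them)
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3.1-8b-instruct-q4_k_m.gguf",  # placeholder, use any GGUF you have
    n_gpu_layers=-1,   # offload all layers to the GPU backend
    n_ctx=4096,        # context window size
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello from an AMD GPU."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```

If the layers actually land on the GPU, the load log printed by llama.cpp will say so; if it silently falls back to CPU, the build probably didn't pick up the backend.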