(Obligatory self post.) I normally don’t care enough to share my content, but I thought this post I wrote the other week would be of interest to this community.
TL;DR from the conclusion:
- the messages you send to Lumo must be temporarily decrypted so Lumo can process them.
- Lumo’s response is generated as unencrypted text before being encrypted and sent back to you.
- portions of the conversation context (previous messages) get resent with each interaction.
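Since those bullets describe a round trip rather than true end-to-end encryption, the flow can be sketched roughly like this. This is an illustration only: the XOR "cipher," the session key, and every name here are my own stand-ins for whatever transport and at-rest encryption Lumo actually uses, which the post doesn't detail.

```python
import secrets

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # XOR keystream: a TOY stand-in for real authenticated encryption,
    # used only to illustrate the data flow described above.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# 1. The client encrypts the prompt before it leaves the device.
session_key = secrets.token_bytes(32)
prompt = b"Summarize my notes"
wire_msg = toy_encrypt(prompt, session_key)

# 2. The server must briefly hold the plaintext to run the model on it.
server_plaintext = toy_decrypt(wire_msg, session_key)

# 3. The reply is generated as plaintext, then encrypted for transport.
reply = b"Here is a summary."
wire_reply = toy_encrypt(reply, session_key)

# 4. The client decrypts the reply; on the next turn, earlier messages
#    are re-sent (re-encrypted) as context along with the new prompt.
received = toy_decrypt(wire_reply, session_key)
```

The point of the sketch is step 2: whatever the transport encryption looks like, the model itself only ever operates on plaintext held in server memory.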
I wish they were more transparent about this. The whole release felt kinda rushed; they usually go into much more detail about their stuff. I’m totally fine with my data going into and coming out of encrypted RAM that is virtually zero-access, except perhaps for a few admins who could access it under four- or six-eyes principles.
But the way they market Lumo makes you expect more than it can deliver, and I feel like they’re overselling it a bit. That said, if you’re going to use an LLM, this seems to be the most privacy-conscious one you can use.
There is also duck.ai, which doesn’t require an account (and there’s no way to even have one).
Mistral, the French AI company, can also be used without an account through their chat app, and they offer an option to exclude your data from model training, even for free users.
I have used and can recommend both, although Lumo can also be used without an account.
Edit: Well they just made a fool out of me…
Ha! Well, I agree this would’ve been better early on, but I’m glad to see them dig into the details beyond the marketing language.
I swear half their problems are the problem dept being out of their depth.