

Hey, just tossing in a comment here, I think this post is a good post!
What’s your hypervisor manager? Or are you just bare metal?
For both VMware and Proxmox, I would recommend Veeam's Community Edition. It handles up to 10 VMs for free.
If you've got the funds as a small-to-large business, Veeam's first paid tier (billed yearly) is a solid option for backing up even more.
Caveat emptor whether you buy a license or not: Veeam runs on Windows only. I've run it from a single internal-network Windows VM dedicated just to Veeam. The UX is easy to pick up after a little research, and the UI is clean.
Bacula is deprecated, unfortunately.
I have looked for something similar. There are a number of places where FOSS project lists are maintained, but they're often focused on a single topic like 'privacy' or something akin, and they're rarely part of larger lists that can be sorted by the criteria you mentioned above.
The closest thing, if you're interested in other tools that might help: Alternative.to, a crowdsourced software search tool. It can filter to show only, say, open source projects, or sort by tags that denote the stacks used, languages used, etc. (see the screenshot of tags I added). It has been useful enough for my own needs when searching for the same kind of thing you're after.
Either way, best of luck! I haven’t been able to find something yet, myself.
Hello! I recently deployed GPUStack, a self-hosted GPU resource manager.
It helps you deploy AI models across clusters of GPUs, regardless of network or device. Got a Mac? It can toss a model on there and route it into an interface. Got a VM on a server somewhere? Same. How about your home PC, with that beefy gaming GPU? No prob. GPUStack is great at scaling what you have on hand, without having to deploy a bunch of independent ollama or llama.cpp instances.
I use it to route already-deployed LLMs into Open WebUI, another self-hosted interface for AI interactions, via the OpenAI-compatible API that both GPUStack and Open WebUI support!
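To give a feel for why this wiring works: both ends speak the standard OpenAI chat-completions wire format, so any client can talk to a GPUStack-served model. Here's a minimal sketch of what such a request looks like; the endpoint URL, port, model name, and API key are all assumptions (placeholders, not real values), so substitute your own deployment's details.

```python
# Sketch of an OpenAI-format chat request aimed at a GPUStack endpoint.
# The URL, model name, and key below are placeholders for illustration.
import json
import urllib.request

GPUSTACK_URL = "http://localhost/v1/chat/completions"  # assumed endpoint
API_KEY = "your-gpustack-api-key"  # placeholder

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Assemble a chat-completions request in the OpenAI wire format."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        GPUSTACK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_chat_request("llama-3.1-8b", "Hello!")
# The body is plain OpenAI-format JSON, which is why Open WebUI can
# point at the same endpoint with no adapter in between.
print(json.loads(req.data)["model"])
```

Since Open WebUI just takes a base URL and key for any OpenAI-compatible backend, pointing it at your GPUStack instance is a matter of filling in those same two values in its connection settings.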