Hi, so I’ve ended up bagging myself a big Supermicro server. I’m wanting to try out a little bit of everything with it, but one thing I really want is for services that haven’t been used for a bit to stop or sleep, and then wake up again or start on request, rather than me having to manually stop and start services. Is that a thing?
I know of Portainer and whatnot, but I’m wondering if anyone has any advice on this.
I’m planning on putting Debian on it I think (unless someone can convince me something else is better suited - I usually use Arch on my personal devices btw 😜)
Also, I know some basics of RAID, but I’ve only ever messed with RAID 0 on USB drives with a Pi. I have 8 bays but 2 are currently vacant. What is the process of just adding an extra drive to a RAID array, or replacing one that already exists?
Jerboa crashed mid-comment so I’ll be brief.
Save yourself pain and increase your happiness by
- using btrfs or ZFS (snapshots, checksums and self-healing are great)
- using a declarative approach rather than an imperative one, and keeping a copy of your configs elsewhere (I accidentally nuked my system multiple times, you should expect to do the same)
- keeping backups. If you go with ZFS, sanoid and syncoid are great: https://github.com/jimsalterjrs/sanoid and https://discourse.practicalzfs.com/t/setting-up-syncoid-for-offsite-backup/1611 (rough sketch below)
- having an extra tiny machine running the same system and workloads, where you test potentially risky stuff before doing it on the prod server
- setting up metrics with something like Prometheus and Grafana; they’re your friends
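For the ZFS route, the sanoid/syncoid setup boils down to something like this (just a sketch; the pool/dataset names, remote host and retention numbers are all made up):

```
# Assumes ZFS plus the sanoid/syncoid packages are already installed.
# /etc/sanoid/sanoid.conf tells sanoid what to snapshot and how long to keep it;
# the packaged cron job / systemd timer runs `sanoid --cron` against it.
cat <<'EOF' | sudo tee /etc/sanoid/sanoid.conf
[tank/data]
    use_template = production

[template_production]
    hourly = 24
    daily = 30
    monthly = 6
    autosnap = yes
    autoprune = yes
EOF

# Push the snapshots offsite (run this from cron or a systemd timer too)
syncoid tank/data backupuser@backuphost:backuppool/data
```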
If you want to have VMs as well, Proxmox is the go-to thing in self-hosting. Maybe your Supermicro even has two network interfaces and could host a virtualized firewall or the like.
Not quite sure about your services-going-to-sleep thing. Ideally, services won’t use much CPU while idling, but they will certainly use RAM. You can probably build something like you described, but it’s mostly not “a thing” afaik.
Ah, so I might be thinking that leaving a service running is worse than it actually is, then?
The motherboard has two network ports and a card with another two. There are also some fibre ports but I imagine I’ll never end up using them haha.
I don’t actually know much about firewalls at all yet though.
For the firewall, you can try OPNsense or OpenWrt.
You write up a procedure for the setup of your server and any virtual machines contained within.
Using declarative distros makes the procedure shorter and easier to maintain in the long run.
Then you use it to set up your system (fixing issues in your procedure along the way).
Then wipe and do it again (this pass should go through without issue, or you may need another one).
Then slowly grow your documentation and the services you have running.
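For example, the whole procedure can end up as little more than a script you keep in git and rerun after a wipe. This is only a sketch, assuming Debian with the Docker Compose plugin; the package names, repo URL and paths are placeholders:

```
#!/bin/sh
# Hypothetical bootstrap script: the written procedure, made rerunnable.
set -eu

# Base packages (adjust to whatever you actually run)
sudo apt-get update
sudo apt-get install -y docker.io prometheus-node-exporter

# Fetch the configs/compose files you keep versioned elsewhere
git clone https://example.com/you/homelab-configs.git "$HOME/homelab" || true

# Bring up every stack described in those configs
# (assumes the compose v2 plugin, i.e. `docker compose`, is available)
for dir in "$HOME/homelab/stacks"/*/; do
    [ -d "$dir" ] || continue
    (cd "$dir" && docker compose up -d)
done
```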
Good idea to run through it a few times before using it for real, thanks.
inetd, xinetd, et al. were how this was done back in the day. Many services use very little energy when they are not actively being used. That’s definitely not true across the board, though.
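On a modern systemd distro the closest equivalent is socket activation: systemd owns the port and only starts the service when something actually connects. A minimal inetd-style sketch, where the unit names, port and binary are all placeholders:

```
# systemd listens on the port for you
cat <<'EOF' | sudo tee /etc/systemd/system/myapp.socket
[Socket]
ListenStream=8080
Accept=yes

[Install]
WantedBy=sockets.target
EOF

# One instance is spawned per connection, inetd-style, so the program
# has to speak its protocol on stdin/stdout.
cat <<'EOF' | sudo tee /etc/systemd/system/myapp@.service
[Service]
ExecStart=/usr/local/bin/myapp
StandardInput=socket
EOF

# Enable only the socket; the service starts on the first connection.
sudo systemctl daemon-reload
sudo systemctl enable --now myapp.socket
```

Drop Accept=yes (and use a plain myapp.service) if the daemon natively supports socket activation; then a single long-lived instance is started on first use instead of one per connection.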
I echo the suggestion of Proxmox.
I think Portainer is probably the best tool for this since you can easily go in and pause/start services as required. Just make sure to go into the containers in Portainer and check the restart policy is set to “unless stopped” so you don’t get unwanted restarts after a reboot or anything like that.
I don’t think Portainer has any automation options, but you could possibly write a short cron script to run
docker compose down
in the directory of each compose file to shut them down once a month, and pair that with the Uptime Kuma container to get a notification when your containers are down, so you can go into Portainer and restart the ones you still need. Though I’ve never had any real issue with running lots of containers at once – there are 20 on my Raspberry Pi right now and it’s still got just over a gigabyte of RAM left.
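Something along these lines would do it. Rough sketch only; the stack paths and the schedule are placeholders:

```
#!/bin/sh
# /usr/local/bin/park-stacks.sh: stop the compose stacks you only need occasionally
set -eu
for dir in /opt/stacks/stack1 /opt/stacks/stack2; do
    (cd "$dir" && docker compose down)
done
```

Then a crontab entry like 0 4 1 * * /usr/local/bin/park-stacks.sh runs it at 04:00 on the first of every month, and Uptime Kuma will ping you that they’re down.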