So I have rebuilt my Production rack with very little in terms of an actual software plan.
I mostly host Docker-containerized services (Forgejo, Ghost blog, OpenWebUI, Outline), and I was previously hosting each one in its own Ubuntu Server VM on Proxmox, which rather defeats the purpose of containers.
My plan was to run a VM on each of these ThinkCentres, join them into a Kubernetes cluster, and run everything on that. But that also feels silly since these PCs are already clustered through Proxmox 9.
I was thinking about using LXC, but part of the point of the Kubernetes cluster was to learn a new skill that might be useful in my career, and I don’t know how any of this will work with Cloudflare Tunnels (cloudflared), which is my preferred means of exposing services to the internet.
I’m willing to take a class or follow a whole bunch of “how-to” videos, but I’m a little frazzled on my options. Any suggestions are welcome.
That’s a sick little rack.
Absolutely follow through with K8S, I recently did this and it’s definitely worth it.
Running the workers in VMs is a little wasteful, but it simplifies your hardware and your backups. My home lab version is 3 VMs on Proxmox. The idea is that after it’s built and working I can just move those VMs wholesale to other boxes. But realistically, adding workers to K8S is pretty brain-dead simple, and draining and migrating the old worker nodes is another skill you should be learning.
You could throw Debian on everything and deploy all your software through Ansible.
Don’t lose sight of the goal. Get k8s running, push through Longhorn, get some pods up in fault-tolerant mode, learn the networking, the ingress, the DNS, load balancing, proxies.
Exactly how you do it is less important than the act of doing it and learning kubectl.
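If it helps, here’s roughly the kind of kubectl loop I mean once the cluster is up (the whoami image and the node name are just placeholders, not anything you have to use):

```bash
kubectl get nodes -o wide                                        # confirm every worker joined
kubectl create deployment whoami --image=traefik/whoami --replicas=3
kubectl expose deployment whoami --port=80                       # put a ClusterIP service in front
kubectl get pods -o wide                                         # watch the replicas spread across nodes
kubectl drain worker-2 --ignore-daemonsets                       # practice draining a worker node
kubectl uncordon worker-2                                        # and bringing it back
```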
If you’re running Kubernetes, what is the point of LXC or Proxmox in this setup? Kubernetes will give better scaling and utilization.
True, but I also want to run VMs for specific tasks (mostly game servers), and I like to virtualize Linux distros I wanna play with, as well as keep a running Windows VM for the off chance I ever need Windows.
Damn that’s a good looking mini rack! Great job!
I don’t have much experience or advice about Proxmox, just wanted to show appreciation ✌️
Was going to say the same! Such a cute little mini-rack/server setup!
Still fighting my spaghetti setup with cables sprouting everywhere.
Just gotta say that looks like a really nice setup. Mine looks like a small rats nest!
The cables seem to increase exponentially don’t they? First, you have a few computers and a half dozen cables powering things and linking everything together, then you add a couple servers, maybe a second nic on your NAS, and another switch or two since things are now further from the router. Suddenly your office looks like a giant bowl of spaghetti covered in prop 65 stickers.
The rat’s nest is behind it. I need to redo some of the wiring.
I have all 4 power cables braided and zip-tied together with the single data cable, so it’s nice to pull out and put back into the entertainment center.
Only problem is I only had four 1-foot Ethernet cables and three 7-foot cables. So I used the 1-footers for the Pis, and the 7-footers are bundled up as best I could and neatly hidden.
I’m waiting on some color-coordinated 0.5-foot cables from Cables and Kits and I will swap the switch and patch panel. I want the Pis to have that one cable and that’s it, but I also want all the patch panel ports to work.
Just for fun, and for a layman’s benefit (me)…
Can you eli5?
Yeah!
So I am running these three computers in a setup that lets me manage virtual machines on them from a website, using Proxmox.
I want to play with a tool that lets me run Docker containers. Containers are a way to host services like websites and web apps without having to make a virtual machine for each app.
This has a lot of advantages, but specifically I’m trying to use the high-availability feature you get when you run these on a cluster of computers.
My problem is that I know I could use the built-in container software on the already-clustered Proxmox computers, called LXC (Linux Containers). However, I want to use a container orchestrator called Kubernetes, and for that I would have to build virtual machines on my servers and then cluster those virtual machines.
It’s a little confusing because I have three physical computers clustered together, and I’m trying to then build three virtual computers on them and cluster those. It’s an odd thing to do, and that’s the problem.
It’s not odd. You’ll need to build the 3 VMs if you want to run Kubernetes and not destroy your existing hypervisor.
Second
Done. Check parent comment!
Nah, use one VM on each node as the kube host. That’s fine. You’re doing it for fun, you don’t need to min-max your environment.
You’ll probably want to tear it down and redeploy it eventually anyway. That’s going to be a pain if you’ve installed them on bare metal.
Fair point. I was also thinking it would be fun to use CoreOS so I can get one step closer to ArchBTW
Do you know about the power draw?
I just recently tried to set up k3s in Proxmox LXC containers. I had to do everything again after I learned it was not possible to make this setup work without compromising security and isolation. Now I run Kubernetes inside virtual machines on Proxmox.
That’s what I was thinking too. I just feel better having another layer between the open web and my server.
To set up Kubernetes inside LXC you have to enable quite a few capabilities on the host kernel and in the LXC containers, and those can be used to escalate privileges from being root in the container to root on the Proxmox host. I’m not completely sure, but since containerd containers also share the same kernel, an attacker might even be able to escalate directly from a pod to the Proxmox host. I’m not certain about that last part, though.
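For context, these are the kinds of settings the common k3s-in-LXC guides have you add to the container config. This is only an illustration of how much isolation gets switched off, not a recommendation, and the container ID and exact keys vary by Proxmox version:

```bash
# Illustrative only: extra lines those guides typically append to /etc/pve/lxc/<CTID>.conf
cat >> /etc/pve/lxc/101.conf <<'EOF'
features: keyctl=1,nesting=1
lxc.apparmor.profile: unconfined
lxc.cap.drop:
lxc.cgroup2.devices.allow: a
lxc.mount.auto: proc:rw sys:rw
EOF
```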
Nice and tidy setup. What specs do the ThinkCentres have? And what rack is that?
This is a 52Pi 10-inch 8U rack.
I have 4 Raspberry Pi 4s (4 GB) running with PoE.
Some TP-Link gigabit switch with 4 PoE ports.
3 ThinkCentre Tinys, each with a Ryzen 5 2400GE, 32 GB of DDR4 RAM, a 512 GB Sabrent PCIe Gen 4 NVMe boot/VM drive, and a 512 GB PNY SATA SSD for databases.
I have a bigger server for AI stuff and storage. This is just the “production” server for my websites and Git repos.
I stole the set up idea from my man Jan Wildeboer
Yes OP, show us your rack (details)
Rack shown
If you decide to go the Kubernetes route, you can try k3sup to bootstrap k3s onto your VMs; it’s a nice half-step abstraction between Ansible and running curl yourself:
https://github.com/alexellis/k3sup
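A rough sketch of what that looks like against three VMs (the IPs and the ubuntu user are placeholders for whatever your Proxmox VMs actually use):

```bash
# k3sup installs/joins k3s over SSH; the kubeconfig lands in the current directory by default
k3sup install --ip 192.168.1.21 --user ubuntu --cluster                           # first server
k3sup join    --ip 192.168.1.22 --user ubuntu --server-ip 192.168.1.21 --server   # second server
k3sup join    --ip 192.168.1.23 --user ubuntu --server-ip 192.168.1.21            # worker/agent
export KUBECONFIG=$PWD/kubeconfig
kubectl get nodes
```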
I’ve landed on k3s as my k8s distro in my environment for a number of reasons. It seems to have the “mindshare” of self-hosters, and there’s lots of k3s documentation to peruse. I also really like that you can preload manifest files if you do decide to use Ansible, which makes cluster deploys that much more organized.
If you want to go a little off-beat, you could try “Canonical K8s (not MicroK8s)” as a snap. That worked REALLY well, and lets you do cool shit like “k8s enable loadbalancer” to automatically enable whole components for you, if you just want to focus on “consuming” Kubernetes instead of building it. I did notice a little overhead doing it as a snap, but my Proxmox node that runs the VM is purposely low spec (a quad-core Celeron if you believe it, 7 W TDP though)… so your hardware likely wouldn’t notice a difference.
https://documentation.ubuntu.com/canonical-kubernetes/release-1.32/snap/tutorial/getting-started/
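If it helps, the flow from that tutorial is roughly this (hedged sketch; the snap channel and feature names can differ slightly between releases):

```bash
sudo snap install k8s --classic --channel=1.32-classic/stable
sudo k8s bootstrap                 # bring up the first (and only) node
sudo k8s status                    # check that the cluster reports ready
sudo k8s enable load-balancer      # the one-liner component toggles mentioned above
sudo k8s kubectl get pods -A
```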
If you’re doing Proxmox already and you don’t already have a VM template and/or Terraform/OpenTofu with the Proxmox provider… it may help to tool on that too. Easier to destroy/build VMs when you get frustrated.
Not sure why, but I had an absolutely horrible time trying to set up k3s HA on 4 Raspberry Pis. After several hours I eventually gave up and decided to try MicroK8s, and it worked instantly. 🤷‍♂️
OP, if you don’t have a Proxmox VM template ready to go, here is a great starting place using cloud-init:
https://github.com/UntouchedWagons/Ubuntu-CloudInit-Docs
You can use this with the Proxmox GUI cloud-init config as well, to add your SSH key to each VM, etc.
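For reference, the shape of it is roughly this (hedged sketch; the VM IDs, storage name, and cloud image are placeholders, and the linked repo walks through the details):

```bash
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
qm create 9000 --name ubuntu-cloud-template --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0
qm importdisk 9000 noble-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm set 9000 --ide2 local-lvm:cloudinit --boot order=scsi0 --serial0 socket --vga serial0
qm set 9000 --ciuser ubuntu --sshkeys ~/.ssh/id_ed25519.pub
qm template 9000                                   # freeze it as a template
qm clone 9000 101 --name k8s-node-1 --full         # then clone one VM per node
```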
I’d still run k8s inside a proxmox VM. Even if it’s basically all resources dedicated to the VM, proxmox gives you a huge amount of oversight and additional tooling.
Proxmox doesn’t have to do much (or even anything) beyond providing a virtual machine. I’ve run Talos OS (a dedicated k8s distro) on bare metal. It was fine, but I wished I had a hypervisor. I was lucky that my project could be wiped and rebuilt with ease. Having a hypervisor would have meant I could just roll back to a snapshot, and separate worker/master nodes without running additional servers.
This was sorely missed when I was both learning the deployment of k8s, and k8s itself.
For the next project that is similar, I’ll run Talos inside Proxmox VMs. As far as “how does Cloudflare work in k8s”… however you want?
You could manually deploy the example manifests provided by cloudflare.
Or perhaps there are some Helm charts that can make it all a bit easier? Or you could install an operator, which will look for Custom Resource Definitions or specific metadata on standard resources, then deploy and configure the suitable additional resources in order to make it work.
https://github.com/adyanth/cloudflare-operator seems popular? I’d look to reduce the amount of YAML you have to write/configure by hand, which is why I like operators.
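If you just want to smoke-test a tunnel before committing to the operator, a quick-and-dirty sketch looks something like this (the namespace is arbitrary and $TUNNEL_TOKEN is a placeholder for the token from the Zero Trust dashboard):

```bash
kubectl create namespace cloudflared
kubectl -n cloudflared create deployment cloudflared \
  --image=cloudflare/cloudflared:latest \
  -- cloudflared tunnel --no-autoupdate run --token "$TUNNEL_TOKEN"
kubectl -n cloudflared get pods
# Public hostnames are then mapped in the Cloudflare dashboard to in-cluster service URLs,
# e.g. http://my-service.my-namespace:80
```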
Quality answer. Glad my hunch was backed up by your experience. That’s very appreciated.
I hadn’t tried anything with Cloudflared and Kubernetes yet so it would be sick to see it just work.
I think Cloudflare Tunnels will require a different setup on k8s than on regular Linux hosts, but it’s such a popular service among self-hosters that I have little doubt that you’ll find a workable process.
(And likely you could cheat, and set up a small Linux VM to “bridge” k8s and Cloudflare Tunnels.)
Kubernetes is different, but it’s learnable. In my opinion, K8S only comes into its own in a few scenarios:
- Really elastic workloads. If you have stuff that scales horizontally (uncommon), you really can tell Amazon to give you more Kubernetes nodes when load grows, and destroy the nodes when load goes down. But this is not really applicable for self hosting, IMHO.
- Really clustered software. Setting up, say, a PostgreSQL cluster is a ton of work. But people create K8S operators that you feed a declarative configuration (I want so many replicas, I want backups at this rate, etc.) and that work out everything for you… in a way that works in all K8S implementations! This is also very cool, but I suspect that there’s not a lot of this in self-hosting. (There’s a small sketch of this right after the list.)
- Building SaaS platforms, etc. This is something that might be more reasonable to do in a self-hosting situation.
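To make the second bullet concrete, here’s the sort of declarative config a Postgres operator takes, using CloudNativePG as one example (the names and sizes are made up, and you’d install the operator per its own docs first):

```bash
# Assumes the CloudNativePG operator is already installed in the cluster
kubectl apply -f - <<'EOF'
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-demo
spec:
  instances: 3        # "I want so many replicas"
  storage:
    size: 5Gi
EOF
kubectl get pods -l cnpg.io/cluster=pg-demo   # the operator works out the rest
```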
Like the person you’re replying to, I also run Talos (as a VM in Proxmox). It’s pretty cool. But in the end, I only run 4 apps there that I’ve written myself, so using K8S as a kind of SaaS… and another application, https://github.com/avaraline/incarnator, which is basically distributed as container images and which I was too lazy to deploy in a more conventional way.
I also do this for learning. Although I’m not a fan of how Docker Compose is becoming dominant in the self-hosting space, I have to admit it makes more sense than K8S for self-hosting. But K8S is cool and might get you a cool job, so by all means play with it. Maybe you’ll have fun!
Running the k8s nodes in their own VMs will allow you to hedge against mistakes and keep some separation between infra and kube.
I personally don’t use proxmox anymore, but I deploy with ansible and roles, not k8s anymore.
Ansible is next on my list of things to learn.
I don’t think I’ll need to dedicate all of my compute space to K8s probably just half for now.
Ansible is next on my list of things to learn.
Ansible is Y2K tech brought to you in 2010. Its workarounds for its many problems bring problems of their own. I’d recommend mgmtconfig, but it’s a deep pool if you’re just getting into it. Try Chef (cinc.sh) or SaltStack, but keep mgmtconfig on the radar when you want to switch from 2010 tech to 2020 tech.
My issue with mgmt.config is that it bills itself as an api-driven “modern” orchestrator, but as soon as you don’t have systemd on clients, it becomes insanely complicated to blast out simple changes.
Mgmt.config also claims to be “easy”, but you have to learn MCL’s weird syntax, which is the issue I have with Chef and its use of Ruby.
Yes, ansible is relatively simple, but it runs on anything (including being supported on actual arm64) and I daresay that layering roles and modules makes ansible quite powerful.
It’s kind of like nagios… Nagios sucks. But it has such a massive library of monitoring tricks and tools that it will be around forever.
have to learn MCL’s weird syntax
You skewer two apps for syntax, but not Ansible’s fucking YAML? Dood. I’m building out a layered declarative config at the day-job, and it’s just page after page with python’s indentation fixation and powershell’s bipolar expressions. This is better for you?
deleted by creator
Wow, huge disagree on saltstack and chef being ahead of Ansible. I’ve used all 3 in production (and even Puppet) and watched Ansible absolutely surge onto the scene and displace everyone else in the enterprise space in a scant few years.
Ansible is just so much lower overhead and so much easier to understand and make changes to. It’s dominating the configuration management space for a reason. And nearly all of the self hosted/homelab space is active in Ansible and have tons of well baked playbooks.
I’ve used all 3 in production (and even Puppet) and watched Ansible absolutely surge onto the scene and displace everyone else in the enterprise space in a scant few years.
Popular isn’t always better. See: Betamax/VHS, Blu-ray vs HDDVD, skype/MSSkype, everything vs Teams, everything vs Outlook, everything vs Azure. Ansible is accessible like DUPLO is accessible, man, and with the payola like Blu-ray got and the pressuring like what shot systemd into the frame, of course it would appeal to the C-suite.
Throwing a few-thousand at Ansible/AAP and the jagged edges pop out – and we have a team of three that is dedicated to Nagios and AAP. And it’s never not glacially slow – orders of magnitude slower than absolutely everything.
Yeah, similar-sized environments here too, but I’ve had good experiences with Ansible. I saw Chef struggle at even smaller scales. And Puppet. And SaltStack. But I’ve also seen all of them succeed too. Like most things, it depends on how you run it. Nothing is a perfect solution. But I think Ansible has few game-breaking tradeoffs for its advantages.
I’m trying to get into self hosting but I’m really completely lost. Do you have any advice about where to start?
Just a tip for hardware: don’t buy anything unless you really know what you need. Just start tinkering with some old computer/laptop. Most services will run fine on anything up to ~10 years old.
Once you have stuff running on an old computer you’ll get to know what you actually need and can spend your money more intelligently. If you do buy anything, buy an ~8-year-old corporate desktop. They’re cheap as chips because they’re close to e-waste, but 4th-6th gen Intel systems have enough performance to really do a ton with in the homelab scene.
The only thing I bought was a switch and a NAS, both second hand. You can spend a lot for nothing in return.
I started a year ago, from scratch. Fumbled around with a Raspberry Pi for a few months and then bought a mini PC for 100 euros (a Lenovo Tiny M73 with 8 GB of RAM and a 500 GB SSD). That’s all you need.
Proxmox is a great way to go because it’s quite easy to create and delete virtual machines. You’ll be starting over quite a bit in the beginning.
I recommend documenting your stuff so you can easily start over. Claude.ai has been a great help for me to troubleshoot. AI is awesome to get the typos out of your config files.
I always point people here: https://youtu.be/uPYjJYQEFSg
Hard to give you hints when we don’t know what your background is, so here are some basics:
For starting selfhosting I’d recommend getting comfortable with the linux command line at first (this may help: https://www.linuxcommand.org/). Set up a VM in Virt-manager / VirtualBox / VMWare / whatever hypervisor you want, install a Linux image (I’d recommend plain Debian without desktop environment). Now you have a sandbox where you can toy around. If you’re on windows you can use WSL2. If you’re already on a linux desktop, toy around there.
If you already got some hardware like a raspberry pi or old Laptop, get that up and running with a distro of your choice, plug it into your network and SSH into it, then you have got your playground there. Get the basic commands in like ls, pwd, cat, tail, touch, mkdir, rm, … And some things you can do with them. Check out their respective man-pages.
After that, install some packages, change configs (I’d recommend nano over vim for starters). From now on, there are no boundaries of what to do. Set up your first basic webserver with apache / nginx / caddy, install docker / podman and containerize / get some images, set up pihole, nextcloud, jellyfin, do whatever you like… Congratulations, you are now “self hosting”.
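As one tiny example of that first milestone, on a Debian-ish box with nothing installed yet it can be as small as this (the container name and port are arbitrary):

```bash
sudo apt update && sudo apt install -y docker.io      # Debian/Ubuntu's packaged Docker
sudo docker run -d --name hello-web -p 8080:80 nginx  # run a web server in a container
curl http://localhost:8080                            # the nginx welcome page answers
sudo docker logs hello-web                            # see the request you just made
sudo docker rm -f hello-web                           # tear it down when you're done playing
```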
Maybe some day switch that Raspberry pi out for a thin client as seen in the picture from OP and install a hypervisor like Proxmox on it. If you got all that, which may take a while, you can consider networking and firewalls IMHO (you could get a cheap router that supports OpenWRT to learn about these things). Don’t open ports to the internet as long as you’re not 100% sure what you are doing. You can set up a VPN with DynDNS on most modems / routers connected to your ISP though, opening up your self hosted services only to you / anyone with access. Or use something like Tailscale / Twingate.
I could go on, but like I said, self hosting and home labbing is kind of use case / requirement specific.
I would say figure out what you actually want to do. Do you want to host a website, run a media server, have a wiki, document storage? Then find the application that’s appropriate for it. See what the possible installation methods are and choose whatever you are comfortable with.
As you dive more into it and get comfortable with things and your needs increase you will eventually fall into the hole 🙂
This is pretty rad! Thanks for sharing. I went down the same road with learning k3s on about 7 Raspberry Pis and pivoted over to Proxmox/Ceph on a few old gaming PCs / Ethereum miners. Now I am trying to optimize the space and looking at how to rack mount my ATX machines with GPUs lol… I was able to get a RTX 3070 to fit in a 2U rack mount enclosure but having some heat issues… going to look at 4U cases with better airflow for the RTX 3090 and various RX480s.
I am planning to set up Talos VMs (one per Proxmox host) and bootstrap k8s with Traefik and others. If you’re learning, you might want to start with a batteries-included k8s distro like k3s.
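For what it’s worth, the per-VM Talos flow is roughly this (hedged sketch; the IP and cluster name are placeholders, and the control-plane VM needs to be booted from the Talos ISO first):

```bash
talosctl gen config prod-cluster https://192.168.1.30:6443     # writes controlplane.yaml, worker.yaml, talosconfig
talosctl apply-config --insecure --nodes 192.168.1.30 --file controlplane.yaml
export TALOSCONFIG=$PWD/talosconfig
talosctl config endpoint 192.168.1.30
talosctl config node 192.168.1.30
talosctl bootstrap                                             # start etcd on the first control-plane node
talosctl kubeconfig .                                          # fetch a kubeconfig into the current directory
kubectl --kubeconfig kubeconfig get nodes
```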
Apartment is too small and my partner is too noise-sensitive to get away with a rack. So my local LLM and Jellyfin encoder plus my NAS exist like this for the summer. Temps have been very good once the panels came off.
Side question… looks like you got the DeskPi tower with the Raspberry Pi rack. Did you figure out what the holes on the side are for, with that (not sure what it is) expansion slot (looks like you put your labels over it)? Not sure what it’s for…
Yes! They sell an NVMe HAT that I assume is for the Pi 5 (these are Pi 4s I had lying around). It also moves the HDMI and USB-C to the front.
It’s the RS-P11 expansion board. I can only find it bundled with the normal rack-mount kit. I got mine off Newegg from a reseller, I guess, so no expansion boards.