Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

  • HiTekRedNek@lemmy.world

    In my own experience, certain things should always be on their own dedicated machines.

    My primary router/firewall is on bare metal for this very reason.

    I do not want to worry about my home network becoming completely unusable for the rest of my family because I decided to tweak something on the server.

    I could quite easily run OPNsense in a VM, and I do that, too. I run Proxmox and have OPNsense installed and configured to at least provide connectivity for most devices. (Long story short: I have several subnets in my home network, but my VM OPNsense setup does not, as I only had one spare interface on that equipment, so only devices on the primary network would work.)

    And tbh, that only exists because I did have a router die, and I installed OPNsense on my Proxmox server temporarily while awaiting new-to-me equipment.

    I didn’t see a point in removing it. So it’s there, just not automatically started.

    • AA5B@lemmy.world

      Same here. In particular I like small, cheap hardware to act as appliances, and I have several Raspberry Pis.

      My example is Home Assistant. Deploying it on its own hardware means an officially supported management layer, which makes my life easier. It is actually running containers, but I don’t have to deal with that. It also needs to be always available, so I use efficient, “right-sized” hardware, and it works regardless of whether I’m futzing with my “lab”.

      • Damage@feddit.it

        My example is Home Assistant. Deploying it on its own hardware means an officially supported management layer, which makes my life easier.

        If you’re talking about backups and updates for addons and core, that works on VMs as well.

  • zod000@lemmy.dbzer0.com

    Why would I want to add overhead and complexity to my system when I don’t need to? I can totally see legitimate use cases for docker, and for work purposes I use VMs constantly. I just don’t see a benefit to doing so at home.

  • splendoruranium@infosec.pub

    Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

    If it ain’t broke, don’t fix it 🤷

  • medem@lemmy.wtf

    The fact that I bought all my machines used (and mostly on sale), and that not one of them is general purpose; id est, I bought each piece of hardware with a (more or less) concrete idea of what its use case would be. For example, my machine acting as a file server is way bigger and faster than my desktop, and I have a 20-year-old machine with very modest specs whose only purpose is to be a dumb client for all the bigger servers. I develop programs on one machine and surf the internet and watch videos on the other. I have no use case for VMs besides the Logical Domains I set up on one of my SPARC hosts.

  • Frezik@lemmy.blahaj.zone

    My file server is also the container/VM host. It does NAS duties while containers/VMs do the other services.

    OPNsense is its own box because I prefer to separate it for security reasons.

    Pi-hole is on its own RPi because that was easier to set up. I might move that functionality to the AdGuard plugin on OPNsense.

    • HiTekRedNek@lemmy.world

      My reasons for keeping OPNsense on bare metal mirror yours. But additionally, I don’t want my network to take a crap because my Proxmox box goes down.

      I constantly am tweaking that machine…

  • fubarx@lemmy.world

    Have done it both ways. Will never go back to bare metal. Dependency hell forced multiple clean installs, right down to the bootloader.

    The only constant is change.

  • FreedomAdvocate@lemmy.net.au

    Containerisation is all the rage, but in reality it’s not needed at all except by a tiny number of self-hosters. If a native program option exists, it’s generally just easier and more performant to use that.

    Docker and the like shine when you’re frequently deploying and destroying. If you’re doing that with your home server you’re doing it very wrong.

    I like docker, I use it on my server, but I am more and more switching back to native apps. There’s just zero advantage to running most things in docker.

  • sepi@piefed.social

    “What is stopping you from” <- this is a loaded question.

    We’ve been hosting stuff long before docker existed. Docker isn’t necessary. It is helpful sometimes, and even useful in some cases, but it is not a requirement.

    I had no problems with dependencies, config, etc because I am familiar with just running stuff on servers across multiple OSs. I am used to the workflow. I am also used to docker and k8s, mind you - I’ve even worked at a company that made k8s controllers + operators, etc. I believe in the right tool for the right job, where “right” varies on a case-by-case basis.

    tl;dr docker is not an absolute necessity and your phrasing makes it seem like it’s the only way of self-hosting you are comfy with. People are and have been comfy with a ton of other things for a long time.

    • kiol@lemmy.world (OP)

      The question is worded that way on purpose, so that you’ll fill in what it means to you. The intention is to get responses from people who are not using containers, that is all. Thank you for responding!

  • missfrizzle@discuss.tchncs.de

    pff, you call using an operating system bare metal? I run my apps as unikernels on a grid of Elbrus chips I bought off a dockworker in Kamchatka.

    and even that’s overkill. I prefer synthesizing my web apps into VHDL and running them directly on FPGAs.

    until my ASIC shuttle arrives from Taipei, naturally, then I bond them directly onto Ethernet sockets.

    /uj not really but that’d be sick as hell.

  • atzanteol@sh.itjust.works

    Containers run on “bare metal” in exactly the same way other processes on your system do. You can even see them in your process list, FFS. They’re just running in different cgroups that limit access to resources.
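
    For anyone who wants to see it for themselves, something like this works (rough sketch, assumes Docker and the nginx image; names and paths are just examples):

        # start a throwaway container, then look at it from the host
        docker run -d --name web nginx
        pid=$(docker inspect -f '{{.State.Pid}}' web)
        ps -fp "$pid"            # there it is, in the host's process list
        cat /proc/$pid/cgroup    # just sitting in its own cgroup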

    Yes, I’ll die on this hill.

    • sylver_dragon@lemmy.world

      But, but, docker, kubernetes, hyper-scale convergence and other buzzwords from the 2010s! These fancy words can’t just mean resource and namespace isolation!

      In all seriousness, the isolation provided by containers is significant enough that administration of containers is different from running everything in the same OS. That’s different in a good way, though; I don’t miss the bad old days of everything on a single server in the same space. Anyone else remember the joys of Windows Small Business Server? Let’s run Active Directory, Exchange and MSSQL on the same box. No way that will lead to prob… oh shit, the RAM is on fire.

      • sugar_in_your_tea@sh.itjust.works

        kubernetes

        Kubernetes isn’t just resource isolation; it encourages splitting services across hardware in a cluster. So you’ll get more latency than with VMs, but you get to scale the hardware much more easily.

        Those terms do mean something, but they’re a lot simpler than execs claim they are.

      • atzanteol@sh.itjust.works

        Oh for sure - containers are fantastic. Even if you’re just using them as glorified chroot jails they provide a ton of benefit.

  • hperrin@lemmy.ca

    There’s one thing I’m hosting on bare metal: a WebDAV server. I’m running it on the host because it uses PAM for authentication, and that doesn’t work in a container.

  • Bogusmcfakester@lemmy.dbzer0.com

    I’ve not cracked the Docker nut yet. I don’t get how I back up my containers and their data. I would also need to transfer my Plex database into its container while switching from Windows to Linux. I love Linux, but I haven’t figured out these two things yet.

    • Passerby6497@lemmy.world

      All your docker data can be saved to a mapped local disk, and then backup is the same as it ever is. Throw borg or something on it and you’re gold.

      Look into docker compose and volumes to get an idea of where to start.
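
      Concretely, it can be as simple as this (paths are made up; treat it as a sketch, not a recipe):

          # one-time: create a borg repo somewhere safe
          borg init --encryption=repokey /backups/borg-repo

          # then back up whatever the containers have written to the mapped folders
          borg create --stats /backups/borg-repo::docker-{now:%Y-%m-%d} /srv/docker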

    • boiledham@lemmy.world

      You would leave your Plex config and db files on the disk and then map them into the Docker container via a volume (the -v parameter if you are running from the command line rather than docker-compose). The same goes for any other container where you want to persist data on the drive.
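
      For example, something along these lines (the image name and host paths are placeholders, adjust to taste):

          # /srv/plex/config holds the existing Plex config/database on the host,
          # /srv/media is the media library
          docker run -d --name plex \
            --network=host \
            -v /srv/plex/config:/config \
            -v /srv/media:/media \
            plexinc/pms-docker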

    • purplemonkeymad@programming.dev

      An easy option is to map the data folders for the container you are using as a volume pointing at a local folder. The container will just put its files there, and you can back up that folder. Restoring is just putting the files back and setting the same volume mapping so the container sees them again.

      You can also use the same method to access the db directory for the migration. Typically, for databases, you want to make sure the container is stopped before doing anything with those files.
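
      Roughly like this (container name and paths are just examples):

          docker stop plex           # make sure the db files are quiescent
          cp -a /srv/plex/config "/backups/plex-config-$(date +%F)"
          docker start plex

      Restoring is the same idea in reverse: stop the container, copy the files back into the mapped folder, then start it again.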

    • hperrin@lemmy.ca

      Anything you want to back up (data directories, media directories, db data) should use a bind mount to a directory on the host. Then you can back it up just like everything else on the host.

  • nucleative@lemmy.world

    I’ve been self-hosting since the ’90s. I used to have an NT 3.51 server in my house. I had a dial-in BBS that worked because of an extensive collection of .bat files that would echo AT commands to my COM ports to reset the modems between calls. I remember when we had to compile the Slackware kernel from source to get peripherals to work.

    But in this last year I took the time to seriously learn docker/podman, and now I’m never going back to running stuff directly on the host OS.

    I love it because I can deploy instantly, oftentimes with a single command line. Docker Compose allows for quickly nuking and rebuilding, oftentimes saving your entire config to one or two files.

    And if you need to slap a traefik, or a postgres, or some other service into your group of containers, it can now be done in seconds, completely abstracted from any kind of local dependencies. Even more useful, if you need to move them from one VPS to another, or upgrade/downgrade the core hardware, it’s now a process that takes minutes. Absolutely beautiful.
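
    To give a flavour of the workflow (directory layout and host name are invented, just a sketch):

        cd ~/stacks/media && docker compose up -d    # whole stack up in one go
        docker compose down                          # and gone again when you want a clean slate

        # moving hosts is mostly copying the compose file plus the mapped data dirs
        rsync -a ~/stacks/media/ user@newhost:~/stacks/media/
        ssh user@newhost 'cd ~/stacks/media && docker compose up -d'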

    • roofuskit@lemmy.world

      Hey, you made my post for me, though I’ve been using docker for a few years now. Never looking back.