What’s going on on your servers? Smooth operations or putting out fires?

I got some tinkering time recently and migrated most of my Docker services to Komodo/Forgejo. I’ve already merged some Renovate PRs to update my containers, which feels really smooth.
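For context, a minimal renovate.json along these lines is enough to get those PRs; Renovate’s docker-compose manager is enabled by default, so pinned image tags in compose files get update PRs from just the recommended preset:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"]
}
```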

I have to restructure some of the remaining services before migrating them, and after that I want to automate config backups for my OPNsense and TrueNAS machines.
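The rough shape I have in mind for those config backups is a pull over SSH. A sketch; the hostnames are placeholders, and the paths are the stock config locations as far as I know, so verify them on your versions:

```sh
# OPNsense keeps its entire config in one XML file,
# TrueNAS its settings database in one sqlite file.
ssh root@opnsense 'cat /conf/config.xml'    > backups/opnsense-$(date +%F).xml
ssh root@truenas  'cat /data/freenas-v1.db' > backups/truenas-$(date +%F).db
```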

  • Gobbel2000@programming.dev · 4 hours ago

    Did an oopsie. I never realized that after upgrading the OS, the certbot renewal service for my HTTPS certificate had been failing every time. So now I had an expired certificate. At least it was an easy fix: reinstalling certbot.
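    For anyone wanting to catch this earlier, there’s a quick check that doesn’t touch the live certificate (unit names vary a bit by distro):

    ```sh
    # Is the renewal timer present and scheduled? (certbot-renew.timer on some distros)
    systemctl list-timers 'certbot*'
    # Would a renewal actually succeed right now? Dry run only:
    sudo certbot renew --dry-run
    ```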

  • haulyard@lemmy.world · 10 hours ago

    Run most of my containers on a Synology 1621+. Immich, paperless, a Grafana-based monitoring stack, etc. Upgraded the memory from 8GB to 32GB and it’s a night-and-day difference. Enough that I’m probably not going to move forward with adding NVMe storage for a cache. Wish I’d done it sooner.

    Added donetick to the collection recently, and the kids have really latched onto the points system; it’s got them more engaged with helping around the house. Note that if you self-host, the password reset function doesn’t work. You have to update the hash in the database directly. Not a big deal, but it really shouldn’t require that level of effort.
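    For anyone who hits the same thing, the workaround has roughly the following shape; the database file, table and column names here are guesses, as is the hash scheme, so check donetick’s actual schema before touching anything:

    ```sh
    # HYPOTHETICAL names throughout; inspect the real schema first (.schema in sqlite3)
    python3 -c "import bcrypt; print(bcrypt.hashpw(b'new-password', bcrypt.gensalt()).decode())"
    sqlite3 donetick.db "UPDATE users SET password = '<paste hash here>' WHERE username = 'kid1';"
    ```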

    I use Scrypted to pipe UniFi PoE cameras into HomeKit. Not really a fire, but I’m having issues with notifications taking longer than they used to.

    And lastly, a very very different turn of events for me. I’m not a developer, but I recently did an experiment to see what AI could do in helping me create a web app that could be used to scan all the physical books we have. A catalog of sorts for us to be able to look up what we have by genre, bookshelf location, etc. 100% vibe coding. Took a few hours of back and forth but we have something working. Not sure I’ll ever let the code see the light of day outside my network, but it did help me learn a tiny bit about coding.

  • Alexander@sopuli.xyz · 14 hours ago

    Certbot complained about an invalid file structure… after 18 successful renewals over the years. Seriously, those guys should put a bit more care into updating stuff. Of course, the fix was trivial.

  • dotslashme@infosec.pub · 15 hours ago

    Sunday is upgrade day, meaning I will update the OS on my servers and then update all my helm charts.
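    The chart half of that is mostly mechanical. A sketch with made-up release and chart names, keeping each release’s existing values:

    ```sh
    # Pull the latest chart indexes, then roll each release forward.
    helm repo update
    helm upgrade grafana grafana/grafana --namespace monitoring --reuse-values
    helm upgrade loki grafana/loki --namespace monitoring --reuse-values
    ```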

  • Pencilnoob@lemmy.world · 15 hours ago

    I’ve been hosting Home Assistant for about nine months now, very smoothly. So far the only outage has been when I moved the server into a different room last week because it was cluttering up my office. Feels good to have it tucked away in a closet. I’ve got my thermostat and all my upstairs lights running through it. I keep trying to get these moisture sensors to work for my plants, but they just keep losing signal or something, and rtl_433 just stops seeing them while it detects every other damn device on the whole block. I’ve got 500+ entities that aren’t my three sensors lol. Once I figure that out my Plant Cards dashboard will work again, which is super cool.
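    One way to tell “sensor stopped transmitting” apart from “decoder stopped matching” is to watch the raw receptions with no protocol filter (assuming the usual 433.92 MHz band):

    ```sh
    # Print every decoded transmission as JSON; if the sensors show up here
    # but not in Home Assistant, the radio side is fine.
    rtl_433 -f 433.92M -F json
    # `rtl_433 -R help` lists the decoders if you want to filter to one protocol.
    ```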

    Other than that, I’ve been hosting Jellyfin for home media and digging it. Much less success with the ’arr suite; it works and stays up, but it really struggles to automatically find the weird-ass old niche media I’m looking for. I generally have to handle that part manually, but at least it serves as a wishlist of what I’m looking for, so I can use it just to keep track.

    All in all these two servers (Home Assistant OS on an old laptop, Jellyfin and the ’arrs on my former Ubuntu desktop) have been great and really just work without any issues.

    Oh, and last week when I moved them into the closet they got assigned new IP addresses, so I figured out how to pin them so my clients and bookmarks still work.
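    On the Ubuntu box that can be done on the machine itself with a static netplan config (interface name and addresses below are placeholders; a DHCP reservation on the router does the same job):

    ```yaml
    # /etc/netplan/01-static.yaml -- apply with `sudo netplan apply`
    network:
      version: 2
      ethernets:
        enp3s0:
          dhcp4: false
          addresses: [192.168.1.50/24]
          routes:
            - to: default
              via: 192.168.1.1
          nameservers:
            addresses: [192.168.1.1]
    ```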

  • ccryx [he/him]@discuss.tchncs.de · 14 hours ago

    I’ve finally pinned down my backup automaton:

    • All my services are in podman containers/pods managed by systemd.
    • All services are PartOf= a custom containers.target.
    • All data is stored on btrfs subvolumes.
    • I created a systemd service that Conflicts=containers.target for creating read-only snapshots of the relevant subvolumes (a minimal sketch of that unit follows this list).
    • That service Wants=borgmatic.service, which creates a borg backup of the snapshots on a removable drive. It also starts containers.target on success or failure, since the containers aren’t required to be stopped anymore.
    • After the borg backup is done, the repository gets rclone-synced to an S3-compatible storage.
    • This happens daily, though I might put the S3 sync on a different schedule, depending on how much bandwidth subsequent syncs consume.
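    The snapshot unit mentioned above looks roughly like this (a sketch; paths and names are simplified placeholders):

    ```ini
    [Unit]
    Description=Read-only btrfs snapshots of service data
    # Starting this unit stops everything in containers.target
    Conflicts=containers.target
    # Pull in the borg run, ordered after the snapshots exist
    Wants=borgmatic.service
    Before=borgmatic.service
    # Containers can come back up either way; borg only reads the snapshots
    OnSuccess=containers.target
    OnFailure=containers.target

    [Service]
    Type=oneshot
    ExecStart=/usr/bin/btrfs subvolume snapshot -r /srv/data /srv/snapshots/data
    ```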

    What I’m not super happy about is starting containers.target via the unit’s OnSuccess= mechanism, but I couldn’t find an elegant way of stopping the target while the snapshots were being created and then restarting it through the other dependency mechanisms.

    I also realize it’s a bit fragile, since subsequent backup steps are started even if previous steps fail. But in the worst case that should just lead to either no data being written (if the mount is missing) or backing up the same data twice (not a problem due to deduplication).

  • confusedpuppy@lemmy.dbzer0.com · 13 hours ago

    I bought a second USB SSD which has now become the second backup SSD. I ended up skipping my switch to Podman because I got invested in writing another script.

    I’m not interested in having my backup drives automatically decrypt and mount at startup, but those were the only guides I could find. I still want to manually type my password, and I wanted an easier way to handle that.

    I ended up writing this script, which turned the 4 lines of code I was using before into a 400+ line single-file script.

    Once I pair it with my rsync script, I’ll be able to remotely, automatically and interactively decrypt, mount, update my backup, unmount and re-encrypt my USB SSD. The script also has checks to make sure the mount directory is ready for use, and rsync won’t send anything if the encrypted SSD isn’t mounted. I just finished writing the script and now I have to integrate it into my systems.
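    The underlying flow the script wraps is the standard one, assuming LUKS; the device name, mapper name and mount point below are placeholders:

    ```sh
    # Passphrase is typed interactively at this prompt, never stored on disk.
    cryptsetup open /dev/sdb1 backup
    mount /dev/mapper/backup /mnt/backup
    rsync -a --delete /srv/data/ /mnt/backup/data/
    umount /mnt/backup
    # Once closed, the drive is fully encrypted again at rest.
    cryptsetup close backup
    ```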

    I was originally going to add the second backup to my local-only network Pi server, but I think I’ll add it to my web-facing Pi server so I’m able to access it remotely. I’d feel a lot more comfortable knowing the data on there isn’t easily accessible, since it isn’t auto-mounted.

    Other than that, things are boring and boring is good.

  • Sunoc@sh.itjust.works · 15 hours ago

    Sunday

    What’s good, Kiribati? 🇰🇮

    I made some effort recently to try and set up a k3s cluster with Flux, but my bunch of RPis just isn’t powerful enough for that (I never managed to deploy Longhorn). I’m moving to a more reasonable Docker Swarm with Ansible.
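    The Swarm side is pleasantly small in Ansible with the community.docker collection. A sketch; the inventory group names are made up:

    ```yaml
    # swarm.yml -- assumes inventory groups 'managers' and 'workers'
    - hosts: managers[0]
      tasks:
        - name: Initialise the swarm on the first manager
          community.docker.docker_swarm:
            state: present
            advertise_addr: "{{ ansible_default_ipv4.address }}"
          register: swarm

    - hosts: workers
      tasks:
        - name: Join the workers using the token from the first manager
          community.docker.docker_swarm:
            state: join
            join_token: "{{ hostvars[groups['managers'][0]].swarm.swarm_facts.JoinTokens.Worker }}"
            remote_addrs:
              - "{{ hostvars[groups['managers'][0]].ansible_default_ipv4.address }}"
    ```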

    • tofu (OP) · 15 hours ago

      Uh, yeah, that’s right, I’m in Kiribati and absolutely didn’t confuse the weekdays 🫣

      A bit disappointing that it doesn’t seem to run on your Raspis; I thought that’s basically what k3s is for? Is Longhorn the problem, or which component exactly? I figure if you’re going to switch to Swarm, the payload isn’t the issue.

      • Sunoc@sh.itjust.works · 13 hours ago

        I noticed because I live very far east, and in general I’m off by one day in the other direction x)

        I had two problems with my gitops setup:

        • Longhorn would fail to deploy; some container creation timeout crashed on repeat for hours for no reason I could find.
        • I never managed to get a working ingress, neither with the built-in Traefik nor by adding it afterwards.

        Probably a skill issue in both cases, but the trial and error was slow and annoying, so I figured I would just upgrade my Ansible setup instead.
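        In case I ever revisit it, the stock way to dig into that Longhorn timeout (the namespace assumes the chart default; open-iscsi is a documented Longhorn requirement that’s easy to miss on a fresh Pi):

        ```sh
        # See which pods are stuck; the Events section usually names the timeout
        kubectl -n longhorn-system get pods
        kubectl -n longhorn-system describe pod <stuck-pod>
        kubectl -n longhorn-system logs <stuck-pod> --previous
        # Longhorn needs open-iscsi on every node
        sudo apt install open-iscsi
        ```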