

  • Not exactly the same, but I have an air quality sensor I use to turn the HVAC fan mode on/off to filter. Also a CO2 sensor. Both wired to the RPi I run homeassistant on. The HVAC is controlled via T6 pro Z-wave now, but I started out with a Zooz Zen15 switch to just turn the whole thing on/off.

    The CO2 sensor has been pretty stable for 4(?) years - it has an internal recalibration routine that resets its baseline based on the past week’s data. My readings range from 400-ish with the windows open & fans running to 1200+ when cooking with gas in the sealed house. It averages around 800 with the AC or 500 with the furnace (which exhausts combustion gasses). The AQ sensor has been replaced once after 3-4(?) years. It reads exactly what PurpleAir says is outside with the windows open, drops to 0-2 µg/m3 with the filters running, and spikes to 300+ when cooking.
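The weekly automatic baseline calibration described above can be sketched in a few lines. A minimal Python sketch, assuming (as many NDIR CO2 sensors do) that the lowest reading over the past week corresponds to ~400 ppm outdoor air; the class and window size are illustrative, not any particular sensor's firmware:

```python
from collections import deque

OUTDOOR_CO2_PPM = 400  # assumed fresh-air baseline the sensor anchors to

class AutoBaselineCO2:
    """Weekly automatic baseline calibration (ABC): assume the lowest
    raw reading seen over the past week was outdoor air (~400 ppm) and
    offset all readings accordingly."""

    def __init__(self, window_size=7 * 24):  # one reading per hour for a week
        self.readings = deque(maxlen=window_size)
        self.offset = 0

    def add_raw_reading(self, raw_ppm):
        self.readings.append(raw_ppm)
        if len(self.readings) == self.readings.maxlen:
            # recalibrate: the weekly minimum becomes the outdoor baseline
            self.offset = OUTDOOR_CO2_PPM - min(self.readings)
        return raw_ppm + self.offset
```

The same drift-correction idea is why a sealed house that never airs out can fool these sensors: the weekly minimum is no longer outdoor air.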



  • I switched from an i3-530, nominal TDP 73W, to an N100, nominal TDP 7W, and power from the wall didn’t change at all. Even the i3 ran around 0.1 CPU load, except when transcoding, and I’m left with the impression that most of the power goes into HDDs, RAM, maybe fans, and power-supply losses. My sense is that the best way to decrease homelab power use is to minimize the number of devices. Start with your server at 60W, add a WAP at 10-15W, maybe a switch at 10-15W… Not because of the CPUs, necessarily, but because every CPU comes with systems to keep it going, keep the power regulated, etc.
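The device-count argument is easy to put numbers on. A quick back-of-the-envelope in Python; the $0.15/kWh rate and the exact AP/switch wattages are illustrative assumptions:

```python
# Rough annual running cost of always-on homelab devices.
# Wattages roughly per the comment above; the electricity rate is an assumption.
RATE_USD_PER_KWH = 0.15
HOURS_PER_YEAR = 24 * 365

devices_watts = {"server": 60, "wireless AP": 12, "switch": 12}

total_watts = sum(devices_watts.values())
annual_kwh = total_watts * HOURS_PER_YEAR / 1000
annual_cost = annual_kwh * RATE_USD_PER_KWH

print(f"{total_watts} W continuous ≈ {annual_kwh:.0f} kWh/yr ≈ ${annual_cost:.0f}/yr")
```

At those numbers, each extra always-on 12 W device adds roughly $15/year, regardless of how efficient its CPU is at idle.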



  • My ISP seems to use just normal DHCP for assigning addresses and honors re-use requests. The only times my IP addresses have changed have been when I’ve changed the MAC or UUID that connects. I’ve been offline for a week, come back, and been given the same address. Both IPv4 and v6.

    If one really wants their home systems to be publicly accessible, it’s easy enough to get a cheap vanity domain and point it at whatever address. rDNS won’t work, which would probably interfere with email, but most services don’t really need it. It’s a bit more complicated to detect when your IP changes and script a DNS update, but certainly doable, if (like OP) one is hell-bent on avoiding any off-site hardware.
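The detect-and-update loop mentioned above can be a short cron script. A hedged Python sketch: api.ipify.org is a real public what's-my-IP service, but update_dns() is a placeholder you'd replace with whatever API your DNS provider exposes, and the cache path is illustrative:

```python
import urllib.request
from pathlib import Path

CACHE = Path("/var/cache/ddns/last_ip")  # illustrative; where the last-seen IP is stored

def current_public_ip():
    # api.ipify.org returns the caller's public IPv4 as plain text
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()

def needs_update(current_ip, last_ip):
    """True when the cached record no longer matches the live address."""
    return current_ip != last_ip

def update_dns(ip):
    # Placeholder: call your DNS provider's update API here
    # (Cloudflare, deSEC, your registrar's dynamic-DNS endpoint, ...)
    raise NotImplementedError

def run_once():
    ip = current_public_ip()
    last = CACHE.read_text().strip() if CACHE.exists() else None
    if needs_update(ip, last):
        update_dns(ip)
        CACHE.parent.mkdir(parents=True, exist_ok=True)
        CACHE.write_text(ip)

# run_once() goes in cron, e.g. every five minutes
```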





  • It really depends on what your data is and how hard it would be to recreate. I keep a spare HD in a $40/year bank box & rotate it every 3 months. Most of the content is media - pictures, movies, music. Financial records would be annoying to recreate, but if there’s a big enough disaster to force me to go to the off-site backups, I think that’ll be the least of my troubles. Some data logging has a replica database on a VPS.

    My upload speed is terrible, so I don’t want to put a media library in the cloud. If I did any important daily content creation, I’d probably keep that mirrored offsite with rsync, but I feel like the spirit of an offsite backup is offline and asynchronous, so things like ransomware don’t destroy your backups, too.
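The rsync mirroring idea can still preserve that asynchronous, ransomware-resistant property: run the copy as a pull from the offsite machine (so malware on the source host can't reach the backup store) and use --link-dest so each snapshot only costs the space of what changed. A sketch under those assumptions; host and paths are illustrative:

```python
import subprocess
from datetime import date
from pathlib import Path

SRC = "user@homeserver:/srv/documents/"   # illustrative source (pulled from)
DEST_ROOT = Path("/backups/documents")    # on the offsite machine

def build_rsync_cmd(src, dest, link_dest=None):
    """Assemble an rsync call; --link-dest hard-links files unchanged
    since the previous snapshot, so snapshots are cheap to keep."""
    cmd = ["rsync", "-a", "--delete"]
    if link_dest is not None:
        cmd.append(f"--link-dest={link_dest}")
    return cmd + [src, dest]

def snapshot():
    today = DEST_ROOT / date.today().isoformat()
    latest = DEST_ROOT / "latest"
    link = str(latest) if latest.exists() else None
    subprocess.run(build_rsync_cmd(SRC, str(today), link_dest=link), check=True)
    if latest.is_symlink():
        latest.unlink()
    latest.symlink_to(today.name)
```

Because the offsite box initiates the transfer and keeps dated snapshots, an attacker who encrypts the source can't also encrypt the history.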





  • tburkhol@lemmy.world to Selfhosted@lemmy.world · ISO Selfhost · 4 months ago

    Wonder if there’s an opportunity there. Some way to archive one’s self-hosted, public-facing content, either as a static VM or, like archive.org, just the static content of URLs. I’m imagining a service one’s heirs could contract to crawl the site, save it all somewhere, and take care of permanent maintenance, renewing domains, etc. Ought to be cheap enough to maintain the content; presumably low traffic in most cases. Set up an endowment-type fee structure to pay for perpetual domain reg.
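The endowment-style fee is simple arithmetic: divide the perpetual annual cost by a sustainable withdrawal rate. A quick Python sketch with illustrative numbers ($15/yr domain, $60/yr static hosting, 4% withdrawal rate):

```python
# One-time endowment needed to fund a perpetual annual cost:
#   endowment = annual_cost / withdrawal_rate
DOMAIN_PER_YEAR = 15      # illustrative registrar fee
HOSTING_PER_YEAR = 60     # illustrative static-hosting cost
WITHDRAWAL_RATE = 0.04    # classic "safe" perpetual withdrawal rate

annual_cost = DOMAIN_PER_YEAR + HOSTING_PER_YEAR
endowment = annual_cost / WITHDRAWAL_RATE
print(f"${annual_cost}/yr forever needs a ~${endowment:.0f} endowment")
```

Under those assumptions, a one-time fee on the order of $2k could plausibly keep a low-traffic static archive online indefinitely.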


  • tburkhol@lemmy.world to Selfhosted@lemmy.world · ISO Selfhost · 4 months ago

    At least my descendants will own all my comments and posts.

    If you self-host, how much of that content disappears when your descendants shut down your instance?

    I used to host a bunch of academic data, but when I stopped working, there was no institutional support. I turned off the server and it all went away (though the Wayback Machine still has archives). I mean, I don’t really care whether my social media presence outlives me; the experience just made me aware that personal pet projects are pretty tightly bound to that one person.


  • Back in the day, I set up a little cluster to run compute jobs. Configured some spare boxes to netboot off the head-node, figured out PBS (dunno what the trendy scheduler is these days), etc. Worked well enough for my use case - a bunch of individually light simulations with a wide array of starting conditions - and I didn’t even have to have HDs for every system.

    These days, with some smart switches, you could probably work up a system to power nodes on/off based on the scheduler demand.
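That scheduler-driven power idea might look roughly like this. A Python sketch where set_plug() stands in for whatever smart-plug API you have (Z-Wave, a Home Assistant REST call, ...), and the node names and jobs-per-node figure are made up:

```python
# Sketch: power compute nodes on/off to match scheduler demand.
NODES = ["node1", "node2", "node3", "node4"]
JOBS_PER_NODE = 4   # assumed job slots per node

def nodes_needed(queued_jobs, jobs_per_node=JOBS_PER_NODE):
    """How many nodes the current queue depth justifies keeping powered."""
    return min(len(NODES), -(-queued_jobs // jobs_per_node))  # ceiling division

def reconcile(queued_jobs, set_plug):
    """Turn on the first N plugs, turn off the rest.
    set_plug(node, on=bool) is a placeholder for your smart-plug API."""
    want = nodes_needed(queued_jobs)
    for i, node in enumerate(NODES):
        set_plug(node, on=(i < want))
```

In practice you'd also want a drain step (stop scheduling to a node, wait for its jobs to finish) before cutting power, plus wake-on-LAN or netboot so the nodes come back cleanly.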



  • You can configure HA to use an external database, so you could (presumably) config two instances to use the same DB. Not sure how much conflict that would cause for entities that are only attached to one of those instances, but it seems like both should have the same access to state data and history. Could probably even set one instance up with read-only DB access to limit data conflicts, although I imagine HA will complain about that.

    Even with an external database, HA still uses its internal DB for some things, so I don’t think you’d ever get identically mirrored instances.


  • tburkhol@lemmy.world to Selfhosted@lemmy.world · Starting to self host · 4 months ago

    If you’re already running Pihole, I’d look at other things to do with the Pi.

    https://www.adafruit.com/ has a bunch of sensors you can plug into the Pi, python libraries to make them work, and pretty good documentation/examples to get started. If you know a little python, it’s pretty easy to set up a simple web server just to poll those sensors and report their current values. Only slightly more complicated to set up cron jobs to log data to a database and a web page to make graphs.
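That poll-and-report web server fits in the Python stdlib. A minimal sketch with a stand-in read_sensor() in place of a real Adafruit driver; the values and port are illustrative:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_sensor():
    # Stand-in values; swap in a real driver, e.g. an Adafruit
    # CircuitPython library reading a BME280 over I2C.
    return {"temperature_c": 21.4, "humidity_pct": 43.0}

class SensorHandler(BaseHTTPRequestHandler):
    """Serve the current sensor readings as JSON on every GET."""

    def do_GET(self):
        body = json.dumps(read_sensor()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve on port 8000:
#     HTTPServer(("", 8000), SensorHandler).serve_forever()
```

The cron-based logging variant is the same read_sensor() call appending a timestamped row to SQLite instead of serving HTTP.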

    It’s pretty straightforward to put https://www.home-assistant.io/ in a Docker container on a Pi. If you have your own local sensors, it will keep track of them, but it can also track data from external sources, like weather & air quality. There are a bunch of inexpensive smart plugs ($20-ish) that will let you turn stuff on/off on a schedule or in response to sensor data.

    IMO, a Pi isn’t great for throughput-intensive services like radarr or jellyfin, but with a USB HD/SSD it might be an option.