
  • Looks really nice and seems like it should be a great foundation for future development. Personally, I can’t leave Nextcloud behind until there are sufficiently featureful and reliable clients for Linux, Windows, and Android that synchronize a local copy and help manage the inevitable file deconfliction (Nextcloud Desktop only barely qualifies at this, but it does technically qualify, and that represents the minimum viable product for me). I’m not sure a WebDAV client alone is enough to satisfy these criteria, but I’m not going to pretend I’m actually familiar with any WebDAV clients, so maybe suitable ones already exist.


  • You’re on the right track. Like everything else in self-hosting, you will learn and develop new strategies and scale things up to an appropriate level as you go and as your homelab grows. I think the key is to start with something immediately achievable and iterate fast, aiming for continuous improvement.

    My first idea was much like yours: very traditional documentation, with words, in a document. I quickly found the same thing you did; it’s half-baked and insufficient. There’s simply no way to make it match the actual state of the system perfectly, and English alone is inadequate to explain what I did, because it ends up being too vague to be useful in a technical sense.

    My next realization was that in most cases what I really wanted was to know every single command I had ever run, basically without exception. So I started documenting that instead of focusing on the wording and the explanations. Then I started to feel like I wasn’t capturing every command reliably, because I would get distracted trying to figure out a problem and forget to, and it was duplicated effort to copy and paste commands from the console to the document or vice versa. That turned into the idea of collecting bunches of commands together into a script that I could potentially just run, which would at least reduce the risk of gaps and missing steps. Then I could put the commands I wanted to run right into the script, run the script, and save it for posterity, knowing I’d accurately captured both the commands I ran and the changes I made to get it working by keeping it in version control.
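
    For the capture problem specifically, the shell can do a lot of the work for you. A minimal sketch, assuming bash 4.3 or newer; these are all standard bash history settings, nothing exotic:

    ```bash
    # Append to ~/.bashrc: capture every command, with timestamps, immediately.
    export HISTSIZE=-1              # unlimited history in memory
    export HISTFILESIZE=-1          # unlimited history on disk
    export HISTTIMEFORMAT='%F %T '  # timestamp every entry
    shopt -s histappend             # append to the history file, don't overwrite it
    PROMPT_COMMAND='history -a'     # flush each command to disk as soon as it runs
    ```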

    But upon attempting to do so, I found that a bunch of long lists of commands on their own isn’t terribly useful, so I started to group the lists up, finding commonalities by things like server or service, and then organizing them into scripts for different roles and intents that I could apply to any server or service. Over time this developed into quite a library of scripts. As I was doing this organizing, I realized that as long as I made sure a script was functionally idempotent (it doesn’t change behavior or duplicate work when run repeatedly; it’s an important concept), I could guarantee that all my commands were properly documented and also that they had all been run – and if they hadn’t, or I wasn’t sure, I could just run the script again, since it’s supposed to always be safe to re-run no matter what state the system is in. So I moved more and more to this strategy, until I realized that if I organized it well enough, and made the scripts run automatically whenever they are changed or updated, I could not only improve my guarantee that all these commands have reliably run, but also quickly run them on many different servers and services at once without even having to think about it.
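
    To make the idempotency idea concrete, here’s a minimal sketch of what such a script can look like; the package, user, and config values are just placeholders, not from my actual setup:

    ```bash
    #!/usr/bin/env bash
    # Idempotent setup sketch: every step checks state before changing it,
    # so the whole script is safe to re-run at any time.
    set -euo pipefail

    # Install a package only if it's missing (Debian-style; adjust per distro).
    dpkg -s nginx >/dev/null 2>&1 || apt-get install -y nginx

    # Create a service user only if it doesn't already exist.
    id -u appuser >/dev/null 2>&1 || useradd --system appuser

    # Append a config line only once, no matter how often this runs.
    grep -qxF 'AllowTcpForwarding no' /etc/ssh/sshd_config \
      || echo 'AllowTcpForwarding no' >> /etc/ssh/sshd_config
    ```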

    There are some downsides, of course. There is always the potential for bugs in the scripts that make them non-idempotent or unsafe to re-run, and all I can do is try to make sure those don’t happen, and identify and fix them when they do. The next step is probably some kind of testing process and environment (preferably automated), but now I’m really getting into the weeds. At least I no longer have any real concern that my system is undocumented; I can quickly reference almost anything it’s doing or how it’s set up. That said, another risk is that the system of scripts and automation grows so complex that it becomes too tangled to quickly understand, and at that point I’ll need better documentation for it. Ultimately you get into a circle of how to validate that the things your scripts are doing are actually working, doing what you expect, and not missing anything, and you usually run back into the same ideas that doomed your documentation from the start: consistency and accuracy.

    It also opens an attack vector, where somebody gaining access to these scripts not only gains all the most detailed knowledge of how your system is configured but also the potential to inject commands into those scripts and run them anywhere, so you have to make sure to treat these scripts and systems like the crown jewels they are. If they are compromised, you are in serious trouble.

    By now I have of course realized (and you all probably have too) that I have independently re-invented infrastructure-as-code. There are tools and systems (Ansible and Terraform come to mind) to help you do this, and at some point I may decide to take advantage of them, but personally I’m not there yet. Maybe soon. If you want to skip the intermediate steps I went through, you might even be able to jump directly to that approach. But personally I think there is value in the process: it helps you define your needs and builds your understanding that there really isn’t anything magical going on behind the scenes, which may keep these tools from turning into a black box that doesn’t actually help you understand your system.
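
    For a taste of what that looks like, the Ansible equivalent of the idempotent steps above is a couple of ad-hoc one-liners; apt and user are real built-in modules, but “webservers” is a placeholder inventory group:

    ```bash
    # Each module checks current state before changing anything, so these
    # are safe to re-run, just like the hand-written idempotent scripts.
    ansible webservers -m ansible.builtin.apt -a "name=nginx state=present" --become
    ansible webservers -m ansible.builtin.user -a "name=appuser system=true" --become
    ```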

    Do I have a perfect system? Of course not. In a lot of ways it’s probably horrific and I’m sure there are more experienced professionals out there cringing or perhaps already furiously warming up their keyboards. But I learned a lot, understand a lot more than I did when I started, and you can too. Maybe you’ll follow the same path I did, maybe you won’t. But you’ll get there.


  • Nextcloud is just really slow. It is what it is; I don’t use it for anything that is huge, numerous, or speed-sensitive. For that I use Syncthing, or something even more specialized depending on what exactly I’m trying to do.

    Nextcloud is just my easy and convenient little Dropbox, and I treat it like an old-school free Dropbox with limited space that’s going to nag me to upgrade if I put too much stuff in it. It won’t actually nag me to upgrade, but it will get slow, so I just don’t stress it out. I only use it to store little convenience things that I want easy access to on all my machines without any fuss. For documents, my “home directory”, syncing my calendars, and stuff like that, it’s great and serves the purpose.

    I haven’t used Seafile. The features sound good, minus the AI buzzword soup, but it looks a little too corporate-enterprisey for me, with minimal commitment to open source and no actual link to anything open source on their website. I don’t doubt that it exists, somewhere, but that raises red flags for potential future (if not in-progress) enshittification to me. After eventually finding their GitHub repo (with no help from them) I finally found a link to build instructions and… it’s a broken link. They don’t seem to actually be looking for contributions, or they’re just going through the motions. The open source “community” is clearly not the target audience for their “community edition”, not really.

    I’ll stick to Syncthing.


  • According to the protocol they share (ActivityPub), communities and hashtags are essentially the same thing: a grouping containing many posts. Typing out a hashtag is how you tell Mastodon to add your post to that “hashtag group” (and you can add your post to multiple hashtags). In Lemmy, the community you post in IS the group (and you can cross-post to multiple communities). The result is the same. They’re the same concept, with different ways of connecting your posts into them, displayed in very different ways depending on which part of the Fediverse you’re using.
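
    You can actually see the group nature directly; a hedged sketch, assuming current Lemmy behavior (the community URL is just an example):

    ```bash
    # Ask a Lemmy instance for a community's ActivityPub representation;
    # Lemmy publishes communities as Group actors.
    curl -s -H 'Accept: application/activity+json' \
      https://lemmy.world/c/selfhosted | jq '.type'
    # Expected output (assuming current Lemmy behavior): "Group"
    ```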


  • Sounds like you’re doing fine to me. The stakes are indeed higher, but that is because what you’re doing is important.

    As the Bene Gesserit teaches: I must not fear. Fear is the mind-killer. Fear is the little-death that brings total obliteration. I will face my fear.

    Make your best effort at security and backups, use your fears to inform a sober assessment of the risks and pitfalls, and ask for help when you need to, but don’t let fear stop you from accomplishing what you want to do. The self-hosting must flow.


  • Not sure if you’re being sarcastic, but I want to emphasize that whether you mean it that way or not, it’s true. Each person helping and participating makes the work a little easier and success a little closer. A movement requires leaders and builders, certainly, and those people are often doing a lot of heavy lifting. But it also simply requires members, and numbers, and people just showing up. Your support, simply by being here, means more than you might know.


  • It is a perfectly valid approach, and there are also many other perfectly valid approaches. “Better” requires a definition of what you want to be better. If something about the process is making you uncomfortable, let us know what concern or issue you’re seeing and maybe we can guide you to a better way for you. But there’s nothing wrong with the way they’re doing it. Others may have different preferences (including you, YOU might have different preferences!), but they’re just preferences. It’s not right or wrong; even if some people argue that it is, they’re always going to have some preference embedded in that judgement. There’s always more than one way to do it. That’s the joy of it, really, and sometimes you’ll have to experiment yourself to find out which ways YOU like best, that make sense to you, that are comfortable for you, or that do things the way you want them done.

    It’s your own self-hosting setup; you get to make the choices. Sometimes the number of choices can be intimidating and lead to analysis paralysis, and the only way out of that is to realize that there really is no way of finding the “best” until you’ve tried many different ways and figured out the “best” for yourself. That’s why the only real advice I can give you is to just go through the tutorial you’ve found and do it the way they do it for now. You can change later, as you learn more, when, not if, you decide you want to do something differently. Because you will. We all do. It’s part of the process.



  • It’s better than closed source, for sure. But I’m curious: is the NordVPN app actually conceivably useful for anything other than the NordVPN service? Or is this simply the uni-directional kind of open source where their software gives nothing useful back to the community, and they’re just hoping the community will identify and fix their bugs for them?

    I suppose we’ll have to wait and see if someone is able to hack it to add other providers; it would be neat if I could use it to manage my own self-hosted VPN endpoints too.




  • Horrible idea. You’ll likely end up syncing a mess of unnecessary, incompatible, and conflicting binary build files onto different platforms, and you’ll end up with internal file conflicts that are impossible to properly resolve and will destroy your repo, especially if you’re still using git on top of it. Don’t do this. Git has its own synchronization mechanisms for a reason; they are extremely mature and specifically designed for maximum efficiency, safety, and correctness for the task at hand, which is managing source code. Millions of people use git for source code every day. It is a solved problem.

    Syncthing is literally the WRONG tool for this job. It is a great tool for many situations, but you are using it as a hammer when what you need is a saw.
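
    If the goal is just to sync a repo between your own machines without a hosting service, git already does that natively over plain ssh with a bare repository; “myserver” and the paths here are placeholders:

    ```bash
    # Create a bare repository on any machine you control.
    ssh myserver 'git init --bare ~/repos/myproject.git'

    # Point your working copy at it and sync with normal git operations.
    git remote add home myserver:repos/myproject.git
    git push home main   # push your changes up
    git pull home main   # pull them down on another machine
    ```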




  • Redundancy. I have two independent firewalls, each separately routing traffic out through two totally independent multi-homed network connections (one cable, one DSL, please god somebody give me fiber someday) that both firewalls have access to. For a while I was thinking of replacing the DSL with Starlink, until Elon turned out to be such a pile of nazi garbage, so for now DSL remains the backup link.

    To make things as transparent as possible, the firewalls manage their IPs with CARP. Obviously there’s no way to have a single public IP that ports itself magically from one ISP to another, but on the LAN side it works great and on the WAN side it at least smooths out a lot of possible failure scenarios. Some useful discussions of this setup are here.
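
    For reference, a shared CARP address on an OpenBSD-style firewall looks roughly like this; the interface, addresses, and password are placeholders, and pfSense/OPNsense expose the same knobs through their GUIs:

    ```bash
    # Primary firewall: advskew 0 wins the master election.
    ifconfig carp0 create
    ifconfig carp0 vhid 1 pass sharedsecret advskew 0 \
      inet 192.168.1.1 netmask 255.255.255.0

    # Backup firewall: same vhid and password, higher advskew;
    # it takes over the shared IP automatically if the master dies.
    ifconfig carp0 create
    ifconfig carp0 vhid 1 pass sharedsecret advskew 100 \
      inet 192.168.1.1 netmask 255.255.255.0
    ```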


  • You’re absolutely incorrect about IRC. Would you like to learn? Open IRC federation is basically never used anymore, and the few networks that still exist are very stable (if not completely calcified), but federation is a core feature of the design. In the old days, massive interconnected networks of IRC servers like EFnet and Undernet spanned the globe, and there were even some servers that allowed open federation (EFnet is actually named for it: “eris-free net”, referring to the last server, “eris”, that supported free federation). At some points netsplits were a frustratingly daily occurrence. As with any federation, abuse is the reason we can’t really have nice things anymore, but IRC absolutely supports federation. Not very well by modern standards, since it didn’t keep up with the abuse arms race, but when it was first conceived it was way ahead of its time.





  • I’ve always felt like this is an area with a huge gap. I’ve got my own fragile, cobbled-together bullshit that works for me, but it’s far from ideal or reliable, if I’m being honest. I do love Ansible’s general idea of relying on standard, always-ish available protocols like ssh as a universal connection method, and I think it could work well as the bulletproof lower layer for when you want direct control over the CLI tools and configuration files, like what git provides for anything requiring version control. But Ansible needs a slick management interface on top, like what GitHub/Forgejo provide on top of git, to fill in the higher-level UI for when you need a wider scope: getting an overview of what’s going on, or making general configuration changes without getting your hands dirty. Ideally it would look a lot like Proxmox itself does, just not specific to Proxmox. Like, if I want to add my Steam Deck, and I’ve got ssh enabled on it and it’s not asleep, the tool should be able to ansible its way in there somehow and at least get whatever basic details it can. Maybe that’s only basic system information at first, but from there I could work on customizing it. That’s what I would consider the ideal, for me at least.
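
    The “get whatever basic details it can over ssh” part at least works today with a one-off inline inventory; a sketch, where the hostname and user are placeholders:

    ```bash
    # Gather basic system facts from a single ssh-reachable machine.
    # The trailing comma turns "steamdeck" into an inline one-host inventory.
    ansible all -i 'steamdeck,' -u deck -m ansible.builtin.setup \
      -a 'filter=ansible_distribution*'
    ```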