

But what will I do without Copilot in Notepad


just doing the FAFO approach.
Please consider the following:
How did early humans find out which food sources were safe to eat, and which were not?


The thing is, cleaner production methods benefit big industry too: they get to produce higher-quality products, which means more profit from higher prices.
Sure, maybe in the long run. But that will cost money now and the quarterly earnings report is due.


So all those problems are fixed but somehow new ones keep popping up?
Yes. Welcome to reality.
Maybe it wasn't real change but actually just band-aids, and the root cause stayed unaddressed through all those years.
The entire history of human civilization is an example of building the airplane while you’re flying it, without a plan for either the airplane construction or the flight path.
Somehow the system you want to maintain and support keeps creating these problems.
There are a lot of problems, and yes, the solutions to old problems often create new problems, because reality is not a video game where collecting a dozen MacGuffins ends the quest, you get a reward, and you never have to worry about that issue again.
Sometimes the airplane crashes: https://fallofcivilizationspodcast.com/
The options are:


You could have spent a century testing CFCs in a lab environment. The problem they caused with the ozone layer would still not have become apparent until CFCs were used in the real world where they could interact with the ozone layer.
There is no amount of testing and preparation that can account for every possible outcome or interaction.
Asbestos is another good example. It is naturally occurring and quite common and was used as a building material for millennia. It is lightweight but strong, flexible in thin sheets, and fireproof. It’s an extremely useful and versatile material, and abundantly available.
It wasn’t until the 1900s that medical testing linked asbestos fibers to several health risks. It basically required the entire history of human development for our medical technology to identify the danger. No amount of testing, analysis or review done prior would have mattered.


The hole in the ozone layer is now recovering due to international regulation of CFCs
Acid rain is no longer a problem because of regulation of sulfur dioxide emissions
Leaded gasoline has now been banned in every country
Asbestos exposure is rare now due to regulatory controls. It’s bad that it took so long to get done.
Government regulation is effective in protecting people from health risks.
Collective action (e.g. through voting) is effective in establishing such regulation.
If you spread the lie that voting in favor of such policies (and politicians who support them) is a useless waste of time, you are spreading industry propaganda.
Effective, large-scale change is made IN FACT at the polls.
It happens nowhere else.


Yes, actually.
Do you remember the hole in the ozone layer? It’s self-repairing now because the chemicals that were damaging it were internationally banned - by government regulation.
Do you remember the acid rain scare? It’s not a problem now because of regulatory control of sulfur dioxide emissions.
Do you know why gasoline is unleaded?
Do you know why asbestos is banned in building materials?
Government regulation actively improves human health and wellbeing, and has prevented several outright disasters from progressing.
Real change does, in fact, come from voting for politicians that support effective environmental policies. It is industry propaganda that wants you to believe that regulation is ineffective.


First and most important:
In the context of long-term data storage
ALL DRIVES ARE CONSUMABLES
I can’t emphasize this enough. If you only skim the rest of my post, re-read the above line and accept it as fundamental truth. “Long-term” means 1+ years, by the way.
It does not matter what type of drive you buy, how much you spend on it, who manufactured it, etc. The drive will fail at some point, probably when you’re least prepared for it. You need to plan around that. You need to plan for the drive being completely useless and the data on it unrecoverable post-failure. Wasting time and money acquiring the fanciest, most bulletproof drives on the market is a pointless resource pit, and has more to do with dick-measuring contests between data-hoarders than with actual data safety.
Knife geeks buy $500+ patterned steel chef’s knives with ebony handles and finely ground edges and bla bla bla. Professional kitchens buy the basic Victorinox with the plastic handle. Why? Because they actually use it rather than mounting it on a wall to look pretty.
The knife is a consumable, not an heirloom. So are your storage drives. We call them “spinning rust” for a reason.
The solution to drive failure is redundancy. Period.
Unfortunately, this reality runs counter to the desire to maximize available storage. Do not follow the path of desire, that way lies data loss and outer darkness. Fault-tolerant is your watchword. Component failure is unpredictable, no matter how much money you spend. A random manufacturing defect will ruin your day when you least expect it.
A minimum safe layout is to have 2 live copies of data (one active, one mirror), hot standby for 1 copy (immediate swap-in when the active or mirror fails), and cold standby on the shelf to replace the hot standby when it enters service.
Note that this does not describe a specific number of disks, but copies of data. The minimum to implement this is 4 disks of identical storage capacity (2 live, 1 hot standby, 1 on the shelf) and a server with slots for 3 disks. If your storage needs expand beyond the capacity of 1 disk, then you need to scale up by the same ratio. A disk is indivisible - having two copies of the same data on a disk does not give you any redundancy value. (I won’t get into striping and mucking about with weird RAID choices in this post because it’s too long already, but basically it’s not worth it - the KISS principle applies, especially in small configurations)
This means you only get to use 25% of the storage capacity that you buy. Them’s the breaks. Anything less and you’re not taking your data longevity seriously, you might as well just get a consumer-grade external drive and call it a day.
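As a back-of-the-envelope sketch of how that layout scales (the function name and units are mine, not from any standard tool):

```python
import math

def storage_plan(needed_tib, disk_tib):
    """Disks to buy for the 2-live / 1-hot-standby / 1-cold-standby layout."""
    per_copy = math.ceil(needed_tib / disk_tib)  # disks needed for one copy of the data
    return {
        "live": 2 * per_copy,         # active + mirror, both online
        "hot_standby": per_copy,      # online, swaps in on failure
        "cold_standby": per_copy,     # on the shelf
        "total": 4 * per_copy,        # you only ever use 25% of what you buy
    }
```

For example, 10 TiB of data on 4 TiB disks needs 3 disks per copy, so 12 disks total for 10 TiB of usable, fault-tolerant storage.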
Buy 4 disks, it doesn’t matter what they are or how much they cost (though if you’re buying used make sure you get a SMART report from the seller and you understand what it means) but keep in mind that your storage capacity is just 1 of the disks. And buy a server that can keep 3 of them online and automatically swap in the standby when one of the disks fails. Spend more money on the server than the disks, it will last longer.
Remember, long-term is a question of when, not if.
One’a these days, Alice…


Really nice overview


I’ve got a USB SSD that I can’t use, because I need to “unlock” it in a windows device first. I can’t even re-partition it in linux.
Is this Bitlocker FDE? Have you tried using Dislocker?
If that doesn’t work, I recommend building a gparted live USB. Once you’re up and the SSD is visible, create a new partition table (in gparted: Device → Create Partition Table…)

Complete this step with no other changes. It shouldn’t matter whether the partitions on the disk are encrypted; resetting the partition table will make the disk appear blank, as if it had never been formatted. You should then be able to create any new partitions you want in the available space.
! THIS IS DESTRUCTIVE !
But if you couldn’t access the encrypted partition then the data was effectively destroyed already.
Or pipe GUI output into another GUI function.
Or log.txt
Remember, RAID (or RAID-adjacent) is not a backup.
This. So much this. OP please listen to and understand this.
Even with full mirroring in RAID 1, it’s not a backup. Using the second drive as an independent backup would be so much better than RAID.


You SHOULD NOT do software RAID with hard drives in separate external USB enclosures.
There will be no practical benefit to this setup; it will just create a risk of transcription errors between the mirrored drives from any problems with the USB connections, plus constant traffic overhead as the drives update their mirroring. You will kill your USB controller and/or the IO boards in the enclosures. It will be needlessly slow and not very fault-tolerant.
If this hardware setup is really your best option, what you should do is use 1 of the drives as the active primary for the server, and push backups to the other drive (with a properly configured backup application, not RAID mirroring). That way each drive is fully independent from the other, and the backup drive is not dependent on anything else. This will give you the best possible redundancy with this hardware.
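A minimal sketch of the push-backup idea (the paths and function name are placeholders; a real setup would use rsync or a dedicated backup tool with versioning and verification):

```python
import shutil
from pathlib import Path

def push_backup(primary: str, backup: str) -> None:
    """One-way copy from the active drive to the independent backup drive."""
    src, dst = Path(primary), Path(backup)
    if not src.is_dir():
        raise FileNotFoundError(f"primary not mounted: {src}")
    # dirs_exist_ok lets repeated runs refresh an existing backup tree
    shutil.copytree(src, dst, dirs_exist_ok=True)
```

The key design point is one-way flow: the backup drive never depends on the primary being healthy, which is exactly what RAID mirroring fails to give you.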
The Alt key is Alt. Why would it need another label?


give us your data give us your data give us your data


A modern OS running with low RAM (e.g. an RPi with 2G) is going to fill the RAM pretty quickly just in normal operation, so a larger swap space will allow it to run more efficiently as it regularly moves things in and out of swap. You still want to have some overhead to allow for storing the live RAM for hibernation, which with a small amount of RAM is likely to be near 100%. Therefore, running with 3x RAM for swap space is recommended.
it only needs to be at least the size of RAM
Yes, technically it only needs to be the size of the RAM, but no matter how much RAM you have, some of the swap space will be in use at any given time during system operation. If you only have exactly as much swap space as RAM, there won’t be enough free swap left to store the entire live RAM for hibernation.
The size of the swap file and the size of the live RAM image at any point is unpredictable, therefore 1.5x RAM is the lowest recommended value that is probably safe for hibernation, assuming the swap file is not being used heavily enough to be 50% of the RAM. If you can’t provide at least that much disk space for swap, you should disable hibernation.
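The arithmetic above can be sketched as follows (the function name and the 0.5 default are my assumptions, following the comment's reasoning):

```python
def min_swap_gib(ram_gib, swap_in_use_fraction=0.5):
    """Minimum swap for hibernation: room for the swap already in use
    plus a full image of live RAM (hibernation writes RAM to swap)."""
    return ram_gib * swap_in_use_fraction + ram_gib
```

With the default assumption that active swap use stays under half of RAM, 8 GiB of RAM needs 12 GiB of swap (the 1.5x figure); assuming swap use could equal RAM gives the safer 2x figure.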


This is the best simple guideline: https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/10/html/managing_storage_devices/getting-started-with-swap#recommended-system-swap-space
Basically, if you want your system to be able to hibernate, then you need enough swap space to hold both the active swap file and a full image of the live system RAM (hibernate = suspend-to-disk, and it uses the swap space). The swap file could be as large as the RAM, so a safe value is 2x the RAM. If you don’t want to dedicate that much disk space to swap, the safe option is to disable hibernation, though note that suspend-to-disk is safer than suspend-to-RAM for system recovery in the event of power failure.
If you’ve ever had a Linux system go into hibernation and fail to wake, lack of swap space was probably the reason.
In Red Hat’s chart where they recommend 1.5x RAM for 8-64 GiB, basically you’re hoping that your system is never completely using all of the RAM. If you do cap out the RAM such that the swap file plus the in-use memory is greater than 1.5x RAM, and the system goes into hibernate, it will not recover because there isn’t enough free swap space to store the in-use memory. You have to make a judgment call when you set up your system about how you’re going to use it - whether you expect to be using 100% of the RAM at any point, whether you’ll remember to close some running applications to free up memory every time you leave the system idle long enough to go into hibernate, whether other users will be using the system (if they’re logged in then they are partially using the RAM and the swap), etc.
Deciding how much swap space you need is a risk management decision based on your tolerance for data loss, application stability, and whether or not you need hibernation.
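One way to sanity-check that judgment call before hibernating is to compare free swap against in-use memory. A sketch parsing /proc/meminfo (the helper name is mine; the field names are standard kernel ones, values in kiB):

```python
def can_hibernate(meminfo_text: str) -> bool:
    """True if free swap can hold the memory currently in use.
    Expects the text of /proc/meminfo."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if rest.strip():
            fields[key.strip()] = int(rest.split()[0])  # kiB
    in_use_kib = fields["MemTotal"] - fields["MemAvailable"]
    return fields["SwapFree"] >= in_use_kib

# On a live system:
# can_hibernate(open("/proc/meminfo").read())
```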


Anyone know if massgrave still works?
It’s traditional: