

Yeah, I’ll probably switch eventually. I’m just trying to talk myself out of it because I don’t have the time to learn right now.
I have a desktop, a laptop, and a few VMs and server-ish things. I use a dotfile manager (yadm, which is a git wrapper) to sync personal settings; everything else I do manually. The system-level configs are either different enough that standardizing them isn’t very helpful, or no more complicated than installing packages and activating services.
I like the idea of NixOS, but I feel like it makes a bunch of daily sacrifices in order to optimize a task I do once every few years? I hardly ever get a new computer, but I install/uninstall/update/tweak packages on my system all the time. With a dotfile manager and snapshots, I get most of the benefit without any of the drawbacks.
The desktop environment is all the stuff like the taskbar, the settings menus, the application launcher, the login screen, that kind of thing. It’s the system-level user interface.
You choose which one by choosing which distro you download. Linux Mint uses Cinnamon; Ubuntu and Fedora use GNOME. There are “flavors” of Ubuntu and Fedora that use KDE. That’s why I suggested Ventoy: you can download a few different ones and boot into them without making a new thumb drive.
If you don’t feel like bothering with any of that, just use Linux Mint. It’s good.
Yeah, when someone is interested in switching I always advise them to sort out their apps first. Many Linux applications also run on Windows; the reverse is rarely true.
Yeah, that’s my experience. The backend is an environment you control completely, with well-defined inputs and outputs specifically designed to be handled by machines. Frontend code changes on a whim, runs who the hell knows where, and has to look good doing it.
It’s pretty easy to avoid all of these, mostly by using ===. Null being an object is annoying and is one of the reasons ‘typeof’ is useless, but there are other ways to accomplish the same thing.
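A couple of the classics, to illustrate (TypeScript syntax, but the behavior here is plain JavaScript; the ‘any’ casts are just to quiet the compiler):

```ts
// `typeof null` is "object", so typeof alone can't detect null:
const mystery: any = null;
console.log(typeof mystery);     // "object"
console.log(mystery === null);   // true -- the reliable null check

// Loose equality coerces its operands, with surprising results:
console.log(("" as any) == 0);   // true
console.log(("0" as any) == 0);  // true
console.log(("" as any) == "0"); // false, despite both being == 0

// Strict equality compares type and value with no coercion:
console.log(("" as any) === 0);  // false
console.log(("0" as any) === 0); // false
```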
JavaScript has a lot of footguns, but it’s also used by literally everyone, so there’s a lot of tooling and established practice to help you avoid them.
That’s a lot of the reason you buy it, but RHEL is a paid product that you buy copies of.
https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux/how-to-buy#online
You haven’t heard of red hat? Or Ubuntu pro?
NAS at the parents’ house. Restic nightly job, with some plumbing scripts to automate it sensibly.
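The plumbing is roughly this shape; a sketch, not my actual script. The paths and retention numbers are made up, and it assumes RESTIC_REPOSITORY and RESTIC_PASSWORD_FILE are already set in the environment:

```ts
import { execFileSync } from "node:child_process";

// What to back up -- placeholder path.
const paths = ["/home/me"];

// Nightly snapshot; restic dedups against the repo on the NAS.
execFileSync("restic", ["backup", "--exclude-caches", ...paths], { stdio: "inherit" });

// Thin out old snapshots so the repo doesn't grow forever.
// Retention numbers are illustrative, not a recommendation.
execFileSync("restic", [
  "forget",
  "--keep-daily", "7",
  "--keep-weekly", "8",
  "--keep-monthly", "12",
  "--prune",
], { stdio: "inherit" });
```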
Shout out to Nushell for building an entire shell around this idea!
Have you considered Karakeep (formerly Hoarder)? It does all of this really well: drop it a URL and it saves a copy. It has lists & tagging (which can be done by AI if you want), iOS & Android apps, and browser extensions that make saving stuff super easy.
Broadly similar from a quick glance: https://www.amazon.pl/s?k=m-disc+blu+ray
My options look like this:
https://allegro.pl/kategoria/nosniki-blu-ray-257291?m-disc=tak
Exchange rate is 3.76 PLN to 1 USD, which is actually the best I’ve seen in years
I only looked at how ZFS tracks checksums because of your suggestion! Hashing 2TB will take a while; it would be nice to avoid.
Nushell is neat; I’m using it as my login shell. It’s good for this kind of data-wrangling, but it’s also a pre-1.0 moving target.
Tailscale deserves it, bitcoin absolutely does not
Where I live (not the US) I’m seeing closer to $240 per TB for M-disc. My whole archive is just a bit over 2TB, though I’m also including exported JPEGs in case I can’t get a working copy of darktable that can render my edits. It’s set to save XMP sidecars on edit, so I don’t bother backing up the database.
I mostly wanted a tool to divide the images up into disc-sized chunks, and to automatically track changes to existing files, such as sidecar edits or new photos. I’m now seeing I can do both of those and still get files directly on the disc, so that’s what I’ll be doing.
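The chunking half doesn’t need to be fancy; first-fit-decreasing would probably do. A rough sketch (25 GB is the nominal single-layer BD-R capacity; real usable space after filesystem overhead is a bit less):

```ts
type FileEntry = { path: string; size: number };

// First-fit-decreasing: sort files largest-first, then drop each one
// into the first disc that still has room, opening a new disc if none do.
function packIntoDiscs(files: FileEntry[], discBytes = 25_000_000_000): FileEntry[][] {
  const discs: { free: number; files: FileEntry[] }[] = [];
  for (const f of [...files].sort((a, b) => b.size - a.size)) {
    const disc = discs.find((d) => d.free >= f.size);
    if (disc) {
      disc.free -= f.size;
      disc.files.push(f);
    } else {
      discs.push({ free: discBytes - f.size, files: [f] });
    }
  }
  return discs.map((d) => d.files);
}
```

In practice the assignments would need to be sticky across runs, so new photos land on new discs instead of reshuffling ones that are already burned.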
I’d be careful about using SSDs for long-term, offline storage. I hear they lose data if left unpowered for a long time. IMO metadata is small enough to just save a new copy when it changes.
I’ve been thinking through how I’d write this. With so many files it’s probably worth using SQLite, and then I can match snapshots up by joining on the hash. Deletions and new files can be found with different join conditions. I found a tool called ‘hashdeep’ that can checksum everything, though for incremental runs I’ll probably skip hashing if the size, times, and filename haven’t changed. I’m thinking Nushell for the plumbing? It runs everywhere, though they ship breaking changes frequently. Maybe Rust?
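Roughly the shape I’m picturing, whatever language it ends up in. This sketch uses TypeScript with Node’s (still experimental) built-in node:sqlite, and the table and column names are invented: a ‘prev’ table from the last run and a ‘curr’ table from a fresh scan, each holding (path, size, mtime, hash) rows, with curr’s hashes starting out NULL.

```ts
import { DatabaseSync } from "node:sqlite";

const db = new DatabaseSync("catalog.db");

// Carry hashes forward for files whose path, size, and mtime are
// unchanged, so only new/modified files actually get rehashed.
db.prepare(`
  UPDATE curr SET hash = (
    SELECT p.hash FROM prev p
    WHERE p.path = curr.path AND p.size = curr.size AND p.mtime = curr.mtime
  )
  WHERE hash IS NULL
`).run();

// ...hash whatever is still NULL here (hashdeep, or a streaming sha256)...

// New content: in the current scan, unknown to the previous one.
const added = db.prepare(`
  SELECT c.path FROM curr c
  LEFT JOIN prev p ON p.hash = c.hash
  WHERE p.hash IS NULL
`).all();

// Deleted content: the same join, in the other direction.
const deleted = db.prepare(`
  SELECT p.path FROM prev p
  LEFT JOIN curr c ON c.hash = p.hash
  WHERE c.hash IS NULL
`).all();
```

Joining on the hash rather than the path means renames count as unchanged, which is probably the right behavior for an archive.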
ZFS checksums are done at the block level, after compression and encryption, so they’d never match a file-level hash of the original bytes. I don’t think they’re meant for this purpose.
I mean, if it’s worked without modification for 6 years…