

No, I used an unprivileged container and I set the permissions on the NFS server to accommodate that.


I use it like I might use Unbound or dnsmasq, but I’d think of it more like BIND. It can be used as a recursive or authoritative resolver. It supports all kinds of protocols (DoT, DoH, DNSSEC, etc.), and it handles zone transfers easily. It’s pretty slick. Definitely worth a look.


If you’d like some separation, one option is to create a VM on TrueNAS for PBS that connects to an NFS export where all the data would be stored.
What I did in this scenario is run an LXC container with PBS, which uses a bind mount for storage. That bind mount is populated via an NFS export from my NAS, mounted on the PVE host using autofs, so that if it disconnects, it will reconnect as soon as it can.
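In case it’s useful, here’s roughly what that autofs setup looks like; the hostnames and paths below are placeholders, not my actual config:

```
# /etc/auto.master -- hand /mnt/nas over to the map file below;
# unmount after 60 seconds of inactivity
/mnt/nas  /etc/auto.nas  --timeout=60

# /etc/auto.nas -- mount the NAS export on demand at /mnt/nas/pbs
pbs  -fstype=nfs4,rw  nas.example.com:/mnt/tank/pbs
```

With that in place, the path appears whenever something (like the container’s bind mount) touches it, and autofs re-establishes the mount after a disconnect.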


Technitium is a recursive DNS resolver with a nice web UI. If you’re familiar with Pi-hole or AdGuard Home, you can think of it in that genre, but much more full-featured.


If you haven’t already, check out the Awesome Open Source page’s Booking and Scheduling section.

That metadata is written into the photo by the camera, so Immich may not be able to accommodate it easily. I’m not sure about Canon specifically, but my Nikon cameras have a memory bank for manual-focus lenses. Might be worth checking through your menus.
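If the camera can’t record it, one workaround is tagging the files after the fact with exiftool; the lens name here is just a placeholder:

```
# Write the lens name into the EXIF LensModel tag for a batch of photos
exiftool -LensModel="Canon FD 50mm f/1.4" -overwrite_original *.jpg
```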


The two pieces of software have very different topologies.
In very broad strokes: something like FunkWhale uses a client-server model. To get to it, you connect remotely, and you need some way to get there. By contrast, Syncthing behaves as a mesh of nodes: each node connects directly to the other nodes, and the Syncthing project folks host relays that help introduce the nodes to one another and get through NAT.
No, you don’t necessarily need a paid domain to use your self-hosted FunkWhale server (I haven’t dabbled with that service in particular). There are a few options.
These all assume that you have a public IP address on your router and not one that’s being NAT-ed by your ISP.
Again, these are very broad strokes, but hopefully it helps point you in a direction for some research.


There’s definitely nothing magic about ports 443 and 80. The risk is always that the underlying service will provide a vulnerability through which attackers could find a way in. Any port presents an opportunity for attack; the security of the service behind it is what makes it safe or not.
I’d argue that long-tested services like ssh, absent misconfiguration, are at least as safe as most reverse proxies. That isn’t to say that people won’t try to break in via port 22. They sure will; they try on web ports too.
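As an illustration of what “absent misconfiguration” means in practice, a common sshd baseline is key-only auth with root login disabled. A sketch, not a complete config:

```
# /etc/ssh/sshd_config -- keys only, no root logins
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
KbdInteractiveAuthentication no
```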


I’m not sure if this is what you’re after, but it sounded to me like you were describing monitoring. It might be worth checking out LibreNMS, Zabbix, or Checkmk. Those would give you a good overview of the health of your stuff and keep track of what’s where.


It’s not exactly a single tool, but torsocks kind of enables doing what you’re describing. The syntax would be something like `torsocks curl $url`.
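A quick way to sanity-check that traffic is actually going through Tor is to hit the Tor Project’s own checker:

```
# If torsocks is working, the response reports the request came
# from a Tor exit node ("IsTor": true)
torsocks curl https://check.torproject.org/api/ip
```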


Could you post your /etc/network/interfaces file and the config file for the VMs (from /etc/pve/qemu-server)?
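For comparison, a typical PVE bridge stanza in /etc/network/interfaces looks something like this (the interface name and addresses are placeholders):

```
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

Hope some of that helps.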


Not sure if this is the kind of thing you’re after, but I think learning a little about the very fundamental pieces of these systems really helps to understand the mechanisms at work.
One resource that was really useful to me: years ago, the Security Now podcast did a series called “How the Internet Works” (I think). Steve Gibson went over all the principles layer by layer, and it helped my understanding a ton. Since it was so long ago, the rest of each episode is probably filled with really old security news, but the main bits are as relevant as ever.

I’m not familiar with Zurg, but the WebDAV connection makes me recall: doesn’t LXC require that the FUSE kernel module be loaded in order to use WebDAV?
I’ve also seen it recommended that WebDAV be set up on the host and the mount points then bind-mounted into the container. Not sure if any of that helps, but maybe it’ll lead you somewhere.
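If it helps, here’s a rough sketch of both approaches on PVE; the VMID, URL, and paths are placeholders, and davfs2 is just one way to mount WebDAV:

```
# Option A: mount on the host, then bind-mount into the container
mount -t davfs https://example.com/dav /mnt/dav   # on the PVE host
# then in /etc/pve/lxc/101.conf:
#   mp0: /mnt/dav,mp=/mnt/dav

# Option B: mount inside the container instead, which needs FUSE
# enabled in /etc/pve/lxc/101.conf:
#   features: fuse=1
```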

That’s a great tip. I’d completely forgotten you can use telnet for that. Thanks!

Thanks for the response. I really should just dive in, but I’ve got this nagging fear that I’m going to forget about some DNS record that will bork my entire mail service. It’s good to hear about some working instances that people are happy with.

Tainted in that the kernel and ZFS have different licenses; it’s not a functional impairment. I have no way to check a system not using ZFS. For my use case, Debian plus ZFS are PVE’s principal features.

I have a Synapse server running in Docker on a VPS and it’s been pretty reliable. At my office I use it as sort of a self-hosted Slack replacement. For our use case I don’t have federation enabled, so no experience on that front. It’s a small office and everyone here uses either Element or FluffyChat on desktop and mobile. It runs behind an nginx reverse proxy, and I’ve got SSO set up with Authentik, which has worked very well. Happy to share some configs if that would be useful.
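For a taste, the nginx side is close to the stock example in the Synapse reverse-proxy docs; the hostname and ports are placeholders, and the TLS and SSO details are omitted:

```
server {
    listen 443 ssl;
    server_name matrix.example.com;

    # Synapse's client and federation APIs live under these paths
    location ~ ^(/_matrix|/_synapse/client) {
        proxy_pass http://127.0.0.1:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        # allow media uploads larger than nginx's 1M default
        client_max_body_size 50M;
    }
}
```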

Have you by any chance documented your PMG setup? I’m also a very happy Mailcow user, and spinning up PMG is something I’ve been meaning to tackle for years so I can implement archiving with mailpiler, but I’ve never really wrapped my head around how everything fits together.

Ceph isn’t installed by default (at least it hasn’t been any of the times I’ve set up PVE), and there’s no need to use ZFS if you don’t want to. It’s available, but you can go right ahead and install the system on LVM instead.

Sure thing—
autofs is a pretty cool utility, and it works with SMB as well. If the storage isn’t present for PBS, the backup would fail; there are files inside the directory that PBS will notice are missing.
Mounting the NFS export on the PVE host is the simplest way to get shared storage into an LXC container. You have to fight apparmor to mount NFS or SMB inside the container directly.
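A hypothetical example of that, assuming the export is mounted on the host at /mnt/nas/pbs and the container is VMID 100:

```
# Bind-mount the host path into the container at /mnt/datastore
pct set 100 -mp0 /mnt/nas/pbs,mp=/mnt/datastore
```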