  • For the 15 GB limit it would be sufficient to just get a VM with enough space (in a datacenter or at home, maybe a Raspberry Pi) and run an IMAP server, an MTA and something that fetches the mails from Google, so that they are archived and don't fill up the limited space. I think if I were you, I would begin with just that, because that is the annoying part, and it is always possible to change the setup as you wish once it is under your control.

    I personally would not want to use mailcow, but dovecot, postfix and fetchmail directly. fetchmail gets the mails from Google and places them into dovecot's IMAP storage, while postfix is used to send mail to the outside world through Google, using your Google credentials. You'd then have Google as the external service to begin with and your own server to actually host the emails; configure the phones to send mail through it (or directly through Google), but to fetch mail only from it and to save sent mail there too. Later you could add another non-Googly service, have fetchmail pick up those emails as well, and just extend the setup.
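
    To make that concrete, a minimal sketch of the fetching side, assuming dovecot's LDA lives at /usr/lib/dovecot/dovecot-lda (the path differs per distro) and that the Google account has an app password; account names are placeholders:

    ```
    # ~/.fetchmailrc (chmod 600) -- pull mail from Google, hand it to dovecot's LDA
    set daemon 300                    # poll every 5 minutes

    poll imap.gmail.com proto IMAP
        user "you@gmail.com" pass "your-app-password"
        ssl
        keep                          # leave mail on Google's side while testing
        mda "/usr/lib/dovecot/dovecot-lda -d youruser"
    ```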

    Once you have that, you can send/receive emails whenever you are at home.

    But before downloading (moving) the first mails away from the Google storage, I would make sure that an (incremental) backup is already running well and automatically, just in case of disk failures.
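
    One way to get such incremental snapshots, assuming the maildirs live under /var/vmail and a backup disk is mounted at /mnt/backupdisk (both paths are placeholders):

    ```
    #!/bin/sh
    # hypothetical nightly snapshot of the mail store; unchanged files are
    # hard-linked against the previous snapshot, so each run only costs the delta
    SRC=/var/vmail/
    DST=/mnt/backupdisk/mail
    TODAY=$(date +%F)
    mkdir -p "$DST"
    rsync -a --delete --link-dest="$DST/latest" "$SRC" "$DST/$TODAY"
    ln -sfn "$TODAY" "$DST/latest"    # the first run simply copies everything
    ```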

    > But it was insecure in that you can easily go find my IP address and my real address. I don't want that, don't really mind if someone knows it, but I don't want to be spearphished.

    I have pretty good experience with giving every contact a separate email alias under my domain to communicate with me. My aliases usually look like <contactshortname>-<randomnumber>@mydomain.tld

    That is, for a newsletter from somecoolpage.com it would look like coolpage-61514@mydomain.tld

    It is next to impossible to guess that random number, so I get almost no email from anyone other than my real contacts, because only they know a valid address. Each alias is used for exactly one thing: a contact, a shop, even a friend (or a group of friends). Mails all land in the same inbox, but when I receive spam or phishing on an alias I 1. know who leaked my data and 2. can change the alias to a new number, delete the old one and thus stop any future spam to that address. This way I run no extra spam filters yet get next to no spam.
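
    If you host the mailbox yourself, postfix's virtual alias maps are one way to implement such throwaway aliases (a hosted alias service works just as well); every name here is made up:

    ```
    # /etc/postfix/virtual -- one alias per contact, all landing in the same box
    coolpage-61514@mydomain.tld     me@mydomain.tld
    friendgroup-90210@mydomain.tld  me@mydomain.tld

    # /etc/postfix/main.cf
    virtual_alias_maps = hash:/etc/postfix/virtual
    # (mydomain.tld must also be listed in virtual_alias_domains or mydestination)
    ```

    Run postmap /etc/postfix/virtual after editing; removing a line and re-running postmap burns a leaked alias for good.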

    However, your IP address can be found in the Received headers of any email you send. Is that what you want to prevent, or just having a public IP exposed by running an internet-facing mail server with MX records pointing to it?

    With IP changes being a thing, I guess you tried to run the mail server behind your home internet connection; non-static IPs are bad for email. You could get an IPv6 tunnel from Hurricane Electric (still free?) and have static IPv6 addresses, but as far as I remember Google does not let you send them email via IPv6, so I blocked them from sending me email via IPv6 too. Communicating with Google victims might therefore be a problem, with Google lagging behind current tech. So your idea to use a third-party service fits perfectly if you don't want to run your own public mail server. Do you have a VPN to your home network so you can use the home server remotely?


  • Thanks for your opinion.

    I have already been running my own mail servers for roughly two decades now, so copy-paste is not what I am looking for.

    I ordered that email book and Mastering DNSSEC from him now, as I am a bit curious about some topics in the email book and want to dive into DNSSEC, because I also host DNS for my domains and improvement is always good ;) Last time I started with DNSSEC I got distracted and that was it.





  • smb@lemmy.ml to Selfhosted@lemmy.world · Email with own domain service but local?

    I guess "step by step" was asked for on purpose, but I also don't know at what level ;-)

    @werefreeatlast@lemmy.world:

    I'd suggest, as a step-by-step, to start small and grow it into what you want:

    1. Register a new account for testing on a freemail service like gmail.com, gmx.net, hotmail.com or another. As it's just the first step it does not matter whether it's Google or not, only that you can send and receive emails through it via common protocols like SMTP and POP3, and that it is not the account you handle important mail with, since data loss could occur while experimenting.
    2. Make sure your freemailer account is configured to allow sending/receiving email via SMTP and POP3 from a mail client rather than only through their web page. Some freemailers also require a different password for the mail client than for logging into their portal (which is good). Validate with your mail client that sending and receiving work with those credentials, and note the protocols, port numbers and login mechanisms your mail client may have discovered.
    3. Set up your mail server (mailcow if you like) and connect it to your freemailer account, maybe first for sending via SMTP (send a mail to your real account), then for receiving, maybe via POP3, testing that by sending a mail from your real account to the freemailer one.
    4. Search for a cheap (you are still experimenting, right?) email service that lets you use your own domain, and set it up; they likely also have FAQs on how to configure your domain's DNS to use their MX servers (see the example records after this list). According to https://www.techradar.com/news/best-email-provider NeoMail (https://neo.space/) seems a good choice. I'd suggest getting a separate domain for experimenting from a different company (I use name.com), so you become more aware of how everything works together and can also swap out parts more easily later if your needs change. Domains are usually cheap, a few bucks per year, and domain services usually provide simple ways to define records like, in this case, the MX and SPF records you need/want for email to be delivered to that email service.
    5. Once you have set up the DNS records and your mail provider's account for sending/receiving, try to connect your holy email cow to it and experiment with it, also sending from/to your real mail account, and let it run for a while. Look into topics like DMARC and DKIM, and use SPF/DKIM/DMARC online check tools to see whether the setup works as you like. Based on that experience you will probably have ideas on how to go on with it.
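
    For step 4, the DNS records you define at the domain service typically look roughly like this; the MX hosts and the SPF include are placeholders, your email provider's docs have the real values:

    ```
    ; zone file fragment for the experiment domain (values are placeholders)
    mydomain.tld.   3600  IN  MX   10 mx1.mailprovider.example.
    mydomain.tld.   3600  IN  MX   20 mx2.mailprovider.example.
    mydomain.tld.   3600  IN  TXT  "v=spf1 include:spf.mailprovider.example -all"
    ```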

    SPF, DKIM and DMARC are good for preventing malicious parties from sending emails in your name to third parties. A mail server works fine without them, but they are good practice and might prevent your domain (not your IP) from being blacklisted because of spam that you never sent but that appears to originate from your domain and cannot be distinguished from your genuine emails precisely because the SPF, DKIM and DMARC records are missing. SPF and DMARC are DNS-only settings, while DKIM involves crypto keys you create for signing outgoing emails; their public parts are again published as DNS records so everyone can check that a signature really comes from your domain. I don't know if or how mailcow supports DKIM, but it should at least be possible ;-)
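
    As DNS records those look roughly like this; the selector, the key and the policy are placeholders, and the DKIM TXT value is whatever your mail software (or mailcow, if it supports it) generates for you:

    ```
    ; dkim: public key published under <selector>._domainkey (key truncated placeholder)
    mail._domainkey.mydomain.tld.  3600  IN  TXT  "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."

    ; dmarc: tells receivers what to do when spf/dkim checks fail, and where to report
    _dmarc.mydomain.tld.           3600  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@mydomain.tld"
    ```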



  • Hm, that sounds to me like literally any regular webhosting service that also offers email (like every such service I know of), then maybe used together with IMAP (or POP, if you wish). If you want to connect servers to it for sending mail, then "smarthost" or "satellite system" is the configuration you are looking for for your own MTA. To get received emails out of that service, the most common way is POP3 (still common because seemingly every service offers it for compatibility), but other protocols would be faster, like immediate receive using NOTIFY within IMAP. There are more options too, but those depend on what the service offers, like maybe forwarding your mails, once received by them, to your own server via SMTP or other protocols, depending on what they implemented. I think there is no "twist" in that, and, from what I understand of what you want, it is a quite common thing.

    I for myself don't want third parties to be able to read my emails directly, so I run my own mail servers: tiny rented VMs at providers, while my real email server is my home server, which uses those VMs as a "smarthost" and also pulls emails from them immediately. My mail clients are configured to connect to those VMs, but that connection is relayed through a VPN to my home server. So I think my setup is a bit like what you want, except that I host everything myself, and I don't use mailcow, though it looks like I use the same software mailcow uses underneath. I guess you are mainly bound to what mailcow offers if you limit yourself to it ;-)
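
    The "pulls emails immediately" part can be done with fetchmail's idle option on an IMAP poll, for example; a hypothetical fragment with an invented server name:

    ```
    # fetchmailrc fragment -- keep the IMAP connection open and fetch on arrival
    poll imap.myvm.example proto IMAP
        user "me" pass "secret"
        ssl
        idle          # wait in IMAP IDLE instead of polling on an interval
    ```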





  • > Democracy is mathematically impossible.

    If democracy were not possible, how come the Greeks practiced democracy, and how is it said that they were once overrun in a war precisely because they were democratic? If something was the cause for the turn of a war, I pretty much believe it really existed, no matter what some kind of half-baked formulas once "predicted".

    If democracy existed and your math says that's not possible, I'd guess your math might simply be "slightly" wrong about it, or was created with (un)intentional biases in mind ;-)

    Just to note:

    In the history of human predictions based on thought-through, verbally/mathematically described rules, the most common finding afterwards was that those rules, and the predictions made with them, were just fundamentally wrong and biased.


  • smb@lemmy.ml to Memes@lemmy.ml · Voting for the lesser evil is still evil

    A system where you get served only two options to vote for, but are held responsible for the outcome instead of those who limited the available options in the first place?

    Eh yes, you are right, this is stupid.

    As a completely unrelated side note:

    "Winner takes it all" is the actual opposite of democracy, no matter how the voting was done, and that fact can already be read 1:1 in those four simple words 😉






  • You're welcome.

    What I'd suggest… a general rule that I like to always follow is to use a test system for everything new. But that does not have to be a full separate system every time.

    Let's say you have your mailbox and want to try fetching new mail from it with fetchmail. You could use the UIDL mechanism to fetch every mail only once and otherwise leave them all on the server, but I like it a bit more secure: create a second email address/account at your mail provider's service only for testing. That way you can test the mechanisms however you like without even touching your real inbox (maybe even fill it up with large emails and see how the system reacts; I once had an email account at a cheap provider that deadlocked inboxes when they were full…). Then, when everything is as you want it, switch the account and password (or create another config file for fetchmail) and you're done.

    Every change (not only fetchmail things) can be tested this way before going live. Filtering could be done with procmail, for example, but if the MDA that procmail calls somehow exits with success even though the email really wasn't delivered, the email may be lost forever, depending on the settings of course. So fiddling with new stuff always carries the risk of not fiddling correctly ;-)
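
    A minimal sketch of what such a test config could look like in fetchmail, with an invented throwaway POP3 account:

    ```
    # ~/.fetchmailrc-test (chmod 600) -- separate config for the sacrificial account
    poll pop.mailprovider.example proto POP3 uidl    # uidl: remember what was already fetched
        user "testbox-only" pass "test-password"
        ssl
        keep                                         # never delete anything on the server side
    ```

    Run it with fetchmail -f ~/.fetchmailrc-test -v until you trust what it does, then switch the credentials (or the config file) over.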

    Have fun!


  • It's possible to tell your MTA (like postfix) to use another MTA for all mail, or only for some domains etc., so using a third party to play the internet-facing service, then getting the mails with fetchmail and storing them in a dovecot server, is easy. On the sending side you could use your standard email client (i.e. Thunderbird on the PC or K-9 Mail on the smartphone) to hand mail to the postfix instance that also sits on the server hosting your dovecot service. The MTA there takes the mail and delivers it according to rules, which could simply be: relay all outgoing email through the freemailer's MTA using your account's username/password. I am doing this, except that the "external" mail system is my own servers as well; I just don't want emails to stay too long on VMs in a datacenter where I have no access to the physical disks in case something goes wrong.
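
    The postfix side of "use another MTA for all mail" is mostly a relayhost plus SASL credentials; a sketch with placeholder names:

    ```
    # /etc/postfix/main.cf (fragment)
    relayhost = [smtp.freemailer.example]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_tls_security_level = encrypt

    # /etc/postfix/sasl_passwd, then run: postmap /etc/postfix/sasl_passwd
    [smtp.freemailer.example]:587    yourlogin:yourpassword
    ```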

    A Raspberry Pi is sufficient for such a setup (I am using a Pi 4 currently, but for email only I'd say a 3 or older would do too). Adding a disk via USB makes storage huge and cheap; I use two USB SSDs in a RAID 1. That server could be reachable only through VPN if you wish, depending on your skills and needs (I mainly use SSL client certificates, which are supported by K-9 Mail and Thunderbird, so it fits seamlessly to connect through a haproxy that authenticates those certificates before proxying the plain connection to the Pi). Clients like Thunderbird can store all emails offline (configure download-or-not per IMAP folder), making searches easy and quick, while my K-9 client can search locally or on the server as needed.
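
    A rough sketch of how such a haproxy relay could look on the internet-facing VM, assuming it terminates TLS, demands a client certificate signed by your own CA, and forwards the plain IMAP connection over the VPN to the Pi (all names, paths and addresses are invented):

    ```
    # /etc/haproxy/haproxy.cfg fragment
    frontend imaps_in
        mode tcp
        # server cert+key in one pem; clients must present a certificate
        # signed by the private CA in clients-ca.pem
        bind :993 ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/clients-ca.pem verify required
        default_backend home_imap

    backend home_imap
        mode tcp
        # the pi at home, reached through the VPN tunnel, speaking plain IMAP
        server pi 10.8.0.2:143
    ```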

    Maybe adjust the maximum mail size of your own MTA to exactly match (or stay slightly below) that of the freemailer you use, to prevent surprises with big emails that are accepted locally but can never be sent.
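
    In postfix that limit is a single setting; assuming, for example, that the freemailer caps messages at 25 MB:

    ```
    # /etc/postfix/main.cf -- stay at (or slightly below) the freemailer's limit
    message_size_limit = 26214400    # 25 MiB in bytes
    ```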

    It's possible to have a nextcloud instance on that same Pi acting as a webmailer, just in case (I really don't need it, but I've set it up anyway). nextcloud is also great for syncing/backing up your phone's files, pictures, contacts, notes, todo lists and calendar (I use davx5, opentasks and foldersync for that). There are other webmailers available, but installing/using nextcloud is not a bad idea either ;-)

    I also suggest setting up an automatic offsite backup with snapshots of that Pi, to cover the emails as well as the setup and its configs ;-)


  • smb@lemmy.ml to Linux@lemmy.ml · A word about systemd

    One example of a program that used to do multiple things is sfdisk: it used to make the kernel reload the new partition table, but that was not its main job, which is only changing the table. That extra functionality moved to blockdev, which is much closer to that kind of task, as it also triggers flushing buffers and, I think, setting a device's read/write status. I am fully OK with that change, since it moves code out of a program that doesn't need it into one that already does similar things, so other partitioning programs like gdisk, fdisk or parted can go the same way, and the maintainers of the reread-the-partition-table logic can concentrate on one solution in one place (in userspace) instead of opening issues at an unknown number of projects that also alter partition tables.

    The "do one thing" paradigm is good for the developers who maintain the code, and I very much appreciate their work. If all you want are one-day flies that either die or consume huge amounts of resources just to be kept alive (picture a mayfly in an emergency room, hooked to a heart-lung machine, while surgeons rush around trying to extend its life by a few more seconds), then you are fine with monolithic tools that can hardly be maintained and that suck all day because no one wants to fix any bugs, or cannot do so without creating new ones, thanks to the tangled dependency hell inside.
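
    For reference, the kernel-facing calls now live in blockdev and look like this (the device name is a placeholder):

    ```
    # after editing a partition table with sfdisk/fdisk/gdisk/parted:
    blockdev --rereadpt /dev/sda    # ask the kernel to reload the partition table
    blockdev --flushbufs /dev/sda   # flush buffers
    blockdev --getro /dev/sda       # query read-only state (--setro / --setrw change it)
    ```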

    The point is not a lack of examples of doing it wrong, but where one wants to be heading.