
  • Technically, it might be faster, but that’s not usually the reason. Email servers generally have to do a lot of work to confirm email messages are not spam. That work usually takes significantly longer than any potential DNS savings. In fact, that spam checking is probably the reason you see the secondary domains used.

    When the main domain is used for many purposes (servers, users, printers, vendor communications, accounting communications, and so forth), it leaves a lot of room for misuse. Many pre-ransomware viruses would just send out thousands of emails per hour. A mass-mailing server can also drag down the domain’s reputation. There are just so many ways to tarnish the reputation of your email server or your email domain.

    Many spam analysis systems group subdomains and their parent domain together: the subdomains contribute to the domain score, and the domain score contributes to each subdomain’s score. To send a lot of email successfully, you need both your servers and your domains to have a very strong reputation, and any mark on that reputation can keep messages from reaching users. When large volumes of email need to be controlled, it’s hard to get everyone in the organization to follow the email rules (especially when the problem isn’t users, but viruses/hackers) and easy to just register a new, more strictly controlled domain.
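
    As a rough sketch of that coupling (purely illustrative: no real spam filter publishes its scoring, and every name and weight here is invented), one compromised subdomain can drag the shared reputation down for every other sender on the domain:

    ```python
    # Toy model: a parent domain's reputation and its subdomains' reputations
    # feed into each other. Scores run 0.0 (terrible) to 1.0 (spotless); the
    # 0.6/0.4 weights are made up for illustration only.
    def combined_scores(domain_score, subdomain_scores, domain_weight=0.6):
        avg_sub = sum(subdomain_scores.values()) / len(subdomain_scores)
        # Subdomain behaviour pulls the parent's effective score up or down...
        effective_domain = domain_weight * domain_score + (1 - domain_weight) * avg_sub
        # ...and the parent's score feeds back into every subdomain.
        return {
            sub: domain_weight * own + (1 - domain_weight) * effective_domain
            for sub, own in subdomain_scores.items()
        }

    print(combined_scores(
        domain_score=0.9,
        subdomain_scores={"mail.example.com": 0.95, "infected-printer.example.com": 0.2},
    ))
    # mail.example.com ends up below its own 0.95 because the compromised
    # printer subdomain dragged the shared reputation down.
    ```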

    Some of the recent changes in email policies/tech might change the game, but old habits die hard. Separate domains still tend to deliver more reliably, have potential security benefits, and can often work around IT or policy restrictions. They might phase out, but they might not. The benefit usually outweighs a slight disadvantage that 99% of people will never notice.
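
    For what it’s worth, here is a minimal way to peek at what a sending domain publishes, assuming the “recent changes” refer to sender-authentication policies like SPF and DMARC (my reading, not stated above) and using the third-party dnspython package against a made-up example.com:

    ```python
    # Illustrative sketch only: list the TXT records where SPF and DMARC
    # policies live. Requires the third-party "dnspython" package.
    import dns.resolver

    def email_auth_records(domain):
        records = {}
        for label, name in (("spf", domain), ("dmarc", f"_dmarc.{domain}")):
            try:
                answers = dns.resolver.resolve(name, "TXT")
                records[label] = [str(rdata) for rdata in answers]
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                records[label] = []
        return records

    # A dedicated sending domain can publish its own, tighter policies here,
    # independent of whatever the main corporate domain does. The SPF record
    # is the TXT entry starting with "v=spf1".
    print(email_auth_records("example.com"))
    ```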

    tl;dr

    Better controlled email reputation.


  • Time isn’t the only factor in adoption. Between the adoption of IPv4 and IPv6, the networking stack shifted away from networking companies like Novell and into the operating systems themselves, such as Windows, which delayed IPv6 support until Vista.

    When IPv4 was adopted, the networking industry was a competitive space. When IPv6 came around, it was becoming stagnant, much like Internet Explorer. It wasn’t until Windows Vista that IPv6 became an option, Windows 7 before professionals would consider it, and another few years after that before it was actually deployable in a secure manner (and that’s still questionable).

    Most IT support staff and developers couldn’t even play with IPv6 during the early 2000s because our operating systems and network stacks didn’t support it. Meanwhile, there was a boom of Internet-connected devices that only supported IPv4. A few other things affected adoption as well, but it really was a pretty bad time for an IPv6 migration. It’s a little better now, but “better” still isn’t very good.