Which brings me to a rather big gripe about other resolvers not respecting TTLs: 70% of the resolvers on https://www.whatsmydns.net/ reported they could not resolve my A records, while the other 30% were like "yeah, here you go", serving stale answers from their caches.
I fixed the glue and got everything back up. Now I need to write an automated script that checks every day whether my IP has changed and alerts me to update my glue record at my registrar.
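That daily check can be a small cron script. A sketch, where the OpenDNS `myip` lookup, the state-file path, and the mail alert are all just one way to do it:

```shell
#!/bin/sh
# Sketch: run daily from cron and alert when the public IP changes.
# The lookup service and alert mechanism are assumptions; use what you trust.

# One option for discovering the current public IP (needs network + dig):
get_public_ip() {
    dig +short myip.opendns.com @resolver1.opendns.com
}

# ip_changed STATE_FILE CURRENT_IP
# Returns 0 (and records the new IP) iff it differs from the last run.
ip_changed() {
    state="$1"
    current="$2"
    previous=""
    [ -f "$state" ] && previous=$(cat "$state")
    if [ "$current" = "$previous" ]; then
        return 1
    fi
    printf '%s\n' "$current" > "$state"
    return 0
}

# Example cron entry (hypothetical paths and address):
#   0 8 * * * . /usr/local/lib/check-ip.sh; ip_changed ~/.last_ip "$(get_public_ip)" \
#       && mail -s 'IP changed -- update glue record' me@example.com < ~/.last_ip
```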
I use a lot of mix-and-match scripts to maintain other aspects, like DNS challenges for e.g. Let's Encrypt: I use their hooks to update my DNS, re-sign the zone (DNSSEC), complete the challenge, then clean up. My more personal domains don't use DNSSEC, so I just skip right ahead.
I quite enjoy handling my own DNS records. BIND has been really good to me, and I love its `view "external"` and `view "internal"` scopes, so I can give the world my authoritative records while internally serving my intranet and other services like pihole (which sits behind BIND).
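For anyone curious what that split-horizon setup looks like, a minimal sketch (zone names, file paths, and networks are placeholders):

```
// named.conf sketch -- split-horizon with BIND views
acl "intranet" { 192.168.0.0/16; 127.0.0.1; };

view "internal" {
    match-clients { "intranet"; };
    recursion yes;                          // resolve anything for LAN clients
    zone "example.com" {
        type master;
        file "zones/example.com.internal";  // includes intranet-only names
    };
};

view "external" {
    match-clients { any; };
    recursion no;                           // authoritative answers only
    zone "example.com" {
        type master;
        file "zones/example.com.external";
    };
};
```

Views are matched top to bottom, so the internal view must come first or LAN clients will fall through to the external one.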
Break out a piece of mail, connect the dots, and you see their eyes light up with comprehension. "Oh, so that's how my computer gets to google.com; it's just like how my postman knows where to deliver my mail!" Then a critical component is demystified, and they want to learn more.
Running a DNS server is honestly such a good activity for folks in general.
Multiple comments in this thread refer to TLS certificates.
Why is payment to and/or permission from a third party "necessary" to encrypt data in transit over a computer network, whether it's a LAN or an internet. What does this phoney "requirement" achieve
For example, why is it "necessary" to purchase a domain name registration from an "ICANN-approved" registrar in order to use a TLS certificate
Is obtaining a domain name registration from an "ICANN-approved" registrar proof of identity for purposes of "authentication". What purpose does _purchasing_ a registration serve. For example, similar to "free" Let's Encrypt certificates, domain names could also be "free"
Whatever "authentication" ICANN and its "approved" registries and registrars are doing, e.g., none, is it possible someone else could do it better using a different approach
This comment is not asking for answers to these questions; the questions are rhetorical. Of course the questions may trigger defensive replies; everyone is entitled to an opinion and opinions may differ
You don't need ICANN for TLS or encryption. You can create your own CA and sign your own certs. In fact, this is typically how it's done to authenticate, for example, clients of a web server using certs (you install the cert in the browser).
You can use your CA to sign a cert for your ICANN-registered domain and install it in the web server; there are no internet police who are gonna stop you. Web browsers will complain about this "self-signed cert", unless you install your CA's public key in your browser. (Security-wise, you probably shouldn't go around installing random people's CA certs in your browser. You need to trust them not to issue certs for e.g. google.com. On the other hand you need to trust China and Morocco not to do that already, so maybe you're willing to accept that risk.)
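The own-CA flow is a few openssl invocations. A sketch, where all the names (`ca.*`, `server.*`, `intranet.example`) are placeholders; a serious setup would also set proper key-usage and CA constraints:

```shell
#!/bin/sh
# Sketch: a private CA plus one server cert signed by it.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# 1. The CA: a self-signed root you would import into your browsers.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -keyout ca.key -out ca.crt -subj "/CN=My Private CA"

# 2. A key and CSR for the server.
openssl req -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr -subj "/CN=intranet.example"

# 3. Sign the CSR with the CA, adding the SAN modern browsers require.
printf 'subjectAltName=DNS:intranet.example\n' > san.ext
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 825 -extfile san.ext -out server.crt

# 4. Check the chain validates against our CA (prints "server.crt: OK").
openssl verify -CAfile ca.crt server.crt
```

Point the web server at `server.key`/`server.crt`, and import `ca.crt` only into browsers that should trust it.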
> Is obtaining a domain name registration from an "ICANN-approved" registrar proof of identity for purposes of "authentication".
People make the mistake of conflating an FQDN or address with identity all the time. People point at resources in domains which don't exist (this includes DNS resources), and people register those abandoned domains and then click "forgot password" and take over whatever account was tied to that email address in that domain.
I don't know that ICANN requires any proof. There are CAs which have enhanced identity verification, this applies to the certs they issue for both servers and clients / people.
> What purpose does _purchasing_ a registration serve.
Makes you a member of ICANN's club. There are pseudo-TLDs which are registered in ICANN's tree where you can register a (sub)domain, without interacting with ICANN at all.
Rhetorically speaking, of course.
Better yet, set up ssh to the Proxmox server and ask Claude Code to set it up for you; works like a charm! Claude can call ssh and dig and verify that your DNS chains work, it can test your firewall and ports (basically running pen tests against yourself..), and it can sort out almost any issue (I had an Intel wifi card with firmware locks on broadcasting in the 5 GHz spectrum in AP mode; MediaTek doesn't have them, and Claude helped try to override the firmware in the kernel, but the Intel firmware won't budge). It can set up automatic nightly updates that are safe, it can help you set up recovery/backup plans (which run before updates), it can automate certain Proxmox tasks (periodic snapshotting of VMs), and best of all, it can document the entire infrastructure comprehensively each time I make changes to it.
For me, that means doing routing, DNS, VPN, and associated stuff with one box running OpenWRT. It works. It's ridiculously stable. And rather than having a number of things that could break the network when they die, I only have 1 thing that can do so.
That box currently happens to be a Raspberry Pi 4 that uses VLANs as Ethernet port expanders, but it is also stable AF with a [shock! horror!] USB NIC. I picked that direction years ago mostly because I have a strong affinity towards avoiding critical moving parts (like cooling fans) in infrastructure.
But those details don't matter. Any single box running OpenWRT, OPNsense, pfSense, Debian, FreeBSD, or whatever, can behave more-or-less similarly.
[1]: Yeah, so about that. If the real-world MTBF for a system that relies upon 1 box is 10 years, then the MTBF for a system relying on 2 boxes to both keep working is only 5 years. Less is more.
The last few days I've been migrating everything to the luadns format, stored in GitHub, with GitHub Actions triggering a script to convert it to octodns and apply it.
I could have just used either on its own, but I like the luadns format and didn't want to be stuck using them as a provider.
Of course I am the only user. But YAGNI works for me.
stub resolver (client) -> OPTIONAL forwarding resolver (server) -> recursing / caching resolver (server) -> authoritative server. "Personal DNS server" doesn't disambiguate whether your objective is recursive or authoritative... or both (there is dogma about not using the same server for both auth and recursion, if you're not running your resource as a public benefit you can mostly ignore it). If it's recursive I don't know why you'd run it in the cloud and not on-prem.
You'll find that you can restrict clients based on IP address, and you can configure what interfaces / addresses the server listens on. The traditional auth / nonrepudiation mechanism is TSIG, a shared secret. Traditionally utilized for zone transfers, but it can be utilized for any DNS request.
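In BIND that looks roughly like this (generate the secret with `tsig-keygen`; the key name and secret here are placeholders):

```
// named.conf sketch: a TSIG key gating zone transfers
key "xfer-key" {
    algorithm hmac-sha256;
    secret "base64-secret-goes-here==";   // output of: tsig-keygen xfer-key
};

zone "example.com" {
    type master;
    file "zones/example.com";
    allow-transfer { key "xfer-key"; };   // only holders of the key may AXFR
};
```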
The traditional mechanism for encryption has been tunnels (VPNs), but now we have DoH (web-based DNS requests) and DoT (literally putting nginx in front of the server as a TCP connection terminator if it's not built in). These technologies are intended to protect traffic between the client and the recursing resolver. Encryption between recursing resolvers and auths is a work in progress. DNSSEC will protect the integrity of DNS traffic between recursives and auths.

I don't know how big your personal network is; for privacy / anonymity of the herd you might want to forward your local recursing resolver's traffic to a cloud-based server and co-mingle it with some additional traffic. Check the servers' documentation to see if you can protect that forwarder -> recursive traffic with DoT, or you're not gaining any additional privacy. It's extra credit and mostly voodoo if you don't know what you're doing.

I don't bother; I let my on-prem recursives reach out directly to the auths. Once the DNS traffic leaves my ISP it's all going in different directions, or at least it should be, notwithstanding the pervasive centralization of what passes for the federated / distributed internet at present.
You could also add whitelisting on your DNS server for known IPs, or at least ranges, to limit exposure; add rate limiting / detection of patterns you wouldn’t exhibit, etc.
You could rotate your dns endpoint address every x minutes on some known algorithm implemented client and server side.
But in the end it’s mostly security through obscurity, unless you go via your own tailnet or similar
If you want DNS that is only for you, edit your hosts file.
All I can think of is that it adds obscurity, in that it makes the address of the Minecraft server more difficult to discover or guess (and thus keeps everything a bit more private/griefing-resistant while still letting kids play the game together).
And AXFR zone transfers are one way that DNS addresses leak. (AXFR is a feature, not a bug.)
As a potential solution:
You can set up DNS that resolves the magic hardcoded Minecraft server name (whatever that is) to the address of your choosing, and that has AXFR disabled. In this way, nobody will be able to discover the game server's address unless they ask that particular DNS server for the address of that particular name.
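With dnsmasq this is a one-liner, and dnsmasq has no AXFR support at all, so that part comes for free (the hostname and address below are placeholders, not the real lobby name):

```
# /etc/dnsmasq.conf -- answer the "magic" name ourselves,
# resolve everything else normally via the upstream servers
address=/lobby.example.net/203.0.113.10
```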
It's not airtight (obscurity never is), but it's probably fine. It increases the size of the haystack.
(Or... Lacking VPN, you can whitelist only the networks that the kids use to play from. But in my experience with whitelisting, the juice isn't worth the squeeze in a world of uncontrollably-dynamic IP addresses. All someone wants to do is play the game/access the server/whatever Right Now, but the WAN address has changed so that doesn't work until they get someone's attention and wait for them to make time to update the whitelist. By the time this happens, Right Now is in the past. Whitelisting generally seems antithetical towards getting things done in a casual fashion.)
So we're hosting our own minecraft server and a suitable connector for cross-play, and it's easy to join on tablets, computers and so on because there's a button that allows you to enter an address. But on the Switch, Microsoft in its wisdom decided that there'd be no "join random server" button. There are, however, some official realm servers, and they just happen to host a lobby, and the client understands some interface commands sent by the server (1). Some folks in the community devised a great hack: you host a lobby yourself that presents a list of servers of your choice. But to do that, you need to bend the DNS entries of a few select hostnames that host the "official" lobbies so that they now point to your lobby. Which means you need to run a resolver that is capable of resolving all hostnames, because you need to set it in the Switch's networking settings as the primary DNS server.
Now, there are people in the community who run resolvers, and that might be one option, but I'm honestly a bit picky about who gets to see which hostnames my kids' Switch wants to resolve.
Whitelisting networks is impossible - it's residential internet.
The reason I'd be interested in running this behind a VPN is that I don't want to run an open resolver and become part of an amplification attack. (And sadly, the Switch 1 does not have a sufficiently modern DNS stack so that I can just enable DNS cookies and be done with it. The Switch 2 supports it).
Sorry if this sounds complicated. It's just hacks on hacks on hacks. But it works.
(1) judging from the looks and feel, this is actually implemented as a minecraft game interface and the client just treats that as a game server. It even reports the number of players hanging out in the lobby.
Here are a few ideas:
1. Geoblocking. Not ideal, but it can make your resolver public for fewer people.
2. What if your DNS only answers queries for a single domain? Depending on the system, the fallback DNS server may handle other requests?
3. You could always hand out a device that connects to the WLAN. Think a cheap ESP32. It only needs to be powered on when doing the resolution. Then you have a bit more freedom: IPv6 RA + VPN, or try hijacking DNS queries (will not work with client isolation), or set it as the resolver (may need manual config on each LAN; impractical).
4. IP whitelist, but ask them to visit an HTTP server from their LAN if it does not work (the Switch has a browser, I think); this will give you the IP to allow, and you can even password-protect it.
I'd say 2 is worth a try. 4 is easy enough to implement, but not entirely frictionless.
On the DNS end, it seems the constraints are shaped like this:
1. Provides custom responses for arbitrary DNS requests, and resolves regular [global] DNS
2. Works with residential internet
3. Uses no open resolvers (because of amplification attacks)
4. Works with standalone [Internet-connected] Nintendo Switch devices
5. Avoids VPN (because #4 -- Switch doesn't grok VPN)
With that set of rules, I think the idea is constrained completely out of existence. One or more of them needs to be relaxed in order for it to get off the ground. The most obvious one to relax seems to be #3, open resolvers. If an open resolver is allowed then the rest of the constraints fit just fine.
DNS amplification can be mitigated well-enough for limited-use things like this Minecraft server in various ways, like implementing per-address rate limiting and denying AXFR completely. These kinds of mitigations can be problematic with popular services, but a handful of Switch devices won't trip over them at all.
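In BIND, for instance, both of those mitigations are a few lines of config (a sketch; tune the numbers to taste):

```
// named.conf sketch: response rate limiting + no zone transfers
options {
    rate-limit {
        responses-per-second 10;   // per-client cap; generous for a few Switches
        window 5;
    };
    allow-transfer { none; };      // deny AXFR entirely
};
```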
Or: VPN could be used. But that will require non-zero hardware for remote players (which can be cheap-ish, but not free), that hardware will need power, and the software running on it will need to be configured for each WLAN it is needed on. That path is something I wouldn't wish upon a network engineer, much less a kid with a portable game console. It's possible, but it feels like a complete non-starter.
(3) would be easy to handle if DNS Cookies were sufficiently well supported, because they solve reflection attacks, which are the most prominent issue. Rate limiting also helps.
At the moment I've settled on selectively running the DNS server when the kids want to play, because we're still at the supervised, pre-planned play-session stage. And I hope that by the time they plan their own sessions, they've all moved on to a Switch 2.
For example, any Red Hat-based Linux distro comes with firewalld. You could set rules that by default block all external connections and only allow your kids' and their friends' IP addresses to connect to your server (and only on port 53). So your DNS server will only receive connections from the whitelisted IPs. Of course the downside is that if their IP changes, you'll have to troubleshoot and whitelist the new IP, and there is the tiny possibility that they might be behind CGNAT, where their IPv4 is shared with some random person who is looking to exploit DNS servers.
But I'd say that is a pretty good solution, no one will know you are even running a DNS service except for the whitelisted IPs.
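A firewalld sketch of that whitelist (the addresses are placeholders; run as root and note the default zone keeps port 53 closed):

```
# a dedicated zone that only the whitelisted sources fall into
firewall-cmd --permanent --new-zone=dns-friends
firewall-cmd --permanent --zone=dns-friends --add-source=203.0.113.7
firewall-cmd --permanent --zone=dns-friends --add-source=198.51.100.42

# open only DNS within that zone
firewall-cmd --permanent --zone=dns-friends --add-port=53/udp
firewall-cmd --permanent --zone=dns-friends --add-port=53/tcp
firewall-cmd --reload
```

When an IP changes, it's one `--add-source` (and one removal) plus a reload.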
What you want to do is, on each LAN that has a Switch that you want to play on your specific Minecraft server, report that the IP for the hostname the Switch would ordinarily connect to is the server that you're hosting?
If you're using OpenWRT, it looks like you can add the relevant entries to '/etc/hosts' on the system and dnsmasq will serve up that name data. [0] I'd be a little shocked (but only a little) if something similar were impossible on all non-OpenWRT consumer-grade routers.
My Switch 1 is more than happy to use the DNS server that DHCP tells it to. I assume the Switch 2 is the same way.
[0] <https://openwrt.org/docs/guide-user/base-system/dhcp.dnsmasq>
Unless of course you invest five to six figures (in US dollars) worth of equipment, by which point you can look back and ask yourself whether you were better off with Google Cloud DNS, AWS Route 53 and the likes.
The main thing I can think of is DNS amplification attacks, but that's more your DNS server being used as part of a DDoS attack rather than being targeted for one. Also (afaik) resolvers are more common targets for DNS amplification than authoritative.
One must distinguish between application-layer attacks (HTTP/S) and network-layer attacks (UDP floods); cloud vendors won’t implicitly protect you from network-layer attacks unless you purchased such a service from them.
Far cry from needing $1e6 HW ourselves.
[0]: https://www.knot-dns.cz/docs/3.5/singlehtml/index.html#autom...
And when using such turn-key DNSSEC support, I think there's very little risk to enabling it. While other commenters pointing out its marginal utility are correct, turn-key DNSSEC support that Just Works™ de-risks it enough for me that the relatively marginal utility just isn't a concern.
Plus, once you've got DNSSEC enabled, you can at the very least start to enjoy stuff like SSHFP records. DANE may not have any real-world traction, but who knows what the future may bring.
Simplistically you need a DS record at your registrar, then sign your zones before publishing. You can cheat and make the KSK not expire, which saves some aggravation. I've rolled my own by hand for 10 yrs with no dnssec related downtime
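The by-hand flow with BIND's tools looks roughly like this (zone name and algorithm choice are illustrative; RFC 6781 covers key-rollover practice):

```
# One-time: generate a KSK and a ZSK (ECDSA P-256 keeps responses small)
dnssec-keygen -a ECDSAP256SHA256 -f KSK example.com
dnssec-keygen -a ECDSAP256SHA256 example.com

# Each time the zone changes: bump the SOA serial, then re-sign
# (-S smart-signs using the keys found in the -K directory)
dnssec-signzone -S -K . -o example.com db.example.com

# Derive the DS record (SHA-256) to paste into the registrar's panel
dnssec-dsfromkey -2 Kexample.com.+013+*.key
```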
[1] DNSSEC Operational Practices https://datatracker.ietf.org/doc/html/rfc6781
That is to say, if you misconfigure it, or try to turn it off, you will have an invalid domain until the TTL runs out, and it's really just not worth the headache unless you have a real use case.
Did DNSSEC for company website, worked with zero maintenance for several years. On a cloud-provided DNS. Would want the same on self-hosted DNS too.
Yes, but with today's HTTPS/TLS usage it's almost irrelevant for normal websites.
If bad actors can create valid tls certs they can solve the dnssec problem.
I think you have it backwards: by not running DNSSEC it can mean bad actors (at least a certain level) can MITM the DNS queries that are used to validate ACME certs.
It is now mandated that public CAs have to verify DNSSEC before issuing a cert:
* https://news.ycombinator.com/item?id=47392510
So if you want to reduce the risk of someone creating a fake cert for one of your properties, you want to protect your DNS responses.
(disclaimer: I contribute a tiny bit to dnsdist.)
gawd just install webmin ffs
As the OP states, you can get a registrar to host a domain for you, and then you can create a subdomain anywhere you fancy, and that includes at home. Do get the glue records right, and do use dig to work out what is happening.
Now with a domain under your own control, you can use CNAME records in other zones to point at your zones and if you have dynamic DNS support on your zones (RFC 2136) then you can now support ACME ie Lets Encrypt and Zerossl and co.
Sadly certbot doesn't do (or didn't do) CNAME redirects for ACME. However, acme.sh and simple-acme do, and both are absolutely rock solid. Both of those projects are used by a lot of people and well-trodden.
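The CNAME-redirect pattern with acme.sh looks like this (all names are placeholders): point the challenge label at a zone you can update dynamically, then issue with `--challenge-alias`:

```
# One-time record in the client's static zone (zone-file syntax):
#   _acme-challenge.client.example.  IN  CNAME  _acme-challenge.acme.example.net.
#
# acme.example.net is the RFC 2136-updatable zone; acme.sh follows the CNAME
# and writes the TXT record there via nsupdate:
acme.sh --issue -d client.example -d '*.client.example' \
        --dns dns_nsupdate --challenge-alias acme.example.net
```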
acme.sh is ideal for unix gear, and if you follow this bloke's method of installation: https://pieterbakker.com/acme-sh-installation-guide-2025/ it's usefully centralised.
simple-acme is for Windows. It has loads of add-on scripts to deal with various scenarios. Those scripts seem to be deprecated but work rather well. There's quite a lot of magic here that an old-school Linux sysadmin is glad of.
PowerDNS auth server supports dynamic DNS and you can filter access by IP and TSIG-KEY, per zone and/or globally.
Join the dots.
[EDIT: Speling, conjunction switch]
https://github.com/ndilieto/uacme
Tiny, simple, reliable. What more can you ask?
It's a chat server but with curl. You can try it here
curl -NT. https://chat.est.im/hackernews
(Note: IPv6 only for the moment)
This makes me so happy. Acme and certbot trying to do this is annoying, Caddy trying to get certs by default is annoying. I ended up on a mix of dehydrated and Apache mod_md but I think I like the look of uACME because dehydrated just feels clunky
acme.sh was too garish for my liking, even as a guy that likes his fair share of shell scripts. And obviously certbot is a non-starter because of snap.
The new setup uses uacme and nsupdate to do DNS-01 challenges. No more fiddling with issues in the web server config for a particular virtual host, like some errant rewrite rule that prevents access to .well-known/.
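Under the hood the hook amounts to an nsupdate session like this (key path, server, zone, and token are placeholders):

```
nsupdate -k /etc/acme/tsig.key <<'EOF'
server ns1.example.com
zone example.com
update delete _acme-challenge.example.com. TXT
update add _acme-challenge.example.com. 60 IN TXT "token-from-the-acme-client"
send
EOF
```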
Are you certain? Not at a real machine at the moment so hard for me to dig into the details but CNAMEing the challenge response to another domain is absolutely supported via DNS-01 [0] and certbot is Let's Encrypt's recommended ACME client: [1]
... which is a very common pattern I've seen hundreds (thousands?) of times. The issue you may have run into is that CNAME records are NOT allowed at the zone apex, for RFC 1033 states:
... of course making it impossible to enter NS, SOA, etc. records for the zone root when a CNAME exists there.

P.S. Doing literally fucking anything on mobile is like pulling teeth encased in concrete. Since this is how the vast majority of the world interfaces with computing, I am totally unsurprised that people are claiming 10x speedups with LLMs.
[0] https://letsencrypt.org/docs/challenge-types/
[1] https://letsencrypt.org/docs/client-options/
I use acme.sh which does support it: https://news.ycombinator.com/item?id=47066072
...but they also don't say how to specify the zone to be updated like acme.sh does: https://github.com/acmesh-official/acme.sh/blob/master/dnsap...
So say you want a cert for *.foo.com, and you have:
...I can make certbot talk to the foo.bar.com DNS server, but it tries to add the TXT record for _acme-challenge.foo.com, which that DNS server obviously rejects (and even if it accepted it, that obviously wouldn't work). I'd be happy to hear there's a way to do it that I missed. Also I'm specifically talking about the rfc2136 support; maybe some of the proprietary certbot backends do support this.
EDIT: Here are more references:
https://github.com/certbot/certbot/issues/6566
https://github.com/certbot/certbot/pull/5350
https://github.com/certbot/certbot/pull/6644
Without CNAME redirect I wouldn't be able to automatically renew wildcard ssl for client domains with dns that has no API. Even if they do have an API, doing it this way stops me from needing to deal with two different APIs
This sounds like you are complaining about Ubuntu, not the software you wish to install in Ubuntu.
[0] https://certbot.eff.org/