Can it Resolve DOOM? Game Engine in 2,000 DNS Records
12 points by freddyb
I know what DNS lookups do in theory. The part I missed in the article is how the records are uploaded to the DNS servers.
Like, do you need a registered domain for each 2 KB of data? The author mentions free key-value storage, but there's something I'm missing here.
You can have an arbitrary number of names in a DNS zone. If you have registered example.org you can create anything.example.org for free. The standard ways to alter a zone are to edit the zone file or use DNS UPDATE requests. In this case they used an astonishingly slow proprietary API to create the records:
The upload took about 15 minutes using the CloudFlare API.
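As a point of comparison, the standard DNS UPDATE path mentioned above can be scripted with BIND's `nsupdate` tool. A minimal sketch (the server, zone, and record names here are placeholders, not from the article):

```
; Feed this to nsupdate, e.g.: nsupdate -k key.conf batch.txt
server ns1.example.org
zone example.org
update add chunk0.example.org 300 TXT "first ~2000 characters of data"
update add chunk1.example.org 300 TXT "next chunk of data"
send
```

Many update requests can be batched into a single `send`, which is typically far faster than one HTTP round trip per record.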
Loading code as data should be somewhat simple with WASM, but alas... no DNS web APIs. You'd need a WASM runtime, which makes all of this a bit less fun.
Each TXT record can hold about 2,000 characters of text.
A TXT record can contain arbitrary binary data, but its framing with extra length bytes is rather annoying. I like to use a custom RRtype to hold arbitrary data instead.
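To illustrate the framing overhead: TXT RDATA is a sequence of "character-strings", each a 1-byte length followed by up to 255 bytes of data. A plain-Python sketch of splitting arbitrary binary data into that format and joining it back:

```python
def encode_txt_rdata(data: bytes) -> bytes:
    """Split data into <=255-byte chunks, each prefixed with its length byte."""
    if not data:
        return b"\x00"  # an empty TXT record is a single zero-length string
    out = bytearray()
    for i in range(0, len(data), 255):
        chunk = data[i:i + 255]
        out.append(len(chunk))
        out += chunk
    return bytes(out)

def decode_txt_rdata(rdata: bytes) -> bytes:
    """Strip the length bytes and concatenate the chunks back together."""
    out = bytearray()
    i = 0
    while i < len(rdata):
        n = rdata[i]
        out += rdata[i + 1:i + 1 + n]
        i += 1 + n
    return bytes(out)
```

The extra length bytes cost one byte per 255 bytes of payload; a custom RRtype avoids the chunking entirely because its RDATA is just opaque bytes.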
If you aren’t limiting yourself to UDP, then the maximum DNS message size is 65535 bytes. Allowing for overheads you can comfortably fit 65000 bytes in a record. (There are some minor advantages to keeping record sizes below the 16383-byte compression pointer limit.) Otherwise it’s best to keep the response within the MTU, which works out to about 1000 bytes per record, to avoid the truncation + TCP retry overhead.
It resolves all ~2,000 DNS queries in 10 to 20 seconds
I wonder how much concurrency they got. 100 qps is fairly slow, tho it’s probably wise not to thrash someone else’s recursive server at 10000 qps :-)
If you aren’t limiting yourself to UDP then you can pipeline bulk queries down a TCP connection: assemble the queries in a buffer, one write() to send them all, then collect the responses at leisure. (Assuming your server is friendly to bulk queries over TCP.) Or for a stunt like this, just AXFR the zone :-)
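The pipelining idea above can be sketched with nothing but the stdlib: build minimal DNS query messages, prefix each with the 2-byte length that DNS-over-TCP requires, and concatenate them into one buffer for a single write. (The record names in the usage line are made up for illustration.)

```python
import struct

def build_query(name: str, qtype: int = 16, qid: int = 0) -> bytes:
    """Assemble a minimal DNS query (QTYPE 16 = TXT) with the RD bit set."""
    # Header: ID, flags (0x0100 = recursion desired), QDCOUNT=1, rest zero.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # QCLASS 1 = IN

def frame_for_tcp(queries) -> bytes:
    """DNS-over-TCP prefixes each message with a 2-byte length; a whole
    batch can be concatenated and sent with one write()."""
    return b"".join(struct.pack(">H", len(q)) + q for q in queries)

# Usage sketch (hypothetical names; sock is a connected TCP socket):
#   buf = frame_for_tcp(build_query(f"rec{i}.example.org", qid=i)
#                       for i in range(2000))
#   sock.sendall(buf)   # then read responses back at leisure
```

Matching responses to queries by ID rather than by order matters here, since a pipelining-friendly server may answer out of order.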