2025-06-23 02:05:00
When I started blogging, writing was a struggle (though it was a pursuit I loved) - more specifically, finding something to write about. I would ceaselessly scroll on social media and my curated blog rotation, sifting through fragments of sentences, words, and expressions that ignited the faintest flicker of inspiration.
Looking back, I realized I was chasing a grandiose, life-changing epiphany to blog about, a monumental profundity that would change the world. I believed I needed a remarkable life to justify writing about it, to earn the right to speak. Most of my writer's block, I now see, was rooted in that pressure to say something extraordinary.
I began collecting words from the world around me - random snippets from conversations, excerpts from books and songs, a fleeting memory - which I immediately jot down in a commonplace book for later revisiting. To me, that was the beauty of it - how I start with something as peripheral as a grounded, personal glimpse, and it unfolds, blooming into a blog post that someone can read.
There was something moving about transforming the mundane into meaning, about offering a perspective that felt ordinary, only to realize it carries more weight than you expected. These small snippets I captured from my life meant something to me, and that was enough to make me want to share. Over time, I wrote with less resistance and more rhythm. I started showing up, even on days I felt like I had nothing to say. In turn, I became more attentive to my own life. The details I once overlooked became worth writing about.
You don't need a grand life to write. You need presence. Attention. An utter willingness to discern the details of your own life: questions you ask yourself in the quiet moments, conversations that make you ponder, complex feelings you want to untangle. Not everything I post is groundbreaking, but I know it’s honest. That has become enough for me.
2025-06-22 14:26:00
This post serves as my notes; even though I attempt to explain what I have set up/built and how, I do not owe anyone any explanation. Do NOT expect anything.
My blog is my garden.
So I did another project recently, and finally decided to make some notes (write about it) now. It is a custom DNS server written in Go. DO NOTE it is not 100% complete yet and there will be missing features or good-to-haves. I do this just for fun, practice, and learning. Because what is computing if not fun?
This program is called dizer. Do note this is my attempt to surpass the previous such project in Python mentioned here. If you want to learn more about DNS, please visit here.
| Feature | dns-go |
|---|---|
| Basic DNS Resolution | ✅ Done |
| Blocklist Support | ✅ Done |
| LRU Caching | ✅ Done |
| Concurrent Request Processing | ✅ Done |
| DNSSEC Record Support | ✅ Done |
| DANE Support (TLSA, SVCB) | ✅ Done |
| Advanced Blocklist Parsing | ✅ Done |
| Full DNS Record Type Support | ✅ Done |
| Customizable Cache TTL | ✅ Done |
| UDP Buffer Optimization | ✅ Done |
| Proper Error Handling | ✅ Done |
| IPv6 Support (AAAA Records) | ✅ Done |
| Zone Transfer (AXFR) | ❌ Not Done |
| DNS over HTTPS (DoH) | ❌ Not Done |
| DNS over TLS (DoT) | ❌ Not Done |
Our custom DNS server leverages several standard library packages and one external dependency. The bytes, context, crypto/md5, encoding/binary, fmt, io, log, net, net/http, os, os/signal, path/filepath, regexp, strings, sync, syscall, time, and unicode packages handle data manipulation, networking, file operations, signal handling, and concurrency. The external package github.com/alitto/pond provides a worker pool for concurrent request processing, improving performance on multi-core systems.
package main
import (
"bytes"
"context"
"crypto/md5"
"encoding/binary"
"fmt"
"io"
"log"
"net"
"net/http"
"os"
"path/filepath"
"regexp"
"strings"
"sync"
"time"
"unicode"
"github.com/alitto/pond"
)
I'm using the following blocklists by default; feel free to add more.
// Blocklist URLs
var BlocklistURLs = []string{
"https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts",
"https://raw.githubusercontent.com/hagezi/dns-blocklists/main/hosts/pro-compressed.txt",
"https://someonewhocares.org/hosts/",
}
Type aliases like Domain, CacheKey, TransactionID, QueryType, QueryClass, and TTL improve readability and type safety. Constants define DNS record types (e.g., TypeA, TypeAAAA, TypeNS), classes (e.g., ClassIN), response codes (e.g., RCodeNoError, RCodeNXDomain), and header flags (e.g., FlagQR, FlagRD). Configuration constants set defaults like the upstream server (DNSServer), cache size (CacheMaxSize), and port (ServerPort).
// Type aliases for better readability and type safety
type (
Domain = string
CacheKey = string
TransactionID = uint16
QueryType = uint16
QueryClass = uint16
TTL = uint32
)
// DNS record types - comprehensive list including DNSSEC and modern records
const (
TypeA QueryType = 1 // IPv4 address
TypeNS QueryType = 2 // Name server
TypeMD QueryType = 3 // Mail destination (obsolete)
TypeMF QueryType = 4 // Mail forwarder (obsolete)
TypeCNAME QueryType = 5 // Canonical name
TypeSOA QueryType = 6 // Start of authority
TypeMB QueryType = 7 // Mailbox domain name
TypeMG QueryType = 8 // Mail group member
TypeMR QueryType = 9 // Mail rename domain name
TypeNULL QueryType = 10 // Null resource record
TypeWKS QueryType = 11 // Well known service
TypePTR QueryType = 12 // Pointer
TypeHINFO QueryType = 13 // Host information
TypeMINFO QueryType = 14 // Mailbox information
TypeMX QueryType = 15 // Mail exchange
TypeTXT QueryType = 16 // Text strings
TypeRP QueryType = 17 // Responsible person
TypeAFSDB QueryType = 18 // AFS database location
TypeX25 QueryType = 19 // X.25 PSDN address
TypeISDN QueryType = 20 // ISDN address
TypeRT QueryType = 21 // Route through
TypeNSAP QueryType = 22 // NSAP address
TypeNSAPPTR QueryType = 23 // NSAP pointer
TypeSIG QueryType = 24 // Security signature
TypeKEY QueryType = 25 // Security key
TypePX QueryType = 26 // X.400 mail mapping
TypeGPOS QueryType = 27 // Geographical position
TypeAAAA QueryType = 28 // IPv6 address
TypeLOC QueryType = 29 // Location information
TypeNXT QueryType = 30 // Next domain (obsolete)
TypeEID QueryType = 31 // Endpoint identifier
TypeNIMLOC QueryType = 32 // Nimrod locator
TypeSRV QueryType = 33 // Service locator
TypeATMA QueryType = 34 // ATM address
TypeNAPTR QueryType = 35 // Naming authority pointer
TypeKX QueryType = 36 // Key exchanger
TypeCERT QueryType = 37 // Certificate
TypeA6 QueryType = 38 // A6 (obsolete)
TypeDNAME QueryType = 39 // DNAME
TypeSINK QueryType = 40 // SINK
TypeOPT QueryType = 41 // OPT (EDNS)
TypeAPL QueryType = 42 // APL
TypeDS QueryType = 43 // Delegation signer (DNSSEC)
TypeSSHFP QueryType = 44 // SSH Key Fingerprint
TypeIPSECKEY QueryType = 45 // IPSECKEY
TypeRRSIG QueryType = 46 // RRSIG (DNSSEC)
TypeNSEC QueryType = 47 // NSEC (DNSSEC)
TypeDNSKEY QueryType = 48 // DNSKEY (DNSSEC)
TypeDHCID QueryType = 49 // DHCID
TypeNSEC3 QueryType = 50 // NSEC3 (DNSSEC)
TypeNSEC3PARAM QueryType = 51 // NSEC3PARAM (DNSSEC)
TypeTLSA QueryType = 52 // TLSA (DANE)
TypeSMIMEA QueryType = 53 // S/MIME cert association
TypeHIP QueryType = 55 // Host Identity Protocol
TypeNINFO QueryType = 56 // NINFO
TypeRKEY QueryType = 57 // RKEY
TypeTALINK QueryType = 58 // Trust Anchor LINK
TypeCDS QueryType = 59 // Child DS (DNSSEC)
TypeCDNSKEY QueryType = 60 // Child DNSKEY (DNSSEC)
TypeOPENPGPKEY QueryType = 61 // OpenPGP Key
TypeCSYNC QueryType = 62 // Child-to-Parent Synchronization
TypeZONEMD QueryType = 63 // Zone Message Digest
TypeSVCB QueryType = 64 // Service Binding
TypeHTTPS QueryType = 65 // HTTPS Binding
TypeSPF QueryType = 99 // SPF (obsolete, use TXT)
TypeUINFO QueryType = 100 // UINFO
TypeUID QueryType = 101 // UID
TypeGID QueryType = 102 // GID
TypeUNSPEC QueryType = 103 // UNSPEC
TypeNID QueryType = 104 // NID
TypeL32 QueryType = 105 // L32
TypeL64 QueryType = 106 // L64
TypeLP QueryType = 107 // LP
TypeEUI48 QueryType = 108 // EUI48
TypeEUI64 QueryType = 109 // EUI64
TypeTKEY QueryType = 249 // Transaction Key
TypeTSIG QueryType = 250 // Transaction Signature
TypeIXFR QueryType = 251 // Incremental transfer
TypeAXFR QueryType = 252 // Transfer of an entire zone
TypeMAILB QueryType = 253 // Mailbox-related records
TypeMAILA QueryType = 254 // Mail agent RRs (obsolete)
TypeANY QueryType = 255 // All records
TypeURI QueryType = 256 // URI
TypeCAA QueryType = 257 // Certification Authority Authorization
TypeAVC QueryType = 258 // Application Visibility and Control
TypeDOA QueryType = 259 // Digital Object Architecture
TypeAMTRELAY QueryType = 260 // Automatic Multicast Tunneling Relay
)
// DNS classes
const (
ClassIN QueryClass = 1 // Internet
ClassCS QueryClass = 2 // CSNET (obsolete)
ClassCH QueryClass = 3 // CHAOS
ClassHS QueryClass = 4 // Hesiod
ClassANY QueryClass = 255 // Any class
)
// DNS response codes
const (
RCodeNoError uint16 = 0 // No error
RCodeFormErr uint16 = 1 // Format error
RCodeServFail uint16 = 2 // Server failure
RCodeNXDomain uint16 = 3 // Non-existent domain
RCodeNotImpl uint16 = 4 // Not implemented
RCodeRefused uint16 = 5 // Query refused
RCodeYXDomain uint16 = 6 // Name exists when it should not
RCodeYXRRSet uint16 = 7 // RR set exists when it should not
RCodeNXRRSet uint16 = 8 // RR set that should exist does not
RCodeNotAuth uint16 = 9 // Server not authoritative
RCodeNotZone uint16 = 10 // Name not contained in zone
RCodeBadVers uint16 = 16 // Bad OPT version
RCodeBadKey uint16 = 17 // Key not recognized
RCodeBadTime uint16 = 18 // Signature out of time window
RCodeBadMode uint16 = 19 // Bad TKEY mode
RCodeBadName uint16 = 20 // Duplicate key name
RCodeBadAlg uint16 = 21 // Algorithm not supported
RCodeBadTrunc uint16 = 22 // Bad truncation
RCodeBadCookie uint16 = 23 // Bad/missing server cookie
)
// DNS header flags
const (
FlagQR uint16 = 1 << 15 // Query/Response flag
FlagAA uint16 = 1 << 10 // Authoritative Answer
FlagTC uint16 = 1 << 9 // Truncated
FlagRD uint16 = 1 << 8 // Recursion Desired
FlagRA uint16 = 1 << 7 // Recursion Available
FlagZ uint16 = 1 << 6 // Zero
FlagAD uint16 = 1 << 5 // Authentic Data (DNSSEC)
FlagCD uint16 = 1 << 4 // Checking Disabled (DNSSEC)
)
// Configuration constants
const (
DNSServer = "9.9.9.9:53"
CacheMaxSize = 200 * 1024 * 1024
BlocklistCacheTTL = 24 * time.Hour
DefaultDNSTTL = 5 * time.Minute
ServerPort = ":853"
CacheDirName = "blocklist_cache"
MaxConcurrentTasks = 1000
MaxWorkers = 100
UDPBufferSize = 4096 // Increased for EDNS support
RequestTimeout = 5 * time.Second
)
The DNSHeader struct represents the DNS packet header, containing fields for the transaction ID, flags, and counts for questions (QDCount), answers (ANCount), name servers (NSCount), and additional records (ARCount). It forms the foundation for parsing and building DNS packets.
type DNSHeader struct {
ID uint16
Flags uint16
QDCount uint16
ANCount uint16
NSCount uint16
ARCount uint16
}
The DNSQuestion struct models a DNS query, storing the domain name, query type (e.g., TypeA), and query class (e.g., ClassIN). It’s used to parse incoming queries and construct responses.
type DNSQuestion struct {
Name string
Type QueryType
Class QueryClass
}
The DNSResourceRecord struct represents a DNS answer, authority, or additional record. It includes the name, type, class, time-to-live (TTL), data length (RDLength), and raw data (RData). This struct supports all DNS record types, including DNSSEC and DANE.
type DNSResourceRecord struct {
Name string
Type QueryType
Class QueryClass
TTL uint32
RDLength uint16
RData []byte
}
The CacheEntry struct is a generic type for cache entries, storing a value, expiration time, and size. It’s used in the LRU cache to manage DNS responses with TTL-based expiration.
type CacheEntry[T any] struct {
Value T
Expiration time.Time
Size int
}
The LRUCache struct implements a generic least-recently-used cache with size-based eviction. It uses a mutex for thread safety, tracks entries with a map, maintains an order slice for eviction, and calculates sizes via a provided function. Methods like Set, Get, evictOldest, and removeFromOrder manage cache operations.
type LRUCache[K comparable, V any] struct {
mu sync.RWMutex
maxSize int
currentSize int
entries map[K]*CacheEntry[V]
order []K
sizeCalc func(K, V) int
}
The BlocklistCache struct manages a blocklist of domains for filtering. It uses a sync.Map for O(1) lookups, tracks the last update time, and stores cache files. Regular expressions (domainRegex, ipPatterns, commentPatterns) enhance parsing of blocklist formats.
type BlocklistCache struct {
domains sync.Map
lastUpdate time.Time
mu sync.RWMutex
cacheDir string
domainRegex *regexp.Regexp
ipPatterns []*regexp.Regexp
commentPatterns []*regexp.Regexp
}
The DNSServerStruct struct is the core of the DNS server, containing an LRU cache for responses, a blocklist cache, a worker pool (pond.WorkerPool), and a UDP connection. It orchestrates query handling and server operations.
type DNSServerStruct struct {
dnsCache *LRUCache[CacheKey, []byte]
blocklistCache *BlocklistCache
pool *pond.WorkerPool
conn *net.UDPConn
}
The NewLRUCache function initializes a generic LRU cache with a maximum size and a size calculation function. It sets up the internal map and order slice for tracking entries.
func NewLRUCache[K comparable, V any](maxSize int, sizeCalc func(K, V) int) *LRUCache[K, V] {
return &LRUCache[K, V]{
maxSize: maxSize,
entries: make(map[K]*CacheEntry[V]),
order: make([]K, 0),
sizeCalc: sizeCalc,
}
}
The Set method adds or updates a cache entry, evicting older entries if the cache exceeds its size limit. It updates the order to mark the entry as recently used and adjusts the current size.
func (c *LRUCache[K, V]) Set(key K, value V, ttl time.Duration) {
c.mu.Lock()
defer c.mu.Unlock()
size := c.sizeCalc(key, value)
// Evict old entries if necessary
for c.currentSize+size > c.maxSize && len(c.order) > 0 {
c.evictOldest()
}
// Remove existing entry if present
if existing, exists := c.entries[key]; exists {
c.currentSize -= existing.Size
c.removeFromOrder(key)
}
// Add new entry
c.entries[key] = &CacheEntry[V]{
Value: value,
Expiration: time.Now().Add(ttl),
Size: size,
}
c.order = append(c.order, key)
c.currentSize += size
}
The Get method retrieves a cache entry, checking for expiration and updating the order to mark it as recently used. It returns the value and a boolean indicating success.
func (c *LRUCache[K, V]) Get(key K) (V, bool) {
c.mu.RLock()
entry, exists := c.entries[key]
c.mu.RUnlock()
if !exists {
var zero V
return zero, false
}
if time.Now().After(entry.Expiration) {
c.mu.Lock()
delete(c.entries, key)
c.currentSize -= entry.Size
c.removeFromOrder(key)
c.mu.Unlock()
var zero V
return zero, false
}
// Move to end (most recently used)
c.mu.Lock()
c.removeFromOrder(key)
c.order = append(c.order, key)
c.mu.Unlock()
return entry.Value, true
}
The evictOldest method removes the least recently used entry to free space, while removeFromOrder removes a key from the order slice during updates or eviction.
func (c *LRUCache[K, V]) evictOldest() {
if len(c.order) == 0 {
return
}
oldest := c.order[0]
if entry, exists := c.entries[oldest]; exists {
delete(c.entries, oldest)
c.currentSize -= entry.Size
}
c.order = c.order[1:]
}
func (c *LRUCache[K, V]) removeFromOrder(key K) {
for i, k := range c.order {
if k == key {
c.order = append(c.order[:i], c.order[i+1:]...)
break
}
}
}
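Before moving on, here is a minimal usage sketch of the generic cache, assuming the definitions above; the key, value, and size function below are purely illustrative and not part of the server itself.
// Sketch: exercising the generic LRU cache (illustrative values only)
func exampleLRUUsage() {
cache := NewLRUCache[string, string](64, func(k, v string) int {
return len(k) + len(v)
})
cache.Set("example.com:1:1", "cached response bytes", 5*time.Minute)
if v, ok := cache.Get("example.com:1:1"); ok {
fmt.Println("hit:", v) // now the most recently used entry
}
if _, ok := cache.Get("missing-key"); !ok {
fmt.Println("miss, as expected")
}
}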
The NewBlocklistCache function creates a blocklist cache, initializing the cache directory and compiling regex patterns for parsing blocklist formats (e.g., hosts files, AdBlock).
func NewBlocklistCache() *BlocklistCache {
cacheDir := CacheDirName
os.MkdirAll(cacheDir, 0755)
// Compile regex patterns for better parsing
domainRegex := regexp.MustCompile(`^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(\.[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$`)
ipPatterns := []*regexp.Regexp{
regexp.MustCompile(`^0\.0\.0\.0\s+(.+)$`),
regexp.MustCompile(`^127\.0\.0\.1\s+(.+)$`),
regexp.MustCompile(`^::1?\s+(.+)$`),
regexp.MustCompile(`^::0?\s+(.+)$`),
regexp.MustCompile(`^0::0\s+(.+)$`),
regexp.MustCompile(`^255\.255\.255\.255\s+(.+)$`),
}
commentPatterns := []*regexp.Regexp{
regexp.MustCompile(`^\s*#`),
regexp.MustCompile(`^\s*!`),
regexp.MustCompile(`^\s*//`),
regexp.MustCompile(`^\s*;`),
}
return &BlocklistCache{
cacheDir: cacheDir,
domainRegex: domainRegex,
ipPatterns: ipPatterns,
commentPatterns: commentPatterns,
}
}
The getCachePath function generates a cache file path using an MD5 hash of the blocklist URL. The isCacheValid function checks if a cache file is within the TTL (24 hours).
func (bc *BlocklistCache) getCachePath(url string) string {
hash := fmt.Sprintf("%x", md5.Sum([]byte(url)))
return filepath.Join(bc.cacheDir, fmt.Sprintf("blocklist_%s.txt", hash))
}
func (bc *BlocklistCache) isCacheValid(cachePath string) bool {
info, err := os.Stat(cachePath)
if err != nil {
return false
}
return time.Since(info.ModTime()) < BlocklistCacheTTL
}
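As an illustration (the URL here is hypothetical), the cache file name is just the hex MD5 digest of the URL inside the cache directory, so repeated runs map the same URL to the same file.
// Sketch: cache paths are deterministic per URL
func exampleCachePath() {
bc := NewBlocklistCache()
fmt.Println(bc.getCachePath("https://example.com/hosts"))
// prints: blocklist_cache/blocklist_<32 hex chars>.txt
}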
The downloadAndCacheBlocklist function downloads a blocklist, caches it, and returns its contents, falling back to the cache if the download fails. The loadFromCache function reads cached content.
func (bc *BlocklistCache) downloadAndCacheBlocklist(ctx context.Context, url string) []string {
cachePath := bc.getCachePath(url)
// Try to use cached version first
if bc.isCacheValid(cachePath) {
if content, err := os.ReadFile(cachePath); err == nil {
return strings.Split(string(content), "\n")
}
}
// Download fresh content
ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
log.Printf("Error creating request for %s: %v", url, err)
return bc.loadFromCache(cachePath)
}
client := &http.Client{Timeout: 30 * time.Second}
resp, err := client.Do(req)
if err != nil {
log.Printf("Error downloading blocklist %s: %v", url, err)
return bc.loadFromCache(cachePath)
}
defer resp.Body.Close()
content, err := io.ReadAll(resp.Body)
if err != nil {
log.Printf("Error reading response from %s: %v", url, err)
return bc.loadFromCache(cachePath)
}
// Cache the content
if err := os.WriteFile(cachePath, content, 0644); err != nil {
log.Printf("Error caching blocklist %s: %v", url, err)
}
return strings.Split(string(content), "\n")
}
func (bc *BlocklistCache) loadFromCache(cachePath string) []string {
if content, err := os.ReadFile(cachePath); err == nil {
return strings.Split(string(content), "\n")
}
return []string{}
}
The isValidDomain function validates domain names using a regex and length checks. The extractDomainsFromLine function parses a blocklist line, handling IP-based formats, comments, and AdBlock syntax.
func (bc *BlocklistCache) isValidDomain(domain string) bool {
if len(domain) == 0 || len(domain) > 253 {
return false
}
// Check for valid characters and format
return bc.domainRegex.MatchString(domain)
}
func (bc *BlocklistCache) extractDomainsFromLine(line string) []string {
line = strings.TrimSpace(line)
if line == "" {
return nil
}
// Skip comments
for _, pattern := range bc.commentPatterns {
if pattern.MatchString(line) {
return nil
}
}
var domains []string
// Try IP-based patterns first
for _, pattern := range bc.ipPatterns {
if matches := pattern.FindStringSubmatch(line); len(matches) > 1 {
// Extract all domains from the match
domainPart := strings.TrimSpace(matches[1])
// Handle inline comments
if idx := strings.Index(domainPart, "#"); idx != -1 {
domainPart = strings.TrimSpace(domainPart[:idx])
}
if idx := strings.Index(domainPart, "//"); idx != -1 {
domainPart = strings.TrimSpace(domainPart[:idx])
}
// Split by whitespace to handle multiple domains
parts := strings.Fields(domainPart)
for _, part := range parts {
part = strings.ToLower(strings.TrimSpace(part))
if bc.isValidDomain(part) {
domains = append(domains, part)
}
}
return domains
}
}
// Handle domain-only lines (e.g., AdBlock format)
if strings.Contains(line, "||") {
// AdBlock format: ||domain.com^
line = strings.ReplaceAll(line, "||", "")
line = strings.ReplaceAll(line, "^", "")
line = strings.TrimSpace(line)
}
// Remove protocol prefixes
line = strings.TrimPrefix(line, "http://")
line = strings.TrimPrefix(line, "https://")
line = strings.TrimPrefix(line, "www.")
// Split and validate
parts := strings.Fields(line)
for _, part := range parts {
part = strings.ToLower(strings.TrimSpace(part))
// Remove trailing punctuation
part = strings.TrimRightFunc(part, func(r rune) bool {
return !unicode.IsLetter(r) && !unicode.IsDigit(r) && r != '-' && r != '.'
})
if bc.isValidDomain(part) {
domains = append(domains, part)
}
}
return domains
}
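To make the parsing rules concrete, here is a small sketch running the extractor over a few hypothetical lines in the formats handled above (hosts-style, AdBlock, comments, and bare URLs).
// Sketch: what extractDomainsFromLine pulls out of common formats
func exampleBlocklistParsing() {
bc := NewBlocklistCache()
lines := []string{
"0.0.0.0 ads.example.com tracker.example.net # inline comment",
"||banner.example.org^",
"# just a comment",
"https://www.promo.example.com/",
}
for _, line := range lines {
fmt.Printf("%q -> %v\n", line, bc.extractDomainsFromLine(line))
}
// -> [ads.example.com tracker.example.net]
// -> [banner.example.org]
// -> []
// -> [promo.example.com]
}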
The UpdateBlocklists function updates all blocklists concurrently using the worker pool, storing domains in a new sync.Map. The processBlocklistContent function processes blocklist lines, extracting and storing domains.
func (bc *BlocklistCache) UpdateBlocklists(ctx context.Context, pool *pond.WorkerPool) {
bc.mu.Lock()
defer bc.mu.Unlock()
newDomains := sync.Map{}
var wg sync.WaitGroup
for _, url := range BlocklistURLs {
wg.Add(1)
url := url // capture loop variable
pool.Submit(func() {
defer wg.Done()
content := bc.downloadAndCacheBlocklist(ctx, url)
bc.processBlocklistContent(content, &newDomains)
})
}
wg.Wait()
// Replace the domains map
bc.domains = newDomains
bc.lastUpdate = time.Now()
// Count entries so the log line reports how many domains were loaded
domainCount := 0
bc.domains.Range(func(_, _ any) bool { domainCount++; return true })
log.Printf("Updated blocklist with %d domains loaded", domainCount)
}
func (bc *BlocklistCache) processBlocklistContent(content []string, domains *sync.Map) {
for _, line := range content {
extractedDomains := bc.extractDomainsFromLine(line)
for _, domain := range extractedDomains {
domains.Store(domain, true)
}
}
}
The IsBlocked function checks if a domain or its parent domains are in the blocklist, triggering an asynchronous update if the cache is stale.
func (bc *BlocklistCache) IsBlocked(domain Domain) bool {
bc.mu.RLock()
shouldUpdate := time.Since(bc.lastUpdate) > BlocklistCacheTTL
bc.mu.RUnlock()
if shouldUpdate {
// Non-blocking update - use goroutine
go func() {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()
pool := pond.New(MaxWorkers, MaxConcurrentTasks)
defer pool.StopAndWait()
bc.UpdateBlocklists(ctx, pool)
}()
}
domain = strings.ToLower(domain)
_, blocked := bc.domains.Load(domain)
// Also check parent domains
if !blocked {
parts := strings.Split(domain, ".")
for i := 1; i < len(parts); i++ {
parentDomain := strings.Join(parts[i:], ".")
if _, parentBlocked := bc.domains.Load(parentDomain); parentBlocked {
blocked = true
break
}
}
}
return blocked
}
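One consequence of the parent-domain walk is that blocking a registrable domain covers all of its subdomains. A small sketch with a hand-inserted entry (lastUpdate is set so the sketch does not kick off a live blocklist refresh):
// Sketch: a blocked parent domain also blocks its subdomains
func exampleParentBlocking() {
bc := NewBlocklistCache()
bc.lastUpdate = time.Now() // prevent a background update in this sketch
bc.domains.Store("doubleclick.net", true)
fmt.Println(bc.IsBlocked("stats.g.doubleclick.net")) // true, via the parent walk
fmt.Println(bc.IsBlocked("example.org"))             // false
}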
The NewDNSServer function initializes a DNS server instance, setting up the LRU cache, blocklist cache, and worker pool.
func NewDNSServer() *DNSServerStruct {
// Calculate DNS cache entry size
dnsSizeCalc := func(key CacheKey, value []byte) int {
return len(key) + len(value)
}
return &DNSServerStruct{
dnsCache: NewLRUCache(CacheMaxSize, dnsSizeCalc),
blocklistCache: NewBlocklistCache(),
pool: pond.New(MaxWorkers, MaxConcurrentTasks),
}
}
The encodeDomainName function converts a domain name into DNS packet format, adding length prefixes and a null terminator. The decodeDomainName function parses a domain name from a packet, handling compression pointers.
func encodeDomainName(domain string) []byte {
if domain == "" {
return []byte{0}
}
var buf bytes.Buffer
parts := strings.Split(domain, ".")
for _, part := range parts {
if len(part) > 63 {
part = part[:63] // Truncate if too long
}
buf.WriteByte(byte(len(part)))
buf.WriteString(part)
}
buf.WriteByte(0) // null terminator
return buf.Bytes()
}
func decodeDomainName(data []byte, offset int) (string, int, error) {
if offset >= len(data) {
return "", offset, fmt.Errorf("offset out of bounds")
}
var parts []string
originalOffset := offset
jumped := false
jumps := 0
for {
if offset >= len(data) {
return "", originalOffset, fmt.Errorf("unexpected end of data")
}
length := data[offset]
// Check for compression (pointer)
if length&0xC0 == 0xC0 {
if offset+1 >= len(data) {
return "", originalOffset, fmt.Errorf("incomplete compression pointer")
}
// Prevent infinite loops
jumps++
if jumps > 10 {
return "", originalOffset, fmt.Errorf("too many compression jumps")
}
pointer := int(binary.BigEndian.Uint16(data[offset:offset+2]) & 0x3FFF)
if !jumped {
originalOffset = offset + 2
jumped = true
}
offset = pointer
continue
}
if length == 0 {
break
}
if length > 63 {
return "", originalOffset, fmt.Errorf("invalid label length")
}
offset++
if offset+int(length) > len(data) {
return "", originalOffset, fmt.Errorf("label extends beyond data")
}
parts = append(parts, string(data[offset:offset+int(length)]))
offset += int(length)
}
if !jumped {
originalOffset = offset + 1
}
return strings.Join(parts, "."), originalOffset, nil
}
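For intuition about the wire format: a name like www.example.com becomes length-prefixed labels terminated by a zero byte (3 'www' 7 'example' 3 'com' 0). This sketch round-trips a name through the two functions above.
// Sketch: encode a name, then decode it back from offset 0
func exampleNameRoundTrip() {
encoded := encodeDomainName("www.example.com")
fmt.Printf("% x\n", encoded) // 03 77 77 77 07 ... 03 63 6f 6d 00
name, next, err := decodeDomainName(encoded, 0)
if err != nil {
log.Fatal(err)
}
fmt.Println(name, next) // "www.example.com" and the offset just past the name
}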
The buildQuery function constructs a DNS query packet with a transaction ID derived from the current time. The parseQuery function parses an incoming query, extracting the domain, type, class, and transaction ID.
func buildQuery(domain Domain, queryType QueryType) ([]byte, TransactionID) {
transactionID := TransactionID(time.Now().UnixNano() & 0xFFFF)
var buf bytes.Buffer
// Header
binary.Write(&buf, binary.BigEndian, transactionID)
binary.Write(&buf, binary.BigEndian, uint16(0x0100)) // Standard query
binary.Write(&buf, binary.BigEndian, uint16(1)) // qdcount
binary.Write(&buf, binary.BigEndian, uint16(0)) // ancount
binary.Write(&buf, binary.BigEndian, uint16(0)) // nscount
binary.Write(&buf, binary.BigEndian, uint16(0)) // arcount
// Question
buf.Write(encodeDomainName(domain))
binary.Write(&buf, binary.BigEndian, queryType)
binary.Write(&buf, binary.BigEndian, uint16(ClassIN))
return buf.Bytes(), transactionID
}
func parseQuery(data []byte) (Domain, QueryType, QueryClass, TransactionID, error) {
if len(data) < 12 {
return "", 0, 0, 0, fmt.Errorf("invalid DNS query: too short")
}
header := DNSHeader{
ID: binary.BigEndian.Uint16(data[0:2]),
Flags: binary.BigEndian.Uint16(data[2:4]),
QDCount: binary.BigEndian.Uint16(data[4:6]),
ANCount: binary.BigEndian.Uint16(data[6:8]),
NSCount: binary.BigEndian.Uint16(data[8:10]),
ARCount: binary.BigEndian.Uint16(data[10:12]),
}
if header.QDCount == 0 {
return "", 0, 0, 0, fmt.Errorf("no questions in query")
}
// Parse first question
domain, offset, err := decodeDomainName(data, 12)
if err != nil {
return "", 0, 0, 0, fmt.Errorf("invalid domain name: %w", err)
}
if offset+4 > len(data) {
return "", 0, 0, 0, fmt.Errorf("incomplete question section")
}
queryType := QueryType(binary.BigEndian.Uint16(data[offset : offset+2]))
queryClass := QueryClass(binary.BigEndian.Uint16(data[offset+2 : offset+4]))
return domain, queryType, queryClass, TransactionID(header.ID), nil
}
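Since buildQuery and parseQuery are inverses over the header and question section, a cheap round-trip self-test is possible (a sketch using the definitions above):
// Sketch: the domain, type, class, and transaction ID survive a round trip
func exampleQueryRoundTrip() {
query, id := buildQuery("example.com", TypeA)
domain, qtype, qclass, parsedID, err := parseQuery(query)
if err != nil {
log.Fatal(err)
}
fmt.Println(domain == "example.com", qtype == TypeA, qclass == ClassIN, parsedID == id) // all true
}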
The queryUpstream function sends a query to the upstream DNS server (e.g., 9.9.9.9) and returns the response, using a timeout and context deadline.
func (ds *DNSServerStruct) queryUpstream(ctx context.Context, domain Domain, queryType QueryType) ([]byte, error) {
query, _ := buildQuery(domain, queryType)
conn, err := net.DialTimeout("udp", DNSServer, RequestTimeout)
if err != nil {
return nil, fmt.Errorf("failed to connect to upstream DNS: %w", err)
}
defer conn.Close()
// Set deadline for the entire operation
deadline, ok := ctx.Deadline()
if ok {
conn.SetDeadline(deadline)
}
if _, err := conn.Write(query); err != nil {
return nil, fmt.Errorf("failed to send query: %w", err)
}
response := make([]byte, UDPBufferSize)
n, err := conn.Read(response)
if err != nil {
return nil, fmt.Errorf("failed to read response: %w", err)
}
return response[:n], nil
}
These functions generate DNS response packets for blocked domains (NXDOMAIN), server failures (SERVFAIL), and format errors (FORMERR), respectively, ensuring proper header flags and question echoing.
func buildBlockedResponse(transactionID TransactionID, domain string, queryType QueryType, queryClass QueryClass) []byte {
var buf bytes.Buffer
// Header - proper NXDOMAIN response
binary.Write(&buf, binary.BigEndian, transactionID)
// Flags: QR=1 (response), Opcode=0 (query), AA=0, TC=0, RD=1, RA=1, Z=0, RCODE=3 (NXDOMAIN)
flags := FlagQR | FlagRD | FlagRA | RCodeNXDomain
binary.Write(&buf, binary.BigEndian, flags)
binary.Write(&buf, binary.BigEndian, uint16(1)) // QDCount - echo the question
binary.Write(&buf, binary.BigEndian, uint16(0)) // ANCount - no answers
binary.Write(&buf, binary.BigEndian, uint16(0)) // NSCount - no authority records
binary.Write(&buf, binary.BigEndian, uint16(0)) // ARCount - no additional records
// Question section - echo the original question
buf.Write(encodeDomainName(domain))
binary.Write(&buf, binary.BigEndian, queryType)
binary.Write(&buf, binary.BigEndian, queryClass)
return buf.Bytes()
}
func buildServerFailureResponse(transactionID TransactionID, domain string, queryType QueryType, queryClass QueryClass) []byte {
var buf bytes.Buffer
// Header
binary.Write(&buf, binary.BigEndian, transactionID)
// Flags: QR=1 (response), Opcode=0 (query), AA=0, TC=0, RD=1, RA=1, Z=0, RCODE=2 (SERVFAIL)
flags := FlagQR | FlagRD | FlagRA | RCodeServFail
binary.Write(&buf, binary.BigEndian, flags)
binary.Write(&buf, binary.BigEndian, uint16(1)) // QDCount
binary.Write(&buf, binary.BigEndian, uint16(0)) // ANCount
binary.Write(&buf, binary.BigEndian, uint16(0)) // NSCount
binary.Write(&buf, binary.BigEndian, uint16(0)) // ARCount
// Question section
buf.Write(encodeDomainName(domain))
binary.Write(&buf, binary.BigEndian, queryType)
binary.Write(&buf, binary.BigEndian, queryClass)
return buf.Bytes()
}
func buildFormErrorResponse(transactionID TransactionID) []byte {
var buf bytes.Buffer
// Header
binary.Write(&buf, binary.BigEndian, transactionID)
// Flags: QR=1 (response), Opcode=0 (query), AA=0, TC=0, RD=0, RA=0, Z=0, RCODE=1 (FORMERR)
flags := FlagQR | RCodeFormErr
binary.Write(&buf, binary.BigEndian, flags)
binary.Write(&buf, binary.BigEndian, uint16(0)) // QDCount
binary.Write(&buf, binary.BigEndian, uint16(0)) // ANCount
binary.Write(&buf, binary.BigEndian, uint16(0)) // NSCount
binary.Write(&buf, binary.BigEndian, uint16(0)) // ARCount
return buf.Bytes()
}
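A quick way to convince yourself the flag packing is right: the RCODE occupies the low four bits of the flags word, so a blocked response should decode straight back to NXDOMAIN. A sketch:
// Sketch: decode the flags of a blocked response and confirm QR and RCODE
func exampleBlockedFlags() {
resp := buildBlockedResponse(0x1234, "ads.example.com", TypeA, ClassIN)
flags := binary.BigEndian.Uint16(resp[2:4])
fmt.Println(flags&FlagQR != 0)             // true: it is a response
fmt.Println(flags&0x000F == RCodeNXDomain) // true: RCODE = 3
}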
The validateQuery function checks the validity of a DNS query, ensuring the domain name, query type, and class meet standards.
func validateQuery(domain string, queryType QueryType, queryClass QueryClass) error {
// Validate domain name
if len(domain) == 0 {
return fmt.Errorf("empty domain name")
}
if len(domain) > 253 {
return fmt.Errorf("domain name too long")
}
// Check for valid characters in domain
for _, r := range domain {
if !unicode.IsLetter(r) && !unicode.IsDigit(r) && r != '.' && r != '-' {
return fmt.Errorf("invalid character in domain name: %c", r)
}
}
// Validate query class (we only support IN class and ANY)
if queryClass != ClassIN && queryClass != ClassANY {
return fmt.Errorf("unsupported query class: %d", queryClass)
}
// Validate query type (zero is never a valid type; uint16 already bounds the upper range)
if queryType == 0 {
return fmt.Errorf("invalid query type: %d", queryType)
}
return nil
}
The extractTTLFromResponse function parses a DNS response to extract the TTL from the first answer, defaulting to 5 minutes if unavailable or invalid.
func extractTTLFromResponse(response []byte) time.Duration {
if len(response) < 12 {
return DefaultDNSTTL
}
// Parse header to get answer count
anCount := binary.BigEndian.Uint16(response[6:8])
if anCount == 0 {
return DefaultDNSTTL
}
// skipName advances past a (possibly compressed) domain name
skipName := func(off int) int {
for off < len(response) {
length := response[off]
if length&0xC0 == 0xC0 {
return off + 2 // compression pointer: two bytes end the name
}
if length == 0 {
return off + 1 // root label ends the name
}
off += int(length) + 1
}
return off
}
// Skip the question section
offset := 12
qdCount := binary.BigEndian.Uint16(response[4:6])
for i := uint16(0); i < qdCount && offset < len(response); i++ {
offset = skipName(offset)
offset += 4 // Skip type and class
}
// First answer: name, then type (2), class (2), TTL (4)
offset = skipName(offset)
if offset+8 <= len(response) {
ttl := binary.BigEndian.Uint32(response[offset+4 : offset+8])
if ttl > 0 && ttl < 86400 { // Max 24 hours
return time.Duration(ttl) * time.Second
}
}
return DefaultDNSTTL
}
The handleClient function processes incoming DNS queries: it parses and validates them, checks the blocklist and cache, queries upstream if needed, and sends responses. It handles errors by sending appropriate response codes.
func (ds *DNSServerStruct) handleClient(ctx context.Context, data []byte, clientAddr *net.UDPAddr) {
// Parse the query
domain, queryType, queryClass, clientTransactionID, err := parseQuery(data)
if err != nil {
log.Printf("Error parsing query from %s: %v", clientAddr, err)
// Best effort: recover the client's transaction ID so the error can be matched
if len(data) >= 2 {
clientTransactionID = TransactionID(binary.BigEndian.Uint16(data[0:2]))
}
// Send format error response
response := buildFormErrorResponse(clientTransactionID)
ds.conn.WriteToUDP(response, clientAddr)
return
}
// Validate the query
if err := validateQuery(domain, queryType, queryClass); err != nil {
log.Printf("Invalid query from %s: %v", clientAddr, err)
response := buildFormErrorResponse(clientTransactionID)
ds.conn.WriteToUDP(response, clientAddr)
return
}
// Check if domain is blocked
if ds.blocklistCache.IsBlocked(domain) {
log.Printf("Blocked domain requested: %s from %s", domain, clientAddr)
response := buildBlockedResponse(clientTransactionID, domain, queryType, queryClass)
ds.conn.WriteToUDP(response, clientAddr)
return
}
// Generate cache key
cacheKey := fmt.Sprintf("%s:%d:%d", strings.ToLower(domain), queryType, queryClass)
// Check cache
if cachedResponse, found := ds.dnsCache.Get(cacheKey); found {
// Update transaction ID in cached response
if len(cachedResponse) >= 2 {
responseCopy := make([]byte, len(cachedResponse))
copy(responseCopy, cachedResponse)
binary.BigEndian.PutUint16(responseCopy[0:2], uint16(clientTransactionID))
ds.conn.WriteToUDP(responseCopy, clientAddr)
return
}
}
// Query upstream DNS server
response, err := ds.queryUpstream(ctx, domain, queryType)
if err != nil {
log.Printf("Error querying upstream for %s: %v", domain, err)
// Send server failure response
response := buildServerFailureResponse(clientTransactionID, domain, queryType, queryClass)
ds.conn.WriteToUDP(response, clientAddr)
return
}
// Validate response size
if len(response) < 12 {
log.Printf("Invalid response size from upstream for %s", domain)
response := buildServerFailureResponse(clientTransactionID, domain, queryType, queryClass)
ds.conn.WriteToUDP(response, clientAddr)
return
}
// Extract TTL for cache management
cacheTTL := extractTTLFromResponse(response)
// Cache the response (with original transaction ID)
responseCopy := make([]byte, len(response))
copy(responseCopy, response)
ds.dnsCache.Set(cacheKey, responseCopy, cacheTTL)
// Update transaction ID for client response
binary.BigEndian.PutUint16(response[0:2], uint16(clientTransactionID))
// Send response to client
_, err = ds.conn.WriteToUDP(response, clientAddr)
if err != nil {
log.Printf("Error sending response to %s: %v", clientAddr, err)
}
}
The Start function initializes the DNS server, starting the blocklist update in the background, binding to the UDP port (853), and processing incoming queries using the worker pool. It supports graceful shutdown via context cancellation.
func (ds *DNSServerStruct) Start(ctx context.Context) error {
log.Println("Initializing blocklist cache...")
// Initialize blocklist in background
go func() {
initCtx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
defer cancel()
ds.blocklistCache.UpdateBlocklists(initCtx, pond.New(10, 1000))
log.Println("Blocklist initialization completed")
}()
addr, err := net.ResolveUDPAddr("udp", ServerPort)
if err != nil {
return fmt.Errorf("failed to resolve UDP address: %w", err)
}
ds.conn, err = net.ListenUDP("udp", addr)
if err != nil {
return fmt.Errorf("failed to bind to port: %w", err)
}
defer ds.conn.Close()
// Set read buffer size for better performance
if err := ds.conn.SetReadBuffer(UDPBufferSize * 100); err != nil {
log.Printf("Warning: failed to set read buffer size: %v", err)
}
if err := ds.conn.SetWriteBuffer(UDPBufferSize * 100); err != nil {
log.Printf("Warning: failed to set write buffer size: %v", err)
}
log.Printf("Enhanced DNS server running on port %s...", ServerPort)
log.Printf("Supporting all standard DNS record types including DNSSEC and DANE")
buffer := make([]byte, UDPBufferSize)
for {
select {
case <-ctx.Done():
log.Println("Server shutting down...")
return ctx.Err()
default:
// Set read timeout to allow context checking
ds.conn.SetReadDeadline(time.Now().Add(1 * time.Second))
n, clientAddr, err := ds.conn.ReadFromUDP(buffer)
if err != nil {
// Check if it's a timeout (expected for context checking)
if netErr, ok := err.(net.Error); ok && netErr.Timeout() {
continue
}
log.Printf("Error reading UDP packet: %v", err)
continue
}
// Validate minimum packet size
if n < 12 {
log.Printf("Received packet too small from %s: %d bytes", clientAddr, n)
continue
}
// Copy data for concurrent processing
data := make([]byte, n)
copy(data, buffer[:n])
// Submit to worker pool for processing
ds.pool.Submit(func() {
requestCtx, cancel := context.WithTimeout(ctx, RequestTimeout)
defer cancel()
ds.handleClient(requestCtx, data, clientAddr)
})
}
}
}
The Stop function gracefully shuts down the server, stopping the worker pool and closing the UDP connection.
func (ds *DNSServerStruct) Stop() {
log.Println("Stopping DNS server...")
if ds.pool != nil {
log.Println("Stopping worker pool...")
ds.pool.StopAndWait()
}
if ds.conn != nil {
log.Println("Closing UDP connection...")
ds.conn.Close()
}
log.Println("DNS server stopped")
}
The GetStats function returns server statistics, including worker pool metrics like running tasks, idle workers, and completed tasks.
func (ds *DNSServerStruct) GetStats() map[string]interface{} {
stats := make(map[string]interface{})
// Worker pool stats
if ds.pool != nil {
stats["worker_pool_running"] = ds.pool.Running()
stats["worker_pool_idle"] = ds.pool.IdleWorkers()
stats["worker_pool_submitted"] = ds.pool.SubmittedTasks()
stats["worker_pool_completed"] = ds.pool.CompletedTasks()
stats["worker_pool_failed"] = ds.pool.FailedTasks()
}
return stats
}
The main function sets up a context, creates a DNS server instance, and starts it, handling graceful shutdown. It logs supported features like DNSSEC and DANE.
func main() {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
server := NewDNSServer()
defer server.Stop()
// Handle graceful shutdown: cancel the context on SIGINT/SIGTERM
go func() {
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM)
<-sigCh
cancel()
}()
log.Println("Starting Enhanced DNS Server...")
log.Println("Features:")
log.Println("- Complete DNS record type support (A, AAAA, MX, TXT, NS, SOA, etc.)")
log.Println("- DNSSEC record support (DNSKEY, RRSIG, NSEC, DS, etc.)")
log.Println("- DANE support (TLSA, SVCB, HTTPS records)")
log.Println("- Advanced blocklist parsing")
log.Println("- LRU caching with TTL-based expiration")
log.Println("- Concurrent request processing")
if err := server.Start(ctx); err != nil && err != context.Canceled {
log.Fatalf("Server error: %v", err)
}
}
At this point I added a Running method to pool.go in the github.com/alitto/pond package I imported, wrapping its RunningWorkers method:
func (p *WorkerPool) Running() interface{} {
return p.RunningWorkers()
}
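If you would rather not patch the library source at all, note that the wrapper above just delegates to pond's existing RunningWorkers method, so GetStats could call that directly instead:
// Alternative: skip the custom wrapper and use pond's own accessor
stats["worker_pool_running"] = ds.pool.RunningWorkers()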
I also made a simple stress-testing and feature-testing script in Python to test the server. Feel free to upgrade it later with aiohttp.
You need to run it in a virtual env first:
pip3 install dnspython requests matplotlib
import socket
import time
import random
import dns.message
import dns.query
import dns.rdatatype
import concurrent.futures
import statistics
import requests
import matplotlib.pyplot as plt
from datetime import datetime
# Configuration
DNS_SERVER = '127.0.0.1' # Your server address
DNS_PORT = 853 # Your server port
TEST_DOMAINS = [
'example.com',
'google.com',
'github.com',
'wikipedia.org',
'amazon.com',
'microsoft.com',
'apple.com',
'cloudflare.com',
'reddit.com',
'twitter.com'
]
BLOCKED_TEST_DOMAINS = [
'doubleclick.net', # Typically in blocklists
'adservice.google.com',
'tracking.example.com'
]
RECORD_TYPES = [
'A', 'AAAA', 'MX', 'TXT', 'NS', 'SOA',
'DNSKEY', 'DS', 'RRSIG', 'HTTPS'
]
CONCURRENT_REQUESTS = 5000 # Number of concurrent requests for load testing
REQUEST_COUNT = 99999 # Total requests for performance testing
def test_basic_query(domain, record_type='A'):
"""Test basic DNS query functionality"""
try:
query = dns.message.make_query(domain, record_type)
response = dns.query.udp(query, DNS_SERVER, port=DNS_PORT, timeout=5)
print(f"\nTest Query: {domain} ({record_type})")
print(f"Response Code: {dns.rcode.to_text(response.rcode())}")
if response.answer:
print("Answers:")
for answer in response.answer:
print(answer)
elif response.authority:
print("Authority:")
for auth in response.authority:
print(auth)
else:
print("No answers in response")
return True
except Exception as e:
print(f"Error testing {domain} ({record_type}): {str(e)}")
return False
def test_blocked_domains():
"""Test if blocked domains are properly handled"""
print("\n=== Testing Blocked Domains ===")
results = []
for domain in BLOCKED_TEST_DOMAINS:
try:
query = dns.message.make_query(domain, 'A')
response = dns.query.udp(query, DNS_SERVER, port=DNS_PORT, timeout=5)
print(f"\nTest Blocked Domain: {domain}")
print(f"Response Code: {dns.rcode.to_text(response.rcode())}")
# Should be NXDOMAIN for blocked domains
is_blocked = response.rcode() == dns.rcode.NXDOMAIN
results.append(is_blocked)
print(f"Properly blocked: {is_blocked}")
except Exception as e:
print(f"Error testing blocked domain {domain}: {str(e)}")
results.append(False)
success_rate = sum(results) / len(results) * 100
print(f"\nBlocked domain test success rate: {success_rate:.2f}%")
return success_rate
def test_record_types():
"""Test support for different DNS record types"""
print("\n=== Testing Record Type Support ===")
results = []
for record_type in RECORD_TYPES:
domain = random.choice(TEST_DOMAINS)
try:
query = dns.message.make_query(domain, record_type)
response = dns.query.udp(query, DNS_SERVER, port=DNS_PORT, timeout=5)
print(f"\nTest Record Type: {record_type} for {domain}")
print(f"Response Code: {dns.rcode.to_text(response.rcode())}")
# Consider it successful if we get a response, even if no data
is_success = response.rcode() in [dns.rcode.NOERROR, dns.rcode.NXDOMAIN]
results.append(is_success)
print(f"Supported: {is_success}")
except Exception as e:
print(f"Error testing record type {record_type}: {str(e)}")
results.append(False)
success_rate = sum(results) / len(results) * 100
print(f"\nRecord type support success rate: {success_rate:.2f}%")
return success_rate
def measure_query_time(domain, record_type='A'):
"""Measure the time taken for a single DNS query"""
start_time = time.time()
try:
query = dns.message.make_query(domain, record_type)
response = dns.query.udp(query, DNS_SERVER, port=DNS_PORT, timeout=5)
elapsed = (time.time() - start_time) * 1000 # Convert to milliseconds
if response.rcode() != dns.rcode.NOERROR:
return None # Don't count failed queries in performance metrics
return elapsed
except Exception:
return None
def test_performance():
"""Test the performance of the DNS server"""
print("\n=== Testing Performance ===")
latencies = []
successes = 0
# Warm up the cache
for domain in TEST_DOMAINS:
measure_query_time(domain)
# Measure performance
for _ in range(REQUEST_COUNT):
domain = random.choice(TEST_DOMAINS)
latency = measure_query_time(domain)
if latency is not None:
latencies.append(latency)
successes += 1
if latencies:
avg_latency = statistics.mean(latencies)
min_latency = min(latencies)
max_latency = max(latencies)
std_dev = statistics.stdev(latencies) if len(latencies) > 1 else 0
print(f"\nPerformance Results ({successes} successful requests):")
print(f"Average latency: {avg_latency:.2f} ms")
print(f"Minimum latency: {min_latency:.2f} ms")
print(f"Maximum latency: {max_latency:.2f} ms")
print(f"Standard deviation: {std_dev:.2f} ms")
# Plot histogram
plt.hist(latencies, bins=20)
plt.title('DNS Query Latency Distribution')
plt.xlabel('Latency (ms)')
plt.ylabel('Frequency')
plt.savefig('dns_latency_distribution.png')
plt.close()
return avg_latency
else:
print("No successful requests to measure performance")
return None
def test_concurrent_requests():
"""Test how the server handles concurrent requests"""
print("\n=== Testing Concurrent Requests ===")
latencies = []
with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENT_REQUESTS) as executor:
futures = [executor.submit(measure_query_time, random.choice(TEST_DOMAINS))
for _ in range(CONCURRENT_REQUESTS)]
for future in concurrent.futures.as_completed(futures):
latency = future.result()
if latency is not None:
latencies.append(latency)
if latencies:
avg_latency = statistics.mean(latencies)
min_latency = min(latencies)
max_latency = max(latencies)
print(f"\nConcurrency Test Results ({len(latencies)} successful requests):")
print(f"Average latency: {avg_latency:.2f} ms")
print(f"Minimum latency: {min_latency:.2f} ms")
print(f"Maximum latency: {max_latency:.2f} ms")
return avg_latency
else:
print("No successful requests in concurrency test")
return None
def test_cache_performance():
"""Test the caching performance"""
print("\n=== Testing Cache Performance ===")
# First query (should be cache miss)
start_time = time.time()
domain = random.choice(TEST_DOMAINS)
measure_query_time(domain)
first_query_time = (time.time() - start_time) * 1000
# Second query (should be cache hit)
start_time = time.time()
measure_query_time(domain)
second_query_time = (time.time() - start_time) * 1000
print(f"\nCache Performance:")
print(f"First query (cache miss): {first_query_time:.2f} ms")
print(f"Second query (cache hit): {second_query_time:.2f} ms")
print(f"Improvement: {(first_query_time - second_query_time):.2f} ms ({((first_query_time - second_query_time)/first_query_time*100):.2f}% faster)")
return first_query_time, second_query_time
def run_full_test_suite():
"""Run all tests and generate a report"""
print("=== Starting DNS Server Test Suite ===")
print(f"Testing server at {DNS_SERVER}:{DNS_PORT}")
print(f"Start time: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n")
test_results = {}
# Feature Tests
print("\n=== Running Feature Tests ===")
test_results['blocked_domains'] = test_blocked_domains()
test_results['record_types'] = test_record_types()
# Basic functionality test
print("\n=== Running Basic Functionality Test ===")
for domain in TEST_DOMAINS[:3]: # Test first 3 domains
test_basic_query(domain)
# Performance Tests
print("\n=== Running Performance Tests ===")
test_results['single_thread_perf'] = test_performance()
test_results['concurrent_perf'] = test_concurrent_requests()
test_results['cache_perf'] = test_cache_performance()
# Generate report
print("\n=== Test Summary ===")
print(f"Blocked domain success rate: {test_results['blocked_domains']:.2f}%")
print(f"Record type support success rate: {test_results['record_types']:.2f}%")
if test_results['single_thread_perf']:
print(f"\nAverage query latency: {test_results['single_thread_perf']:.2f} ms")
if test_results['concurrent_perf']:
print(f"Average concurrent query latency: {test_results['concurrent_perf']:.2f} ms")
if test_results['cache_perf']:
miss, hit = test_results['cache_perf']
print(f"Cache performance: {hit:.2f} ms (hit) vs {miss:.2f} ms (miss)")
print(f"\nTest suite completed at: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
if __name__ == "__main__":
run_full_test_suite()
➜ dizer python3 stress.py
=== Starting DNS Server Test Suite ===
Testing server at 127.0.0.1:853
Start time: 2025-06-22 13:15
=== Running Feature Tests ===
=== Testing Blocked Domains ===
Test Blocked Domain: doubleclick.net
Response Code: NXDOMAIN
Properly blocked: True
Test Blocked Domain: adservice.google.com
Response Code: NXDOMAIN
Properly blocked: True
Test Blocked Domain: tracking.example.com
Response Code: NXDOMAIN
Properly blocked: True
Blocked domain test success rate: 100.00%
=== Testing Record Type Support ===
Test Record Type: A for google.com
Response Code: NOERROR
Supported: True
Test Record Type: AAAA for amazon.com
Response Code: NOERROR
Supported: True
Test Record Type: MX for example.com
Response Code: NOERROR
Supported: True
Test Record Type: TXT for github.com
Response Code: NOERROR
Supported: True
Test Record Type: NS for reddit.com
Response Code: NOERROR
Supported: True
Test Record Type: SOA for twitter.com
Response Code: NOERROR
Supported: True
Test Record Type: DNSKEY for github.com
Response Code: NOERROR
Supported: True
Test Record Type: DS for reddit.com
Response Code: NOERROR
Supported: True
Test Record Type: RRSIG for apple.com
Response Code: SERVFAIL
Supported: False
Test Record Type: HTTPS for github.com
Response Code: NOERROR
Supported: True
Record type support success rate: 90.00%
=== Running Basic Functionality Test ===
Test Query: example.com (A)
Response Code: NOERROR
Answers:
example.com. 190 IN A 23.192.228.84
example.com. 190 IN A 96.7.128.198
example.com. 190 IN A 23.215.0.136
example.com. 190 IN A 23.215.0.138
example.com. 190 IN A 96.7.128.175
example.com. 190 IN A 23.192.228.80
Test Query: google.com (A)
Response Code: NOERROR
Answers:
google.com. 230 IN A 216.58.203.14
Test Query: github.com (A)
Response Code: NOERROR
Answers:
github.com. 19 IN A 140.82.112.3
=== Running Performance Tests ===
=== Testing Performance ===
Performance Results (51268 successful requests):
Average latency: 0.33 ms
Minimum latency: 0.15 ms
Maximum latency: 23.15 ms
Standard deviation: 0.45 ms
=== Testing Concurrent Requests ===
Concurrency Test Results (3014 successful requests):
Average latency: 19.74 ms
Minimum latency: 0.40 ms
Maximum latency: 673.84 ms
=== Testing Cache Performance ===
Cache Performance:
First query (cache miss): 0.87 ms
Second query (cache hit): 0.37 ms
Improvement: 0.51 ms (57.85% faster)
=== Test Summary ===
Blocked domain success rate: 100.00%
Record type support success rate: 90.00%
Average query latency: 0.33 ms
Average concurrent query latency: 19.74 ms
Cache performance: 0.37 ms (hit) vs 0.87 ms (miss)
Test suite completed at: 2025-06-22
I am aware that there are some issues; they will get resolved when I get time. Hope you liked this nice Saturday evening code. You are free to use it in your org/home as long as you follow the license.
2025-06-21 09:50:00
Image attribution: MB&F, CC BY-SA 4.0, via Wikimedia Commons
Here's a thought experiment. Say I give you a clock and I tell you it's stopped. Is that useful? Can you use it to tell time?
You'd say no, right? Even though there are two times over the course of the day when the clock is indeed right. A stopped clock doesn't tell time, even if it is 'right' twice a day. Which is to say, being right isn't the same thing as being useful.
Here's a thought experiment. Say I give you a clock that tells time correctly 50% of the time. The other 50% of the time, it's 5:04. Is that useful? Can you use it to tell time?
Surely yes, right? You just look at it over and over again until it's not showing 5:04, and there's the time. If it actually is 5:04, well, that's an edge case, but you can handle that by just waiting a minute. It's not as good as a properly functioning clock, but it is a decent approximation of one.
Here's a thought experiment. Say I give you a clock that tells time correctly 50% of the time. The other 50% of the time, it shows you any other time at random. Is that useful? Can you use it to tell time?
Well, yes, if you think about it, right? You can take a sample of, say, 10 or 20 readings and you know the matching ones are right and the random ones are wrong.
Here's a thought experiment. Say I give you a clock that tells time correctly 80% of the time. The other 20% of the time, it's wrong, but not random; it just unpredictably runs fast or slow over the course of the day, such that it may or may not actually be lined up with the actual standard time, even though readings from it look coherent in isolation. Is that useful? Can you use it to tell time?
Well, not quite, but almost, surely? You can tell time 'with 80% certainty.' That's probably good enough for most applications. You could figure out what the margin of error is and just live with it. Most of us don't really plot out our movements throughout the day down to the minute, right? Our precision in arriving places is more like down to the quarter hour or so.
Here's a thought experiment. Say I give you a clock being operated by a minute, malicious demon. It knows that if it just showed you the wrong time all the time, you'd figure out that it's a broken clock and chuck it. It tells time correctly, oh, about 80% of the time - but you can't know the exact odds, and really they will change based on circumstance. It almost always gives you a time that is plausible. It mostly won't try to insist that it's 5:07 when the sun is directly overhead, for example. But it will, sometimes, be wrong; sometimes significantly. You can't know when or how often.
Is that useful. Can you use it to tell time.
I would argue that no, it isn't; it's not giving you any actual information about the time, not even the probabilistic information that the 80% clock was giving you. If you found yourself in a situation where telling time is actually difficult – say you're scuba diving, or at a very high latitude – the clock could be leading you astray by hours.
But: it is very easy to act like the clock is useful, isn't it? I mean, yeah, sometimes you're late to things. But usually it's nothing too important. Does it, ultimately, really matter what time it is? The clock tells you a time, and that, in a sense, solves the problem you had where you didn't know what time it was.
Is that enough?
2025-06-21 01:40:00
Something I try to be mindful of is how sadly easy the steps towards recreating social media are, despite not wanting to outright do that.
I don’t always get the impression on this side of the web that there’s awareness of this development, which makes sense: Not everyone chose to leave social media services because they were fed up with the concept; many were just displaced or are still there. They see no issues with the social media platform design itself, just the owner or userbase; so having a replica with nicer people and a better owner works well for them.
That’s where I disagree, because I absolutely think some design choices that are fine in a vacuum are absolutely terrible together, encouraging toxic behavior and addiction - which is also why I beg the IndieWeb not to reinvent the torment nexus wheel in their own flavor.
It’s so easy to have a solid space on the web away from it all and slowly creep your way back into the mess just like the social media platforms did. Like hey, what if we added a way to discover writing in a feed? Fine. What if we added likes? Teetering on the line, but fine. Adding comments? That’s now an early version of Facebook. Ability to reblog? That’s a Tumblr clone. Then comes creator income and subscriber benefits. And possibly ads. Obviously almost all those features are great on their own and in theory sound amazing together, but here we are, knowing they’re a slippery slope to weird engagement farming, user manipulation, toxicity and platform enshittification.
I really value that things are different, slower, and some features are intentionally missing or optional. It also reminds me of Far Cry 3’s “definition of insanity” dialogue - we can’t do the same thing over and over, expecting it to turn out differently. There were and are enough clones promising to be X but better, and they’re really not. We have tried adding this or that feature and it was shit.
I’m glad when there are projects and spaces who remain different and purposefully don’t seek to implement most social media features. You really have to draw a line and remember how the switch from forums to social media and Reddit also felt like a good idea with lots of comfort and amazing community opportunities until it wasn’t. Now we have customer support and docs on Discord.
Not here to demonize specific features on their own - it would be silly to imply that one thing alone will ruin a space, but it does affect it. You have to judge if it will replicate known issues in the context it’s embedded in and what it will encourage the userbase to do. It’s easy to put on the rose-colored glasses about it all, just like personalized feed algorithms can be described as great because they show more of what the user wants to see. This doesn’t address the issues around manipulating algorithms, echo chambers and radicalization pipelines, though.
All of that is why I keep being skeptical about implementing “nice to have” features we are used to from other spaces online. It’s desirable not to flatten everything into another version of something - I don’t want X for bloggers or whatever. I’m content as it is.
2025-06-20 08:54:00
I'm not gonna lie to you guys, I've been struggling!
Hold on, not with Japanese, well... sort of, but not with the learning process, I've been struggling with my schedule.
I hate slowing down for a second time this month, but there isn't much that I can do about it.
I haven't stopped doing my daily Anki, with Kaishi 1.5k and RTK 450.
It's been really fun, I particularly enjoy RTK, I love mnemonics!!
Sadly, I haven't had the time to properly watch and process Cure Dolly's classes for the past 4 days. Sorry, 先生!
I tried to watch some classes one night, thinking I could maybe go to sleep a bit later than usual, but I had such a hard time focusing that I'm not even sure I properly absorbed the information.
I've also been skipping my immersion videos.
It really sucks, but it's a bit comforting to know that it's out of my control.
Wow, Skeleton. You sure are a lazy guy! You lasted 2 weeks!
Hey! Calm down! I just struggled a bit this week, that's all.
Plus, I got some great news. Starting next month, quite literally on the 1st of July, my schedule is changing, and I'll be able to study in the morning. I'm a great early bird, so I'll have a much better time studying. Also, it's way easier to have lunch late than to go to bed late.
I won't be starting the next challenge of the "30 Day Japanese" routine tonight. I'll be talking about it in some days, most likely after this weekend!
What other updates should I give you today? Let me think...
Oh yeah! It feels as if my Anki app is alive. On the days I'm struggling with my schedule, it decides to teach me the kanji for the numbers 1 to 10, which I already knew! Strange... but thanks, Anki!
I have some other updates, but they're more related to the blog as a whole, so I shouldn't put them in a "Learning Japanese" post.
Also, please don't kill me, but I didn't realize that the routine I'm following asked us to install Yomitan, and I only did it 2 days ago... It's really useful! It's a pop-up dictionary extension for web browsers.
Here's a screenshot from the installation guide:
You just hover over Japanese text and hold down the shift key. Pretty cool, huh?
That's all that I've been up to these past couple of days, sorry about delaying the next challenge, but it's necessary!
2025-06-20 08:03:00
this zine is free for you to print at home if you like <3 or if you prefer snail mail, you can get a physical copy! 💌