How Kubernetes DNS works internally

Every Kubernetes cluster runs a DNS add-on, almost universally CoreDNS, which has been the default since Kubernetes 1.13. CoreDNS is deployed as a Deployment in the kube-system namespace, fronted by a Service named kube-dns (the name is kept for backwards compatibility with the older kube-dns add-on). Its ClusterIP is carved out of the service CIDR at a predictable address, conventionally 10.96.0.10, and the kubelet writes that address into every Pod’s /etc/resolv.conf.

CoreDNS watches the Kubernetes API through a plugin called kubernetes. Every time a Service or its endpoints are created, updated, or deleted, CoreDNS updates its in-memory zone data accordingly — no static zone files, no reload signals, no downtime.
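The plugin is wired up in CoreDNS’s Corefile, which ships as a ConfigMap in kube-system. A typical default looks like the following sketch (your distribution’s Corefile may differ slightly):

```
.:53 {
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```

The kubernetes block makes CoreDNS authoritative for the cluster.local domain, while forward . /etc/resolv.conf sends every other query to the node’s upstream resolvers.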

Service DNS: the record everyone knows

For a Service named payments in the billing namespace, CoreDNS creates an A record:

payments.billing.svc.cluster.local  →  10.96.45.12  (ClusterIP)

From any Pod in the same cluster you can reach it at the short name payments (within the same namespace), payments.billing, or the fully-qualified payments.billing.svc.cluster.local. Kubernetes also creates SRV records for named ports, enabling service-discovery patterns where the port itself is resolved by name.
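The SRV convention prefixes the Service name with _port-name._protocol. Assuming the payments Service declares a named port (the grpc name and 8443 port here are purely illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: payments
  namespace: billing
spec:
  selector:
    app: payments
  ports:
  - name: grpc        # named port → SRV record is created for it
    protocol: TCP
    port: 8443
```

With this Service, a query for _grpc._tcp.payments.billing.svc.cluster.local returns an SRV record carrying the port number (8443) and the target payments.billing.svc.cluster.local, so clients can discover the port without hard-coding it.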

Pod DNS: the record almost nobody knows

Here’s the part that surprises most engineers: with the common pods insecure setting in the Corefile, CoreDNS answers an A record for any Pod, using the Pod’s IP address with dashes instead of dots:

# Pod with IP 10.244.2.17 in namespace "billing"
10-244-2-17.billing.pod.cluster.local  →  10.244.2.17

This exists primarily so that Pods can be addressed by name over TLS — useful when you need a stable hostname in a certificate SAN. Stable per-Pod names are also what StatefulSets provide, though through a different mechanism: a governing headless Service. A StatefulSet named db backed by a headless Service also named db gives you predictable Pod DNS names like db-0.db.billing.svc.cluster.local, db-1.db.billing.svc.cluster.local, and db-2.db.billing.svc.cluster.local. Databases love this — it’s how Cassandra, Kafka, and etcd bootstrap cluster membership inside Kubernetes.
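The dashed form is purely mechanical: replace the dots in the Pod IP with dashes and append the namespace and pod.cluster.local. A minimal shell sketch (the namespace billing is illustrative):

```shell
# Derive a Pod's cluster DNS name from its IP and namespace
# by swapping dots for dashes.
pod_dns_name() {
  ip="$1"; ns="$2"
  echo "$(echo "$ip" | tr '.' '-').${ns}.pod.cluster.local"
}

pod_dns_name 10.244.2.17 billing   # prints 10-244-2-17.billing.pod.cluster.local
```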

Headless Services: skipping the virtual IP

A normal Service has a ClusterIP — a virtual IP managed by kube-proxy or eBPF. A headless Service sets clusterIP: None, which tells Kubernetes: “don’t allocate a VIP; instead return the actual Pod IPs directly in DNS.”

apiVersion: v1
kind: Service
metadata:
  name: db
  namespace: billing
spec:
  clusterIP: None          # ← headless
  selector:
    app: db

With a headless Service, a DNS query for db.billing.svc.cluster.local returns multiple A records — one per matching Pod. The client receives all of them and can implement its own load-balancing or connect directly to a specific replica by Pod hostname. StatefulSets require a headless Service for exactly this reason.
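The link between the two objects is the StatefulSet’s serviceName field, which must name the headless Service. A minimal sketch to pair with the Service above (the image and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
  namespace: billing
spec:
  serviceName: db          # must match the headless Service
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:16   # illustrative
```

With this pairing, each replica gets a stable name of the form db-0.db.billing.svc.cluster.local that survives rescheduling.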

DNS policies: four flavours

Each Pod can declare a dnsPolicy field in its spec. The four options are:

- ClusterFirst (the default): queries for names under the cluster domain go to CoreDNS; everything else is forwarded upstream.
- Default: the Pod inherits the node’s /etc/resolv.conf, bypassing cluster DNS entirely.
- ClusterFirstWithHostNet: like ClusterFirst, but for Pods running with hostNetwork: true, which would otherwise fall back to Default.
- None: ignore cluster defaults and use only what the Pod’s dnsConfig specifies.
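The policy that most often bites in practice is the hostNetwork case: a Pod on the host network silently loses cluster DNS unless you set ClusterFirstWithHostNet explicitly. A minimal illustration (the Pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-agent                       # illustrative
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet     # keep resolving *.svc.cluster.local
  containers:
  - name: agent
    image: busybox:1.36                  # illustrative
    command: ["sleep", "3600"]
```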

ndots:5 — why your app makes four DNS queries for one hostname

Open /etc/resolv.conf inside any Kubernetes Pod and you’ll see something like this:

nameserver 10.96.0.10
search billing.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

The ndots:5 option means: “if the hostname has fewer than 5 dots, treat it as a relative name and try each search domain before attempting it as absolute.” With the three search domains above, that turns a single lookup of, say, api.example.com (2 dots < 5) into four DNS queries:

  1. api.example.com.billing.svc.cluster.local → NXDOMAIN
  2. api.example.com.svc.cluster.local → NXDOMAIN
  3. api.example.com.cluster.local → NXDOMAIN
  4. api.example.com. → answer returned

That’s four round-trips to CoreDNS before your app gets an IP. On a high-throughput microservice making thousands of external calls per second, this is measurable latency. The fix is simple: use a trailing dot to signal an absolute name (api.example.com.), or set dnsConfig.options: [{name: ndots, value: "1"}] for Pods that only call external services.
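In a Pod spec, that dnsConfig override looks like this (the Pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: external-caller       # illustrative
spec:
  dnsConfig:
    options:
    - name: ndots
      value: "1"              # try the absolute name first for anything with a dot
  containers:
  - name: app
    image: busybox:1.36       # illustrative
    command: ["sleep", "3600"]
```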

How to debug DNS inside a cluster

The canonical tool is a throwaway Pod running the dnsutils image, which ships nslookup, dig, and host:

kubectl run dnsutils \
  --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 \
  --restart=Never -it --rm \
  -- /bin/sh

Inside the shell you can validate Service resolution, Pod resolution, and external forwarding in seconds:

# Resolve a Service
nslookup payments.billing.svc.cluster.local

# Resolve a Pod by its dashed IP
nslookup 10-244-2-17.billing.pod.cluster.local

# Check CoreDNS is reachable
nslookup kubernetes.default

# Diagnose ndots with dig
dig +search api.example.com

Pro tip: if CoreDNS is timing out, check its logs with kubectl logs -n kube-system -l k8s-app=kube-dns and look for SERVFAIL or i/o timeout entries pointing at the upstream forwarder.

Why this matters for your certification

Kubernetes DNS is a favourite topic across all three CNCF practitioner exams. If you’re studying for CKA, CKAD, or CKS, expect hands-on tasks like:

- debugging a Pod that cannot resolve a Service name
- inspecting or editing the CoreDNS ConfigMap in kube-system
- creating a headless Service to back a StatefulSet
- tuning a Pod’s dnsPolicy or dnsConfig

The dnsutils technique above is explicitly mentioned in the official Kubernetes documentation and regularly appears as the expected debugging approach in exam scenarios. Know it cold.

Key takeaways