Master every domain of the Certified Kubernetes Application Developer exam. This course covers core concepts, configuration, multi-container pods, observability, pod design, services and networking, state persistence, and Helm-based application deployment with real-world examples and exam-aligned explanations.
Use kubectl cluster-info and kubectl get nodes to verify cluster health during the exam.
Every Pod manifest has the same top-level shape: apiVersion: v1, kind: Pod, metadata (name, namespace, labels), and spec (containers array with name, image, ports, resources, env, volumeMounts). Master the imperative shortcut kubectl run nginx --image=nginx --dry-run=client -o yaml to generate manifests quickly.

A Pod's restartPolicy can be Always (default, used by Deployments), OnFailure (used by Jobs, restarts only on non-zero exit), or Never (container is not restarted). The kubelet implements exponential backoff for restart delays: 10s, 20s, 40s, up to 5 minutes.

Generate a starting manifest with kubectl run mypod --image=nginx --dry-run=client -o yaml > pod.yaml. Practice creating, inspecting, and deleting Pods until it is second nature. Use kubectl describe pod [name] to view events and diagnose startup failures, and kubectl get pod [name] -o yaml to see the full spec including default values injected by the API server.
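As a sketch, these Pod fields come together in a manifest like the following (the name, image, and env var are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod            # illustrative name
  namespace: default
  labels:
    app: web
spec:
  restartPolicy: Always  # default; OnFailure/Never are Job-style policies
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    env:
    - name: LOG_LEVEL    # illustrative env var
      value: info
```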
Kubernetes starts with four initial namespaces: default (where resources go if no namespace is specified), kube-system (control plane components), kube-public (publicly readable data), and kube-node-lease (node heartbeat leases). Create your own with kubectl create namespace [name], and run kubectl api-resources --namespaced=true to list namespace-scoped resources.

The core kubectl verbs are kubectl get (list resources), kubectl describe (detailed info), kubectl create (imperative creation), kubectl apply -f (declarative), kubectl delete (remove), kubectl edit (in-place edit), and kubectl explain (API field docs). Set a default namespace with kubectl config set-context --current --namespace=[name].

Every object manifest shares the same structure: apiVersion, kind, metadata, spec (desired state), and status (actual state, managed by Kubernetes). Use kubectl explain pod.spec.containers to explore the API schema during the exam.

The core API group uses apiVersion: v1 (Pods, Services, ConfigMaps). The apps group uses apps/v1 (Deployments, StatefulSets, DaemonSets). The batch group uses batch/v1 (Jobs, CronJobs). Networking uses networking.k8s.io/v1 (Ingress, NetworkPolicy). Use kubectl api-versions to list available API versions.

Exam tip: check your current context with kubectl config get-contexts and switch namespaces as needed. A common mistake is creating a resource in the wrong namespace. Use the -n [namespace] flag explicitly or set the default namespace for your context. The kubectl explain command is your built-in documentation — use it liberally instead of guessing field names.
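To make the apiVersion-to-kind mapping concrete, here is an illustrative pair of minimal manifests from two different API groups (names and commands are placeholders):

```yaml
# Core group: no group prefix, just "v1"
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config       # illustrative name
  namespace: default
data:
  key: value
---
# Batch group: Jobs live under batch/v1
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off          # illustrative name
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox
        command: ["echo", "done"]
```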
Create ConfigMaps with kubectl create configmap [name] --from-literal=key=value or --from-file=config.properties. ConfigMaps decouple configuration from container images, enabling environment-specific deployments without rebuilding images.

Consume a ConfigMap as environment variables with envFrom (all keys) or env.valueFrom.configMapKeyRef (specific keys), or mount it as volume files using volumes and volumeMounts — each key becomes a file whose content is the value. Volume-mounted ConfigMaps are automatically updated when the ConfigMap changes (with a brief propagation delay).

Create Secrets with kubectl create secret generic [name] --from-literal=password=mysecret. Types include Opaque (generic), kubernetes.io/tls (TLS cert and key), and kubernetes.io/dockerconfigjson (image pull credentials).

Consume Secrets via environment variables (secretKeyRef) or volume mounts. When mounted as volumes, Secrets are stored in a tmpfs (in-memory filesystem) so they are never written to disk on the node. Prefer volume mounts over environment variables because env vars can leak through logs and crash dumps.

Marking a ConfigMap or Secret with immutable: true prevents accidental changes, improves cluster performance by reducing API server watch load, and protects critical configuration. Once marked immutable, the resource must be deleted and recreated to change it.

Exam tip: use kubectl create configmap and kubectl create secret generic with --dry-run=client -o yaml to generate manifests quickly. Always verify injection with kubectl exec [pod] -- env or kubectl exec [pod] -- cat /path/to/mounted/file.
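A sketch of a Pod consuming both a ConfigMap and a Secret, assuming objects named app-config and db-secret already exist (both names and the mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config        # every key becomes an env var
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-secret       # illustrative Secret name
          key: password
    volumeMounts:
    - name: config-vol
      mountPath: /etc/config    # each ConfigMap key becomes a file here
  volumes:
  - name: config-vol
    configMap:
      name: app-config
```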
Declare the resources a container needs in spec.containers[].resources.requests with cpu (millicores, e.g., 250m = 0.25 CPU) and memory (bytes, e.g., 128Mi); the scheduler uses requests to place Pods. Cap consumption with spec.containers[].resources.limits. Compare a node's capacity against its allocated amounts with kubectl describe node. Remember the units: 1 CPU = 1000m, memory uses Mi (mebibytes) or Gi (gibibytes). Setting requests too high wastes cluster resources; setting them too low causes eviction under pressure.
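A minimal sketch of requests and limits on a container (the values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sized-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 250m        # 0.25 CPU, used for scheduling decisions
        memory: 128Mi
      limits:
        cpu: 500m        # hard ceilings on consumption
        memory: 256Mi
```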
The Pod-level securityContext sets runAsUser (UID for all containers), runAsGroup (GID for all containers), fsGroup (GID applied to all mounted volumes, ensuring files are accessible by the specified group), and supplementalGroups for additional group memberships.

The container-level securityContext adds runAsNonRoot: true (rejects the container if it tries to run as root), readOnlyRootFilesystem: true (prevents writes to the container filesystem), allowPrivilegeEscalation: false (blocks setuid/setgid binaries), and privileged: false.

Harden containers with capabilities.drop: ["ALL"] and selectively add only what is needed, such as NET_BIND_SERVICE (bind to ports below 1024) or SYS_TIME. This follows the principle of least privilege.

Pods run under the namespace's default ServiceAccount unless told otherwise. Create dedicated ServiceAccounts for workloads that need API access: kubectl create serviceaccount [name]. Assign one to a Pod via spec.serviceAccountName, and set automountServiceAccountToken: false on Pods that do not need API access to reduce the attack surface.

In the sidecar logging pattern, the application writes logs to an emptyDir volume at a known path. The sidecar container tails the log files and forwards them to a centralized logging system (Elasticsearch, Loki). This decouples log shipping from application code and allows different logging backends without changing the app.

To wire it up, declare the shared volume (emptyDir: {}) at the Pod level, then add matching volumeMounts in both the main and sidecar containers pointing to their respective mount paths. Remember that sidecar containers run for the entire lifetime of the Pod, unlike init containers which run to completion before the app starts.
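A sketch of the sidecar logging pattern described above, with an illustrative Pod-level securityContext (the image names, UIDs, and paths are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logger
spec:
  securityContext:
    runAsUser: 1000            # Pod-level: applies to both containers
    fsGroup: 2000              # group ownership for mounted volumes
  volumes:
  - name: logs
    emptyDir: {}               # shared scratch space for log files
  containers:
  - name: app
    image: myapp:1.0           # illustrative application image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app  # the app writes its log file here
  - name: log-shipper          # sidecar: tails and forwards the files
    image: busybox
    command: ["sh", "-c", "tail -F /logs/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs         # same volume, different mount path
```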
Init containers are declared in spec.initContainers; they run sequentially and must each complete successfully (exit code 0) before the next init container starts and before any app containers begin. If an init container fails, the kubelet restarts it according to the Pod's restart policy. They share volumes with app containers but have their own image and command.

Typical uses: wait for a dependency to become available (until nslookup mydb; do sleep 2; done), run database schema migrations before the app starts, fetch configuration or secrets from a vault, generate certificates, or set filesystem permissions on shared volumes.

Exam tip: init containers go in spec.initContainers (not spec.containers) and run to completion in order. If you need to debug init container failures, use kubectl logs [pod] -c [init-container-name]. The ambassador and adapter patterns are tested conceptually — understand when to use each and how they differ from a basic sidecar.
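A minimal sketch of the wait-for-dependency init container pattern (the mydb hostname and app image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
  - name: wait-for-db          # must exit 0 before the app container starts
    image: busybox
    command: ["sh", "-c", "until nslookup mydb; do sleep 2; done"]
  containers:
  - name: app
    image: myapp:1.0           # illustrative application image
```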
If a liveness probe fails failureThreshold consecutive times, the kubelet kills the container and restarts it according to the restart policy. Use it for applications that can enter an unrecoverable state without crashing (infinite loops, deadlocks).

A startup probe disables the liveness and readiness probes until it succeeds. Budget failureThreshold * periodSeconds to allow sufficient startup time, then hand off to the liveness probe for ongoing health checking.

Three probe mechanisms are available: httpGet (sends an HTTP GET request; success = 2xx/3xx response), tcpSocket (attempts a TCP connection to the specified port; success = connection established), and exec (runs a command inside the container; success = exit code 0). Choose based on what the application exposes.

Tune probes with initialDelaySeconds (wait before first probe), periodSeconds (interval between probes, default 10), timeoutSeconds (probe timeout, default 1), successThreshold (consecutive successes to mark healthy, default 1), and failureThreshold (consecutive failures before action, default 3).

Exam tip: probes are configured per container under spec.containers[].livenessProbe, readinessProbe, or startupProbe. A frequent mistake is setting initialDelaySeconds too low, causing the liveness probe to kill the container before it finishes starting. Use a startup probe for slow applications instead of inflating the liveness probe's initial delay.
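A sketch combining the three probe kinds on one container (the /healthz path, ports, and thresholds are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed
spec:
  containers:
  - name: app
    image: nginx
    startupProbe:              # gates the other probes during slow startup
      httpGet:
        path: /healthz         # illustrative health endpoint
        port: 80
      failureThreshold: 30     # 30 * 10s = up to 5 minutes to start
      periodSeconds: 10
    livenessProbe:             # restarts the container on repeated failure
      httpGet:
        path: /healthz
        port: 80
      periodSeconds: 10
      failureThreshold: 3
    readinessProbe:            # removes the Pod from Service endpoints
      tcpSocket:
        port: 80
      periodSeconds: 5
```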
View container logs with kubectl logs [pod-name]. For real-time streaming, add -f (follow). To see previous container logs after a restart, use --previous. Limit output with --tail=N or --since=1h.

For multi-container Pods, target a specific container with kubectl logs [pod-name] -c [container-name]. To view logs from all containers simultaneously, use --all-containers=true. This is essential for debugging sidecar and init container issues.

On each node, container stdout/stderr is written to log files (/var/log/containers/). The kubelet or container runtime handles log rotation based on file size and count. Cluster-level logging requires deploying a DaemonSet (Fluentd, Fluent Bit) or a sidecar-based approach.

Exam tip: use kubectl logs to debug failing containers. Practice the flags: -c for multi-container Pods, --previous to see logs from crashed containers, and -f for live streaming. If the question asks why a container keeps restarting, the first step is always kubectl logs [pod] --previous to see the output from the last crashed instance. Combine with kubectl describe pod to read events and exit codes.
kubectl top shows resource usage for nodes (kubectl top node) and Pods (kubectl top pod). It requires the Metrics Server to be installed in the cluster. Add --containers to see per-container resource usage within a Pod. Use this to identify resource-hungry Pods and validate that requests/limits are sized correctly.

kubectl describe pod [name] shows scheduling decisions, pull status, probe results, and restart reasons. Events at the bottom are chronologically ordered and are the first place to check when a Pod is not behaving as expected.

List cluster events with kubectl get events --sort-by=.metadata.creationTimestamp. Events are stored for one hour by default. They reveal the sequence of operations that led to the current state.

Attach an ephemeral debug container with kubectl debug -it [pod-name] --image=busybox --target=[container-name]. It shares the process namespace of the target container, allowing you to inspect processes and the filesystem.

A typical debugging workflow: check Pod status with kubectl get pod, examine events and conditions with kubectl describe pod, review container logs with kubectl logs, verify resource usage with kubectl top pod, and if needed, shell into the container with kubectl exec -it [pod] -- /bin/sh or use ephemeral containers for minimal images.

Exam tip: the four-step sequence is (1) kubectl get pod to check status and restarts, (2) kubectl describe pod to read events and conditions, (3) kubectl logs to view application output, (4) kubectl exec to inspect the running container. The kubectl debug command with ephemeral containers is a powerful tool when the base image has no shell. Practice these commands until they are muscle memory.
Labels are key-value pairs attached to objects: app: frontend, env: production, tier: web. Labels are used by Services, Deployments, ReplicaSets, and NetworkPolicies to select which Pods to target. Add labels to a running resource with kubectl label pod [name] key=value.

Equality-based selectors use the =, ==, or != operators. Example: kubectl get pods -l env=production returns all Pods with the label env: production. Multiple requirements are ANDed together: -l env=production,tier=frontend matches both conditions.

Set-based selectors support in (env in (production, staging)), notin (env notin (development)), and exists/!exists (kubectl get pods -l 'release' returns Pods that have the release label regardless of value). They are used in Deployments with matchExpressions.

Kubernetes recommends a common label set: app.kubernetes.io/name (app name), app.kubernetes.io/instance (instance identifier), app.kubernetes.io/version (version), app.kubernetes.io/component (role within architecture), app.kubernetes.io/managed-by (tool managing the resource). These enable consistent tooling integration.

Annotations hold non-identifying metadata such as controller configuration (nginx.ingress.kubernetes.io/rewrite-target: /) and tool-specific metadata. Annotations can hold larger values than labels.

Exam tip: use kubectl get pods --show-labels to verify labels and kubectl label pod [name] key=value --overwrite to modify them. Remember: labels are for Kubernetes to select objects, annotations are for humans and tools to store metadata.
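A sketch showing equality- and set-based selectors together in a Deployment, plus two of the recommended labels (all names and values are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app.kubernetes.io/name: web        # recommended common labels
    app.kubernetes.io/managed-by: kubectl
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web                         # equality-based requirement
    matchExpressions:                  # set-based requirements, ANDed
    - key: env
      operator: In
      values: ["production", "staging"]
  template:
    metadata:
      labels:
        app: web                       # must satisfy the selector above
        env: production
    spec:
      containers:
      - name: web
        image: nginx
```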
Create a Deployment imperatively with kubectl create deployment [name] --image=[image] --replicas=3.

The RollingUpdate strategy is governed by maxSurge (how many Pods can exist above the desired count during the update, default 25%) and maxUnavailable (how many Pods can be unavailable during the update, default 25%). This ensures zero-downtime deployments by keeping a minimum number of Pods always available.

Watch a rollout with kubectl rollout status deployment/[name]. See revision history with kubectl rollout history deployment/[name]. Roll back to the previous revision with kubectl rollout undo deployment/[name] or to a specific revision with --to-revision=N. Pause and resume rollouts with kubectl rollout pause/resume.

Scale with kubectl scale deployment [name] --replicas=5. Update the image with kubectl set image deployment/[name] [container]=[new-image]. Every change to the Pod template creates a new ReplicaSet and triggers a rollout. Use kubectl rollout history with --revision=N to inspect what changed in each revision.

The revisionHistoryLimit field (default 10) controls how many old ReplicaSets are retained. Use kubectl rollout history deployment/[name] --revision=2 to see the exact Pod template for a specific revision.
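A sketch of a Deployment spelling out the RollingUpdate defaults explicitly (the name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  revisionHistoryLimit: 10     # default; old ReplicaSets kept for rollback
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%            # Pods allowed above the desired count
      maxUnavailable: 25%      # Pods allowed below the desired count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25      # changing this triggers a new rollout
```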
A Job's key fields are completions (total successful runs needed, default 1), parallelism (how many Pods run concurrently, default 1), and backoffLimit (maximum retries before marking the Job as failed, default 6). The restart policy must be Never or OnFailure.

Set activeDeadlineSeconds to bound a Job's total runtime; when the deadline passes, the Job is marked failed with reason DeadlineExceeded. This prevents runaway Jobs from consuming cluster resources indefinitely.

A CronJob's schedule field uses five fields: minute (0-59), hour (0-23), day of month (1-31), month (1-12), day of week (0-6). Example: "0 2 * * *" runs daily at 2 AM. CronJobs manage Job creation and cleanup based on successfulJobsHistoryLimit (default 3) and failedJobsHistoryLimit (default 1).

concurrencyPolicy controls overlapping runs: Allow (default, multiple Jobs can run simultaneously), Forbid (skip the new Job if the previous one is still running), Replace (cancel the running Job and start a new one). Choose based on whether overlapping executions are safe for your workload.

For work-queue processing, set parallelism to N with completions unset for Pods that pull from a shared queue. Indexed Jobs: each Pod gets a unique index (0 to completions-1) via the JOB_COMPLETION_INDEX env var. Use ttlSecondsAfterFinished to automatically clean up completed Jobs and their Pods.

Exam tip: create Jobs with kubectl create job [name] --image=[image] -- [command] and CronJobs with kubectl create cronjob [name] --image=[image] --schedule="*/5 * * * *" -- [command]. A common pitfall is forgetting to set restartPolicy: Never or OnFailure — Jobs cannot use the default Always policy. Verify CronJob schedules by checking the next scheduled run time in kubectl get cronjob.
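A sketch of a CronJob tying these fields together (the name, schedule, and command are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report             # illustrative name
spec:
  schedule: "0 2 * * *"            # daily at 2 AM
  concurrencyPolicy: Forbid        # skip a run if the previous is still going
  successfulJobsHistoryLimit: 3    # defaults shown explicitly
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      backoffLimit: 6
      template:
        spec:
          restartPolicy: OnFailure # Jobs cannot use the default Always
          containers:
          - name: report
            image: busybox
            command: ["sh", "-c", "echo generating report"]
```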
A ClusterIP Service (the default) gives matching Pods a stable virtual IP and DNS name ([service-name].[namespace].svc.cluster.local) and load balances across them. Create one with kubectl expose deployment [name] --port=80 --target-port=8080.

A NodePort Service exposes the application on every node at [node-ip]:[node-port]. Useful for development and on-premises environments where a cloud load balancer is not available.

An ExternalName Service maps a Service name to an external DNS name (externalName: my-database.example.com) without proxying. Headless Services (clusterIP: None) do not allocate a cluster IP; instead, DNS returns the Pod IPs directly. Used by StatefulSets for stable DNS per Pod.

Inspect which Pods back a Service with kubectl get endpoints [service-name]. If a Service has no selector, you can manually create an Endpoints object to point to external IPs or services outside the cluster.

Exam tip: use kubectl expose with the correct port and target-port. Remember that port is what the Service listens on and targetPort is what the container listens on — they can be different. If a Service is not reaching its Pods, verify that the selector labels match the Pod labels using kubectl describe svc [name] and check the Endpoints list. An empty Endpoints list means no Pods match the selector.
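A minimal Service sketch highlighting the port vs. targetPort distinction (the names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: ClusterIP        # default; NodePort would add [node-ip]:[node-port]
  selector:
    app: web             # must match the Pod labels exactly
  ports:
  - port: 80             # the port the Service listens on
    targetPort: 8080     # the port the container listens on
```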
An Ingress routes by host and path, for example app.example.com/api to the api-service and app.example.com/web to the web-service. The defaultBackend handles requests that do not match any rule. Each rule specifies a host, path, pathType, and the backend service name and port.

pathType: Prefix matches URL paths based on a prefix split by / (e.g., /api matches /api, /api/, and /api/users). Exact matches the exact URL path only. ImplementationSpecific defers to the ingress controller's own matching logic. Always specify pathType as it is required in networking.k8s.io/v1.

Terminate TLS with a tls section containing the hostname and the Secret name. The Secret must be of type kubernetes.io/tls with tls.crt and tls.key data fields. Traffic between the Ingress controller and backend Services is typically unencrypted (within the cluster).

Controller behavior is tuned via annotations, such as nginx.ingress.kubernetes.io/rewrite-target: / (rewrite the URL path), nginx.ingress.kubernetes.io/ssl-redirect: "true" (force HTTPS), and nginx.ingress.kubernetes.io/proxy-body-size: "10m" (max request body). Annotations vary by controller.

Exam tip: the manifest shape is apiVersion: networking.k8s.io/v1, kind: Ingress, with rules containing host, http.paths with path, pathType, and backend.service.name/port.number. Use kubectl explain ingress.spec.rules during the exam to check field names. A frequent mistake is putting the port name instead of the port number in the backend.
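A sketch of an Ingress with TLS, pathType, and a numeric backend port (the hostnames, Secret, and service names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts: ["app.example.com"]
    secretName: app-tls            # a kubernetes.io/tls Secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix           # required in networking.k8s.io/v1
        backend:
          service:
            name: api-service
            port:
              number: 80           # port number, not port name
```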
A NetworkPolicy's empty podSelector (podSelector: {}) selects all Pods in the namespace. Example: podSelector: { matchLabels: { app: db } } applies the policy only to Pods with the app: db label. Once a Pod is selected by any NetworkPolicy, all traffic not explicitly allowed is denied.

Ingress rules use from with combinations of podSelector (Pods in the same namespace), namespaceSelector (Pods in specific namespaces), and ipBlock (CIDR ranges for external traffic). You can also restrict to specific ports with the ports field.

Egress rules use to with the same selectors as ingress plus ports. Common patterns include allowing DNS egress (TCP/UDP port 53 to the kube-system namespace) and restricting database Pods to only communicate with specific application Pods.

Default deny ingress: podSelector: {} with policyTypes: ["Ingress"] and no ingress rules. Default deny egress: same with policyTypes: ["Egress"]. Then create specific policies to allow only necessary traffic, following the principle of least privilege.

Remember: namespaceSelector and podSelector in the same from/to entry act as an AND condition; in separate entries they act as OR.
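A sketch of the default-deny-plus-allow pattern (the app: backend label and port 5432 are illustrative):

```yaml
# Default-deny ingress for every Pod in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}                # empty selector = all Pods
  policyTypes: ["Ingress"]
---
# Then allow only backend Pods to reach the database on its port
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-to-db
spec:
  podSelector:
    matchLabels:
      app: db                    # policy applies to the database Pods
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend           # illustrative client label
    ports:
    - protocol: TCP
      port: 5432
```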
An emptyDir volume provides Pod-lifetime scratch space shared between containers; set medium: Memory for a tmpfs-backed volume stored in RAM.

hostPath volumes mount a path from the node, with types Directory, File, DirectoryOrCreate, and FileOrCreate. Useful for accessing node-level resources (Docker socket, system logs) but creates tight coupling to the node. Avoid in production; not suitable for multi-node scheduling.

A PersistentVolume's reclaimPolicy controls what happens when the PVC is deleted: Retain (keep the PV), Delete (remove the PV and underlying storage), Recycle (deprecated).

Access modes: ReadWriteOnce (RWO, mounted read-write by a single node), ReadOnlyMany (ROX, mounted read-only by many nodes), ReadWriteMany (RWX, mounted read-write by many nodes). Not all storage backends support all modes — AWS EBS only supports RWO, while NFS supports all three. The access mode must match between the PV and PVC for binding.

A Pod references a PVC in its volumes section and mounts it via volumeMounts. If dynamic provisioning is available, you only need a PVC — the PV is created automatically. Use kubectl get pv,pvc to verify binding status. A PVC stuck in Pending means no PV satisfies its requirements.
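A sketch of a PVC and the Pod that mounts it (the names and the 1Gi size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]   # must be supported by the storage backend
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc          # binds to a matching or provisioned PV
```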
StatefulSet Pods get stable, ordinal names (mysql-0, mysql-1, mysql-2). Each Pod maintains its identity across rescheduling. This is critical for databases, distributed caches, and clustered applications that require stable network identities.

volumeClaimTemplates give each Pod its own PVC, named [template-name]-[statefulset-name]-[ordinal]. When a Pod is rescheduled to a different node, it reattaches to the same PVC, preserving its data. PVCs are not automatically deleted when the StatefulSet is scaled down.

A StatefulSet requires a headless Service (clusterIP: None) to provide DNS entries for each Pod. Each Pod gets a DNS record: [pod-name].[service-name].[namespace].svc.cluster.local. This allows applications to discover and connect to specific Pods by name, which is essential for leader election, replication, and cluster formation in distributed systems.

Update strategies: RollingUpdate (default) updates Pods in reverse ordinal order. The partition parameter enables canary-style updates by only updating Pods with an ordinal greater than or equal to the partition value. OnDelete requires manual deletion of Pods to trigger the update, giving full control over the rollout process.

Exam tip: connect to individual Pods via their stable DNS names (pod-0.service-name.namespace.svc.cluster.local) rather than the Service's cluster IP, which does not exist for headless Services.
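A sketch of a StatefulSet with its headless Service and volumeClaimTemplates (the names, image, and storage size are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None          # headless: DNS per Pod, no cluster IP
  selector:
    app: mysql
  ports:
  - port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql       # ties Pod DNS to the headless Service
  replicas: 3              # creates mysql-0, mysql-1, mysql-2
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:    # yields PVCs data-mysql-0, data-mysql-1, ...
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```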
Override chart defaults at install time with --set key=value or -f custom-values.yaml. Templates reference values using Go template syntax: {{ .Values.replicaCount }}. This enables a single chart to deploy across development, staging, and production with different configurations.

Install with helm install [release-name] [chart] --namespace [ns]. Upgrade with helm upgrade [release-name] [chart] to apply new values or chart versions. Add --install to the upgrade command to create the release if it does not exist. Use --dry-run and --debug to preview rendered templates before applying.

Roll back with helm rollback [release-name] [revision]. View revision history with helm history [release-name]. Helm stores release metadata (including all previous revisions) as Secrets in the release's namespace, enabling reliable rollback without requiring the original chart or values.

A chart contains Chart.yaml (metadata: name, version, appVersion), values.yaml (defaults), templates/ (Kubernetes manifest templates), charts/ (dependencies), and optionally templates/NOTES.txt (post-install instructions). Use helm template to render templates locally without installing.

Exam tip: know the core commands: helm repo add, helm search repo, helm install, helm upgrade, helm rollback, helm list, and helm uninstall. Use helm show values [chart] to inspect available configuration options before installing.
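As a sketch, assuming a values.yaml that defines replicaCount plus image.repository and image.tag, a chart template might consume them like this (the file name and fields are illustrative):

```yaml
# templates/deployment.yaml — Go template expressions resolve against values.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}   # overridden by --set replicaCount=5
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
      - name: web
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Render it locally with helm template . or preview a real install with helm install [release] . --dry-run --debug.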
kubectl create is imperative — it creates a resource and fails if it already exists. kubectl apply is declarative — it creates the resource if it does not exist or updates it if it does, using a three-way merge (last-applied annotation, live state, new config). Use apply for production workflows and GitOps; use create for quick one-off tasks on the exam.

Kustomize drives customization from a kustomization.yaml with resources (base manifests), namePrefix, commonLabels, patches, and configMapGenerator. Apply with kubectl apply -k ./ or run kubectl kustomize ./ to preview.

For availability, define a PodDisruptionBudget to protect availability during voluntary disruptions (node drains), set terminationGracePeriodSeconds appropriately for clean shutdown, and use preStop lifecycle hooks to drain connections before the container is terminated.

Exam tip: Kustomize is invoked with kubectl apply -k — it is useful for applying common labels or name prefixes to multiple resources. Practice the difference between kubectl apply and kubectl create, as the exam may require either approach depending on the task.
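A sketch of a kustomization.yaml using the fields named above (the prefix, labels, file names, and literals are illustrative):

```yaml
# kustomization.yaml — illustrative overlay
resources:
- deployment.yaml          # base manifests to customize
- service.yaml
namePrefix: prod-          # renames to prod-web, prod-web-svc, ...
commonLabels:
  env: production          # applied to every resource and its selectors
configMapGenerator:
- name: app-config
  literals:
  - LOG_LEVEL=info         # generates a ConfigMap with a content hash suffix
```

Apply it with kubectl apply -k ./ or preview the rendered output with kubectl kustomize ./.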
Challenge yourself with 60 CKAD practice questions — scenario-based, hands-on style, covering all 5 exam domains. Free.