
Kubernetes CKAD Certification Course

Master every domain of the Certified Kubernetes Application Developer exam. This course covers core concepts, configuration, multi-container pods, observability, pod design, services and networking, state persistence, and Helm-based application deployment with real-world examples and exam-aligned explanations.

Advanced · 8 modules · ~30 hours · 60 practice questions

Course Modules

01
Core Concepts
3 lessons
Kubernetes Architecture

Key Concepts

  • Control Plane Components: The API Server is the central gateway for all cluster operations — every kubectl command, controller action, and scheduler decision passes through it as a RESTful request. etcd is the distributed key-value store that holds all cluster state and configuration data, acting as the single source of truth
  • Scheduler & Controller Manager: The kube-scheduler watches for newly created Pods with no assigned node and selects an optimal node based on resource requests, affinity rules, taints, and tolerations. The kube-controller-manager runs control loops (Deployment controller, ReplicaSet controller, Node controller, Job controller) that continuously reconcile desired state with actual state
  • Worker Node Components: The kubelet is the primary agent on each node — it receives PodSpecs from the API server, ensures containers are running and healthy, and reports node status. The kube-proxy maintains network rules (iptables or IPVS) on each node to enable Service-based communication across the cluster
  • Container Runtime: Kubernetes supports any OCI-compliant container runtime through the Container Runtime Interface (CRI). containerd and CRI-O are the most common runtimes in production; Docker was removed as a direct runtime in Kubernetes v1.24 but images built with Docker remain fully compatible
  • Cluster Communication Flow: A kubectl command hits the API server, which authenticates and authorizes the request, persists the desired state to etcd, and notifies the relevant controller. The scheduler assigns a node, and the kubelet on that node pulls the container image and starts the Pod
The CKAD exam is performance-based, so you will not be asked to install a cluster, but understanding the architecture helps you debug issues faster. If a Pod is stuck in Pending, the scheduler may not find a suitable node (check resource requests, taints, and node conditions). If a Pod is in CrashLoopBackOff, the kubelet is restarting a failing container. Use kubectl cluster-info and kubectl get nodes to verify cluster health during the exam.
Pods

Key Concepts

  • Pod Lifecycle: A Pod progresses through phases: Pending (accepted by the cluster but containers not yet running — includes time spent scheduling and pulling images), Running (bound to a node with at least one container running), Succeeded (all containers terminated with exit code 0), Failed (all containers terminated and at least one exited with a non-zero code or was killed), Unknown (node communication lost). Understanding these phases is critical for debugging
  • Single-Container Pods: The most common pattern — one container per Pod. The Pod provides the container with a shared network namespace (localhost communication), storage volumes, and a unique cluster IP address. Containers within a Pod share the same IP and port space
  • Multi-Container Pods: Sidecar containers extend the main application (logging agents, proxies, TLS termination). Init containers run sequentially before app containers start, performing setup tasks like database migrations or configuration fetching. Ambassador containers proxy network connections to external services
  • Pod YAML Structure: A Pod manifest defines apiVersion: v1, kind: Pod, metadata (name, namespace, labels), and spec (containers array with name, image, ports, resources, env, volumeMounts). Master the imperative shortcut: kubectl run nginx --image=nginx --dry-run=client -o yaml to generate manifests quickly
  • Restart Policies: restartPolicy: Always (default, used by Deployments), OnFailure (used by Jobs, restarts only on non-zero exit), Never (container is not restarted). The kubelet implements exponential backoff for restart delays: 10s, 20s, 40s, up to 5 minutes
Speed is essential on the CKAD exam. Use imperative commands to generate YAML quickly, then edit as needed: kubectl run mypod --image=nginx --dry-run=client -o yaml > pod.yaml. Practice creating, inspecting, and deleting Pods until it is second nature. Use kubectl describe pod [name] to view events and diagnose startup failures, and kubectl get pod [name] -o yaml to see the full spec including default values injected by the API server.
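The imperative shortcut above generates a manifest similar to this minimal sketch (the Pod name, labels, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod            # hypothetical name
  labels:
    app: mypod
spec:
  restartPolicy: Always  # default; Jobs use OnFailure or Never
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```

Editing a generated file like this is usually faster on the exam than writing the manifest from memory.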
Namespaces & API Primitives

Key Concepts

  • Namespaces: Provide logical isolation within a cluster. Default namespaces include default (where resources go if no namespace is specified), kube-system (control plane components), kube-public (publicly readable data), and kube-node-lease (node heartbeat leases). Create with kubectl create namespace [name]
  • Resource Scoping: Most resources (Pods, Deployments, Services, ConfigMaps) are namespace-scoped. Some resources (Nodes, PersistentVolumes, ClusterRoles, Namespaces) are cluster-scoped. Use kubectl api-resources --namespaced=true to list namespace-scoped resources
  • kubectl Basics: Essential commands for the exam: kubectl get (list resources), kubectl describe (detailed info), kubectl create (imperative creation), kubectl apply -f (declarative), kubectl delete (remove), kubectl edit (in-place edit), kubectl explain (API field docs). Set a default namespace with kubectl config set-context --current --namespace=[name]
  • YAML Manifests: Every Kubernetes resource follows the same top-level structure: apiVersion, kind, metadata, spec (desired state), and status (actual state, managed by Kubernetes). Use kubectl explain pod.spec.containers to explore the API schema during the exam
  • API Groups & Versions: Core resources use apiVersion: v1 (Pods, Services, ConfigMaps). Apps group uses apps/v1 (Deployments, StatefulSets, DaemonSets). Batch group uses batch/v1 (Jobs, CronJobs). Networking uses networking.k8s.io/v1 (Ingress, NetworkPolicy). Use kubectl api-versions to list available API versions
During the CKAD exam, you will work across multiple namespaces. Always check your current context with kubectl config get-contexts and switch namespaces as needed. A common mistake is creating a resource in the wrong namespace. Use the -n [namespace] flag explicitly or set the default namespace for your context. The kubectl explain command is your built-in documentation — use it liberally instead of guessing field names.
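As a sketch of how namespacing appears in manifests (the namespace and Pod names are illustrative), a Namespace is a cluster-scoped resource, while a Pod pins itself to a namespace via metadata.namespace:

```yaml
# Declarative equivalent of `kubectl create namespace dev`
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
# A namespace-scoped resource: metadata.namespace places it in "dev"
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx
```

If metadata.namespace is omitted, the resource lands in the namespace of your current context (default unless changed).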
02
Configuration
3 lessons
ConfigMaps & Secrets

Key Concepts

  • ConfigMaps: Store non-sensitive configuration data as key-value pairs. Create imperatively with kubectl create configmap [name] --from-literal=key=value or --from-file=config.properties. ConfigMaps decouple configuration from container images, enabling environment-specific deployments without rebuilding images
  • Consuming ConfigMaps: Inject as environment variables using envFrom (all keys) or env.valueFrom.configMapKeyRef (specific keys). Mount as volume files using volumes and volumeMounts — each key becomes a file whose content is the value. Volume-mounted ConfigMaps are automatically updated when the ConfigMap changes (with a brief propagation delay)
  • Secrets: Store sensitive data (passwords, tokens, TLS certificates) as base64-encoded values. Created with kubectl create secret generic [name] --from-literal=password=mysecret. Types include Opaque (generic), kubernetes.io/tls (TLS cert and key), and kubernetes.io/dockerconfigjson (image pull credentials)
  • Consuming Secrets: Injected the same way as ConfigMaps — via environment variables (secretKeyRef) or volume mounts. When mounted as volumes, Secrets are stored in a tmpfs (in-memory filesystem) so they are never written to disk on the node. Prefer volume mounts over environment variables because env vars can leak through logs and crash dumps
  • Immutable ConfigMaps & Secrets: Setting immutable: true prevents accidental changes, improves cluster performance by reducing API server watch load, and protects critical configuration. Once marked immutable, the resource must be deleted and recreated to change it
ConfigMaps and Secrets are frequently tested on the CKAD. Practice both imperative creation and YAML-based definitions. Remember that Secrets are only base64-encoded, not encrypted — enable encryption at rest in etcd for production security. On the exam, use kubectl create configmap and kubectl create secret generic with --dry-run=client -o yaml to generate manifests quickly. Always verify injection with kubectl exec [pod] -- env or kubectl exec [pod] -- cat /path/to/mounted/file.
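The injection mechanisms above can be combined in one Pod. A hedged sketch, assuming a ConfigMap named app-config and a Secret named db-secret with a password key already exist (both names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config        # every key becomes an environment variable
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-secret       # a single key injected as one env var
          key: password
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: db-secret     # keys appear as files, backed by tmpfs
```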
Resource Requirements & Limits

Key Concepts

  • Resource Requests: The minimum amount of CPU and memory guaranteed to a container. The scheduler uses requests to find a node with sufficient allocatable resources. Specified under spec.containers[].resources.requests with cpu (millicores, e.g., 250m = 0.25 CPU) and memory (bytes, e.g., 128Mi)
  • Resource Limits: The maximum amount of CPU and memory a container can consume. If a container exceeds its memory limit, the kubelet OOM-kills the process. If it exceeds its CPU limit, it is throttled but not terminated. Set under spec.containers[].resources.limits
  • Quality of Service (QoS) Classes: Guaranteed (every container has requests equal to limits for both CPU and memory), Burstable (the Pod does not qualify as Guaranteed, but at least one container has a request or limit set), BestEffort (no requests or limits set on any container). When a node is under memory pressure, BestEffort Pods are evicted first, then Burstable, then Guaranteed
  • LimitRanges: Namespace-level policies that define default, minimum, and maximum resource requests/limits for containers. If a Pod does not specify resources, the LimitRange injects the defaults. This prevents a single Pod from monopolizing cluster resources
  • ResourceQuotas: Namespace-level aggregate limits on total resource consumption (total CPU requests, total memory limits, number of Pods, number of Services). When a quota covers compute resources, every Pod must specify requests and limits for those resources or its creation is rejected
Resource management is critical for the CKAD. Always set both requests and limits for production workloads. A common exam pattern is to debug a Pod that cannot be scheduled — check if the node has enough allocatable resources by comparing kubectl describe node capacity vs. allocated amounts. Remember the units: 1 CPU = 1000m, memory uses Mi (mebibytes) or Gi (gibibytes). Setting requests too high wastes cluster resources; setting them too low causes eviction under pressure.
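The units above plug into the resources block like this minimal sketch (values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resourced-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 250m        # 0.25 CPU — the scheduler guarantees this
        memory: 128Mi
      limits:
        cpu: 500m        # throttled above this, not killed
        memory: 256Mi    # OOM-killed above this
```

Because requests and limits differ, this Pod lands in the Burstable QoS class; setting them equal for every container would make it Guaranteed.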
Security Contexts & ServiceAccounts

Key Concepts

  • Pod Security Context: Defines security settings for the entire Pod, including runAsUser (UID for all containers), runAsGroup (GID for all containers), fsGroup (GID applied to all mounted volumes, ensuring files are accessible by the specified group), and supplementalGroups for additional group memberships
  • Container Security Context: Overrides Pod-level settings for a specific container. Key fields include runAsNonRoot: true (rejects the container if it tries to run as root), readOnlyRootFilesystem: true (prevents writes to the container filesystem), allowPrivilegeEscalation: false (blocks setuid/setgid binaries), and privileged: false
  • Linux Capabilities: Fine-grained privileges that replace the all-or-nothing root model. Drop all capabilities with capabilities.drop: ["ALL"] and selectively add only what is needed, such as NET_BIND_SERVICE (bind to ports below 1024) or SYS_TIME. This follows the principle of least privilege
  • ServiceAccounts: Provide an identity for Pods to authenticate with the Kubernetes API. Each namespace has a default ServiceAccount. Create dedicated ServiceAccounts for workloads that need API access: kubectl create serviceaccount [name]. Assign to a Pod via spec.serviceAccountName
  • Token Management: Since Kubernetes 1.24, ServiceAccount token Secrets are no longer auto-created. Instead, projected volumes provide short-lived, audience-scoped tokens via the TokenRequest API. Disable auto-mounting with automountServiceAccountToken: false on Pods that do not need API access to reduce the attack surface
Security contexts appear frequently on the CKAD exam. Common tasks include configuring a Pod to run as a specific user, preventing containers from running as root, and setting the filesystem to read-only. Always check whether the security settings should be at the Pod level (applies to all containers and volumes) or the container level (overrides for a specific container). For ServiceAccounts, the exam may ask you to create a ServiceAccount, assign it to a Pod, and verify it can access specific API resources.
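The Pod-level vs. container-level distinction looks like this in a manifest — a sketch assuming a ServiceAccount named app-sa already exists (the name and UIDs/GIDs are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  serviceAccountName: app-sa           # hypothetical ServiceAccount
  automountServiceAccountToken: false  # no API access needed
  securityContext:                     # Pod level: applies to all containers
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000                      # group ownership of mounted volumes
  containers:
  - name: app
    image: nginx
    securityContext:                   # container level: overrides Pod level
      runAsNonRoot: true
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
```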
03
Multi-Container Pods
2 lessons
Sidecar Pattern

Key Concepts

  • Sidecar Containers: Run alongside the main application container within the same Pod, sharing network namespace and storage volumes. Common use cases include log shipping agents (Fluentd/Filebeat reading logs from a shared volume), monitoring exporters (Prometheus sidecar exposing metrics), and service mesh proxies (Envoy handling mTLS and traffic routing)
  • Log Aggregation Pattern: The main container writes logs to a shared emptyDir volume at a known path. The sidecar container tails the log files and forwards them to a centralized logging system (Elasticsearch, Loki). This decouples log shipping from application code and allows different logging backends without changing the app
  • Proxy Sidecar: A sidecar container (such as Envoy or HAProxy) intercepts all inbound and outbound traffic for the main container. This enables TLS termination, circuit breaking, retries, and load balancing without modifying the application. Service meshes like Istio inject Envoy as a sidecar automatically
  • Sync Containers: A sidecar that periodically syncs data from an external source (Git repository, S3 bucket, shared filesystem) into a shared volume that the main container reads. The git-sync sidecar is a popular example, keeping a local copy of a repository up to date for web servers or configuration delivery
  • Shared Resources: All containers in a Pod share the same network namespace (communicate via localhost), the same IPC namespace, and any defined volumes. Each container has its own filesystem unless volumes are explicitly shared. Lifecycle coupling means the Pod terminates only when all containers have stopped
On the CKAD exam, you may be asked to add a sidecar container to an existing Pod definition. The key is configuring the shared volume correctly: define the volume (typically emptyDir: {}) at the Pod level, then add matching volumeMounts in both the main and sidecar containers pointing to their respective mount paths. Remember that sidecar containers run for the entire lifetime of the Pod, unlike init containers which run to completion before the app starts.
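A minimal sketch of that shared-volume wiring for a log-shipping sidecar (container names, image, and log path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  volumes:
  - name: logs                 # defined once at the Pod level
    emptyDir: {}
  containers:
  - name: web                  # main container writes logs into the volume
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-shipper          # sidecar tails the same files from its own mount path
    image: busybox
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
```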
Init Containers, Ambassador & Adapter Patterns

Key Concepts

  • Init Containers: Defined under spec.initContainers, they run sequentially and must each complete successfully (exit code 0) before the next init container starts and before any app containers begin. If an init container fails, the kubelet restarts it according to the Pod's restart policy. They share volumes with app containers but have their own image and command
  • Init Container Use Cases: Wait for a dependent service to be available (until nslookup mydb; do sleep 2; done), run database schema migrations before the app starts, fetch configuration or secrets from a vault, generate certificates, or set filesystem permissions on shared volumes
  • Ambassador Pattern: A multi-container pattern where a sidecar acts as a proxy that simplifies how the main container connects to external services. The app connects to localhost, and the ambassador handles service discovery, connection pooling, and protocol translation. Example: a Redis ambassador that routes requests to the correct shard based on the key
  • Adapter Pattern: A sidecar that transforms the output of the main container into a standardized format. Example: the main container generates custom-format metrics, and the adapter container converts them into Prometheus-compatible format for scraping. This allows heterogeneous applications to expose a uniform monitoring interface
  • Choosing the Right Pattern: Use init containers for one-time setup tasks that must complete before the app runs. Use sidecars (including ambassador and adapter) for functionality that must run alongside the app for its entire lifecycle. In practice, many Pods combine init containers for bootstrapping with sidecar containers for ongoing support
Init containers are a common CKAD exam topic. The most typical scenario is adding an init container that waits for a Service or database to be ready. Remember that init containers are defined in spec.initContainers (not spec.containers) and run to completion in order. If you need to debug init container failures, use kubectl logs [pod] -c [init-container-name]. The ambassador and adapter patterns are tested conceptually — understand when to use each and how they differ from a basic sidecar.
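The typical wait-for-a-dependency scenario can be sketched like this, assuming a Service named mydb exists in the same namespace (the name is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:              # run in order, each to completion, before app containers
  - name: wait-for-db
    image: busybox
    command: ["sh", "-c", "until nslookup mydb; do echo waiting; sleep 2; done"]
  containers:
  - name: app
    image: nginx
```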
04
Observability
3 lessons
Readiness, Liveness & Startup Probes

Key Concepts

  • Liveness Probes: Detect when a container is stuck or deadlocked and needs to be restarted. If the liveness probe fails failureThreshold consecutive times, the kubelet kills the container and restarts it according to the restart policy. Use for applications that can enter an unrecoverable state without crashing (infinite loops, deadlocks)
  • Readiness Probes: Determine whether a container is ready to accept traffic. If the readiness probe fails, the Pod's IP is removed from all Service endpoints until the probe succeeds again. The container is not restarted. Use for applications that need warm-up time (loading caches, establishing database connections) or that temporarily cannot handle requests
  • Startup Probes: Protect slow-starting containers from being killed by liveness probes before they finish initialization. The startup probe disables liveness and readiness checks until it succeeds. Configure with a generous failureThreshold * periodSeconds to allow sufficient startup time, then hand off to the liveness probe for ongoing health checking
  • Probe Mechanisms: httpGet (sends an HTTP GET request; success = 2xx/3xx response), tcpSocket (attempts a TCP connection to the specified port; success = connection established), exec (runs a command inside the container; success = exit code 0). Choose based on what the application exposes
  • Probe Configuration: initialDelaySeconds (wait before first probe), periodSeconds (interval between probes, default 10), timeoutSeconds (probe timeout, default 1), successThreshold (consecutive successes to mark healthy, default 1; must be 1 for liveness and startup probes), failureThreshold (consecutive failures before action, default 3)
Probes are a high-priority CKAD topic. A common exam task is to add a liveness or readiness probe to a Pod spec. Know the YAML structure: probes are defined under spec.containers[].livenessProbe, readinessProbe, or startupProbe. A frequent mistake is setting initialDelaySeconds too low, causing the liveness probe to kill the container before it finishes starting. Use a startup probe for slow applications instead of inflating the liveness probe's initial delay.
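All three probe types can coexist on one container — a sketch with illustrative endpoint paths and timings:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
  - name: app
    image: nginx
    startupProbe:              # liveness/readiness are held off until this succeeds
      httpGet:
        path: /healthz         # hypothetical health endpoint
        port: 80
      failureThreshold: 30     # up to 30 * 10s = 300s of startup time
      periodSeconds: 10
    livenessProbe:             # restart the container after 3 consecutive failures
      httpGet:
        path: /healthz
        port: 80
      periodSeconds: 10
    readinessProbe:            # remove the Pod from Service endpoints while failing
      httpGet:
        path: /ready
        port: 80
      periodSeconds: 5
```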
Container Logging

Key Concepts

  • Standard Output Logging: Kubernetes captures everything a container writes to stdout and stderr. View logs with kubectl logs [pod-name]. For real-time streaming, add -f (follow). To see previous container logs after a restart, use --previous. Limit output with --tail=N or --since=1h
  • Multi-Container Log Selection: In a Pod with multiple containers, you must specify the container name: kubectl logs [pod-name] -c [container-name]. To view logs from all containers simultaneously, use --all-containers=true. This is essential for debugging sidecar and init container issues
  • Sidecar Logging Architecture: For applications that write logs to files instead of stdout, deploy a logging sidecar that tails the log files from a shared volume and streams them to stdout or forwards them directly to a log aggregation backend. This approach avoids modifying the application's logging behavior
  • Node-Level Log Rotation: Container logs are stored as JSON files on the node (typically under /var/log/containers/). The kubelet or container runtime handles log rotation based on file size and count. Cluster-level logging requires deploying a DaemonSet (Fluentd, Fluent Bit) or a sidecar-based approach
  • Application Log Levels: Configure applications to use structured logging (JSON format) with appropriate log levels (DEBUG, INFO, WARN, ERROR). Environment variables or ConfigMaps can control the log level without redeploying. Structured logs are easier to parse and filter in centralized logging systems like the ELK stack or Loki
On the CKAD exam, you will frequently use kubectl logs to debug failing containers. Practice the flags: -c for multi-container Pods, --previous to see logs from crashed containers, and -f for live streaming. If the question asks why a container keeps restarting, the first step is always kubectl logs [pod] --previous to see the output from the last crashed instance. Combine with kubectl describe pod to read events and exit codes.
Monitoring & Debugging

Key Concepts

  • kubectl top: Displays real-time CPU and memory usage for nodes (kubectl top node) and Pods (kubectl top pod). Requires the Metrics Server to be installed in the cluster. Add --containers to see per-container resource usage within a Pod. Use this to identify resource-hungry Pods and validate that requests/limits are sized correctly
  • kubectl describe: Provides a comprehensive view of any resource, including its spec, status, conditions, and recent events. kubectl describe pod [name] shows scheduling decisions, pull status, probe results, and restart reasons. Events at the bottom are chronologically ordered and are the first place to check when a Pod is not behaving as expected
  • Events: Kubernetes emits events for state changes — Pod scheduled, image pulled, container started, probe failed, node not ready. View cluster-wide events with kubectl get events --sort-by=.metadata.creationTimestamp. Events are stored for one hour by default. They reveal the sequence of operations that led to the current state
  • Ephemeral Containers: Temporary containers injected into a running Pod for debugging, especially useful when the Pod's containers lack debugging tools (distroless images). Created with kubectl debug -it [pod-name] --image=busybox --target=[container-name]. Shares the process namespace of the target container, allowing you to inspect processes and the filesystem
  • Common Debugging Workflow: Check Pod status with kubectl get pod, examine events and conditions with kubectl describe pod, review container logs with kubectl logs, verify resource usage with kubectl top pod, and if needed, shell into the container with kubectl exec -it [pod] -- /bin/sh or use ephemeral containers for minimal images
Debugging skills are paramount for the CKAD. Memorize this sequence for any Pod issue: (1) kubectl get pod to check status and restarts, (2) kubectl describe pod to read events and conditions, (3) kubectl logs to view application output, (4) kubectl exec to inspect the running container. The kubectl debug command with ephemeral containers is a powerful tool when the base image has no shell. Practice these commands until they are muscle memory.
05
Pod Design
3 lessons
Labels, Selectors & Annotations

Key Concepts

  • Labels: Key-value pairs attached to Kubernetes objects for identification and grouping. Examples: app: frontend, env: production, tier: web. Labels are used by Services, Deployments, ReplicaSets, and NetworkPolicies to select which Pods to target. Add labels to a running resource with kubectl label pod [name] key=value
  • Equality-Based Selectors: Match labels using =, ==, or != operators. Example: kubectl get pods -l env=production returns all Pods with the label env: production. Multiple requirements are ANDed together: -l env=production,tier=frontend matches both conditions
  • Set-Based Selectors: More expressive than equality-based. Operators include in (env in (production, staging)), notin (env notin (development)), and exists/!exists (kubectl get pods -l 'release' returns Pods that have the release label regardless of value). Used in Deployments with matchExpressions
  • Recommended Labels: The Kubernetes convention defines standard labels: app.kubernetes.io/name (app name), app.kubernetes.io/instance (instance identifier), app.kubernetes.io/version (version), app.kubernetes.io/component (role within architecture), app.kubernetes.io/managed-by (tool managing the resource). These enable consistent tooling integration
  • Annotations: Key-value metadata that is not used for selection but stores arbitrary non-identifying information. Examples: build timestamps, git commit hashes, release notes, configuration hints for ingress controllers (nginx.ingress.kubernetes.io/rewrite-target: /), and tool-specific metadata. Annotations can hold larger values than labels
Labels and selectors are fundamental to Kubernetes and heavily tested on the CKAD. Many exam questions involve creating a Deployment or Service that selects Pods via labels — if the selector does not match the Pod's labels, the resource will not work. Use kubectl get pods --show-labels to verify labels and kubectl label pod [name] key=value --overwrite to modify them. Remember: labels are for Kubernetes to select objects, annotations are for humans and tools to store metadata.
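A sketch of both selector styles in a Deployment (labels and names are illustrative) — the Pod template's labels must satisfy the entire selector or the Deployment is rejected:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend               # equality-based: app == frontend
    matchExpressions:
    - key: env
      operator: In                # set-based: env in (production, staging)
      values: ["production", "staging"]
  template:
    metadata:
      labels:
        app: frontend
        env: production           # satisfies both selector clauses
    spec:
      containers:
      - name: web
        image: nginx
```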
Deployments & Rolling Updates

Key Concepts

  • Deployments: The standard controller for managing stateless applications. A Deployment creates and manages ReplicaSets, which in turn manage Pods. Deployments provide declarative updates, rollback capabilities, and scaling. Create with kubectl create deployment [name] --image=[image] --replicas=3
  • RollingUpdate Strategy: The default strategy that replaces Pods incrementally. maxSurge (how many Pods can exist above the desired count during the update, default 25%) and maxUnavailable (how many Pods can be unavailable during the update, default 25%). This ensures zero-downtime deployments by keeping a minimum number of Pods always available
  • Recreate Strategy: Terminates all existing Pods before creating new ones. Results in downtime but guarantees that the old and new versions never run simultaneously. Use when the application cannot tolerate two versions running at the same time (incompatible database schemas, singleton services)
  • Rollback with kubectl rollout: View rollout status with kubectl rollout status deployment/[name]. See revision history with kubectl rollout history deployment/[name]. Roll back to the previous revision with kubectl rollout undo deployment/[name] or to a specific revision with --to-revision=N. Pause and resume rollouts with kubectl rollout pause/resume
  • Scaling & Updates: Scale with kubectl scale deployment [name] --replicas=5. Update the image with kubectl set image deployment/[name] [container]=[new-image]. Every change to the Pod template creates a new ReplicaSet and triggers a rollout. Use kubectl rollout history with --revision=N to inspect what changed in each revision
Deployments are the most heavily tested resource on the CKAD. Practice creating Deployments imperatively, updating images, checking rollout status, and performing rollbacks. A key exam pattern is to update a Deployment's image to a non-existent version, observe the failing rollout, and then roll back. Remember that the revisionHistoryLimit field (default 10) controls how many old ReplicaSets are retained. Use kubectl rollout history deployment/[name] --revision=2 to see the exact Pod template for a specific revision.
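The strategy fields described above sit under spec.strategy — a minimal sketch with illustrative values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  revisionHistoryLimit: 10        # default; old ReplicaSets kept for rollback
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%               # extra Pods allowed above replicas during an update
      maxUnavailable: 25%         # Pods that may be unavailable during an update
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25         # image tag is illustrative
```

Changing the image here (or via kubectl set image) creates a new ReplicaSet and triggers a rollout.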
Jobs & CronJobs

Key Concepts

  • Jobs: Run one or more Pods to completion. The Job controller ensures the specified number of successful completions. Key fields: completions (total successful runs needed, default 1), parallelism (how many Pods run concurrently, default 1), backoffLimit (maximum retries before marking the Job as failed, default 6). The restart policy must be Never or OnFailure
  • activeDeadlineSeconds: A hard timeout for the entire Job. If the Job runs longer than this duration, all running Pods are terminated and the Job is marked as failed with reason DeadlineExceeded. This prevents runaway Jobs from consuming cluster resources indefinitely
  • CronJobs: Create Jobs on a recurring schedule using standard cron syntax. The schedule field uses five fields: minute (0-59), hour (0-23), day of month (1-31), month (1-12), day of week (0-6). Example: "0 2 * * *" runs daily at 2 AM. CronJobs manage Job creation and cleanup based on successfulJobsHistoryLimit (default 3) and failedJobsHistoryLimit (default 1)
  • CronJob Concurrency Policy: concurrencyPolicy: Allow (default, multiple Jobs can run simultaneously), Forbid (skip the new Job if the previous one is still running), Replace (cancel the running Job and start a new one). Choose based on whether overlapping executions are safe for your workload
  • Job Patterns: Work queue: set parallelism to N with completions unset for Pods that pull from a shared queue. Indexed Jobs: each Pod gets a unique index (0 to completions-1) via the JOB_COMPLETION_INDEX env var. Use ttlSecondsAfterFinished to automatically clean up completed Jobs and their Pods
Jobs and CronJobs are frequently tested on the CKAD. Create Jobs quickly with kubectl create job [name] --image=[image] -- [command] and CronJobs with kubectl create cronjob [name] --image=[image] --schedule="*/5 * * * *" -- [command]. A common pitfall is forgetting to set restartPolicy: Never or OnFailure — Jobs cannot use the default Always policy. Verify CronJob schedules by checking the next scheduled run time in kubectl get cronjob.
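A sketch tying those fields together in a CronJob (the name, command, and timings are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup            # hypothetical name
spec:
  schedule: "0 2 * * *"           # daily at 02:00
  concurrencyPolicy: Forbid       # skip a run if the previous Job is still active
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      backoffLimit: 6             # retries before the Job is marked failed
      activeDeadlineSeconds: 3600 # hard timeout for the whole Job
      template:
        spec:
          restartPolicy: OnFailure   # Never or OnFailure; Always is invalid for Jobs
          containers:
          - name: backup
            image: busybox
            command: ["sh", "-c", "echo backing up"]
```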
06
Services & Networking
3 lessons
Services

Key Concepts

  • ClusterIP (Default): Exposes the Service on an internal cluster IP. Only reachable from within the cluster. Provides stable DNS ([service-name].[namespace].svc.cluster.local) and load balancing across matching Pods. Create with kubectl expose deployment [name] --port=80 --target-port=8080
  • NodePort: Extends ClusterIP by opening a static port (30000-32767) on every node in the cluster. External traffic can reach the Service via [node-ip]:[node-port]. Useful for development and on-premises environments where a cloud load balancer is not available
  • LoadBalancer: Extends NodePort by provisioning a cloud provider load balancer (AWS ELB, GCP LB, Azure LB) that routes external traffic to the NodePort. The external IP is assigned by the cloud provider. This is the standard way to expose production services to the Internet in cloud environments
  • ExternalName & Headless Services: ExternalName maps a Service to an external DNS name (e.g., externalName: my-database.example.com) without proxying. Headless Services (clusterIP: None) do not allocate a cluster IP; instead, DNS returns the Pod IPs directly. Used by StatefulSets for stable DNS per Pod
  • Endpoints: A Service automatically creates an Endpoints object that lists the IP addresses of all Pods matching its selector. View with kubectl get endpoints [service-name]. If a Service has no selector, you can manually create an Endpoints object to point to external IPs or services outside the cluster
Services are a core CKAD topic. The fastest way to create a Service on the exam is kubectl expose with the correct port and target-port. Remember that port is what the Service listens on and targetPort is what the container listens on — they can be different. If a Service is not reaching its Pods, verify that the selector labels match the Pod labels using kubectl describe svc [name] and check the Endpoints list. An empty Endpoints list means no Pods match the selector.
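The port vs. targetPort distinction looks like this in a manifest (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: ClusterIP          # default; NodePort and LoadBalancer build on it
  selector:
    app: web               # must match the target Pods' labels exactly
  ports:
  - port: 80               # what the Service listens on
    targetPort: 8080       # what the container listens on
```

If the selector matches nothing, the Endpoints object stays empty and traffic goes nowhere — the first thing to check when a Service appears broken.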
Ingress

Key Concepts

  • Ingress Controllers: An Ingress resource alone does nothing — it requires an Ingress controller (NGINX, Traefik, HAProxy, AWS ALB) to be installed in the cluster. The controller watches for Ingress objects and configures the underlying load balancer/proxy accordingly. NGINX Ingress Controller is the most common in CKAD exam environments
  • Ingress Rules: Define routing based on hostname and path. A single Ingress can route app.example.com/api to the api-service and app.example.com/web to the web-service. The defaultBackend handles requests that do not match any rule. Each rule specifies a host, path, pathType, and the backend service name and port
  • Path Types: Prefix matches URL paths based on a prefix split by / (e.g., /api matches /api, /api/, and /api/users). Exact matches the exact URL path only. ImplementationSpecific defers to the ingress controller's own matching logic. Always specify pathType as it is required in networking.k8s.io/v1
  • TLS Termination: Ingress can terminate TLS/SSL using a Kubernetes Secret containing the certificate and private key. Configure via the tls section with the hostname and the Secret name. The Secret must be of type kubernetes.io/tls with tls.crt and tls.key data fields. Traffic between the Ingress controller and backend Services is typically unencrypted (within the cluster)
  • Annotations for Controller-Specific Behavior: Annotations customize Ingress controller behavior. Common NGINX examples: nginx.ingress.kubernetes.io/rewrite-target: / (rewrite the URL path), nginx.ingress.kubernetes.io/ssl-redirect: "true" (force HTTPS), nginx.ingress.kubernetes.io/proxy-body-size: "10m" (max request body). Annotations vary by controller
Ingress questions on the CKAD typically require you to write Ingress YAML from scratch or modify existing rules. Memorize the basic structure: apiVersion: networking.k8s.io/v1, kind: Ingress, with rules containing host, http.paths with path, pathType, and backend.service.name/port.number. Use kubectl explain ingress.spec.rules during the exam to check field names. A frequent mistake is putting the port name instead of the port number in the backend.
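A sketch of that structure, routing two paths on one host to different backends (host, service names, and the rewrite annotation are illustrative):

```yaml
# Hypothetical Ingress: app.example.com/api -> api-service, /web -> web-service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /   # NGINX-specific behavior
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix        # required in networking.k8s.io/v1
            backend:
              service:
                name: api-service
                port:
                  number: 80        # port number, not the port name
          - path: /web
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80
```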
Network Policies

Key Concepts

  • Default Behavior: By default, all Pods in a Kubernetes cluster can communicate with all other Pods without restriction. Network Policies act as Pod-level firewalls, restricting ingress (incoming) and egress (outgoing) traffic. They require a CNI plugin that supports NetworkPolicy enforcement (Calico, Cilium, Weave Net; Flannel does not)
  • podSelector: Defines which Pods the NetworkPolicy applies to. An empty selector (podSelector: {}) selects all Pods in the namespace. Example: podSelector: { matchLabels: { app: db } } applies the policy only to Pods with the app: db label. Once a Pod is selected by any NetworkPolicy, all traffic not explicitly allowed is denied
  • Ingress Rules: Define which sources can send traffic to the selected Pods. Sources are specified using from with combinations of podSelector (Pods in the same namespace), namespaceSelector (Pods in specific namespaces), and ipBlock (CIDR ranges for external traffic). You can also restrict to specific ports with the ports field
  • Egress Rules: Define which destinations the selected Pods can send traffic to. Specified using to with the same selectors as ingress plus ports. Common patterns include allowing DNS egress (TCP/UDP port 53 to kube-system namespace) and restricting database Pods to only communicate with specific application Pods
  • Default Deny Policies: A best practice is to create default deny policies for each namespace. Default deny ingress: podSelector: {} with policyTypes: ["Ingress"] and no ingress rules. Default deny egress: same with policyTypes: ["Egress"]. Then create specific policies to allow only necessary traffic, following the principle of least privilege
Network Policies are a critical CKAD exam topic. The most important thing to understand is that policies are additive — if multiple policies select the same Pod, the union of all their rules applies. A common exam task is to create a default deny policy and then allow specific traffic. Pay attention to whether the question asks for ingress, egress, or both. Remember that a namespaceSelector and podSelector in the same from/to entry act as an AND condition; in separate entries they act as OR.
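A sketch of the default-deny-plus-allow pattern described above (labels and the port are illustrative):

```yaml
# Hypothetical default deny: selects all Pods, lists no ingress rules,
# so all incoming traffic to Pods in this namespace is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}          # all Pods in the namespace
  policyTypes:
    - Ingress
---
# Then allow only app=web Pods to reach app=db Pods on TCP 5432.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:      # same from entry: only this selector applies
            matchLabels:
              app: web
      ports:
        - protocol: TCP
          port: 5432
```

Because policies are additive, the allow policy punches a hole through the default deny for exactly that traffic and nothing else.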
07
State Persistence
2 lessons
Volumes

Key Concepts

  • emptyDir: A temporary volume created when a Pod is assigned to a node and deleted when the Pod is removed. All containers in the Pod can read and write to it. Use for scratch space, caches, and sharing data between containers (e.g., main container writes logs, sidecar reads them). Specify medium: Memory for a tmpfs-backed volume stored in RAM
  • hostPath: Mounts a file or directory from the host node's filesystem into the Pod. Types include Directory, File, DirectoryOrCreate, and FileOrCreate. Useful for accessing node-level resources (Docker socket, system logs) but creates tight coupling to the node. Avoid in production; not suitable for multi-node scheduling
  • PersistentVolume (PV) & PersistentVolumeClaim (PVC): PVs are cluster-scoped storage resources provisioned by admins or dynamically via StorageClasses. PVCs are namespace-scoped requests for storage. A PVC binds to a PV that satisfies its size and access mode requirements. This abstraction decouples storage provisioning from consumption
  • StorageClasses: Define storage tiers with different performance characteristics and provisioning backends (AWS EBS, GCE PD, NFS, Ceph). When a PVC references a StorageClass, the cluster dynamically provisions a PV. The reclaimPolicy controls what happens when the PVC is deleted: Retain (keep the PV), Delete (remove the PV and underlying storage), Recycle (deprecated)
  • Access Modes: ReadWriteOnce (RWO, mounted read-write by a single node), ReadOnlyMany (ROX, mounted read-only by many nodes), ReadWriteMany (RWX, mounted read-write by many nodes). Not all storage backends support all modes — AWS EBS only supports RWO, while NFS supports all three. The access mode must match between the PV and PVC for binding
Volume questions on the CKAD typically involve creating a PVC, binding it to a Pod, and verifying that data persists across Pod restarts. Practice the workflow: create a PVC with the desired storage size and access mode, reference it in the Pod's volumes section, and mount it via volumeMounts. If dynamic provisioning is available, you only need a PVC — the PV is created automatically. Use kubectl get pv,pvc to verify binding status. A PVC stuck in Pending means no PV satisfies its requirements.
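The PVC-to-Pod workflow can be sketched as follows (names, image, and size are illustrative; with dynamic provisioning no PV manifest is needed):

```yaml
# Hypothetical PVC requesting 1Gi of RWO storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # storageClassName: standard   # uncomment to request a specific class
---
# Pod that mounts the claim at /data.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data            # must match the volume name below
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc     # binds the Pod to the PVC
```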
StatefulSets

Key Concepts

  • Stable Pod Identity: Unlike Deployments where Pods get random names, StatefulSet Pods have stable, predictable names based on the StatefulSet name and an ordinal index (e.g., mysql-0, mysql-1, mysql-2). Each Pod maintains its identity across rescheduling. This is critical for databases, distributed caches, and clustered applications that require stable network identities
  • Ordered Deployment & Scaling: Pods are created sequentially from index 0 to N-1, and each must be Running and Ready before the next is created. Scaling down proceeds in reverse order (highest index first). This ordered guarantee ensures primaries start before replicas and data replication is established before additional members join
  • volumeClaimTemplates: Define PVC templates that create a unique PersistentVolumeClaim for each Pod in the StatefulSet. The PVC name follows the pattern [template-name]-[statefulset-name]-[ordinal]. When a Pod is rescheduled to a different node, it reattaches to the same PVC, preserving its data. PVCs are not automatically deleted when the StatefulSet is scaled down
  • Headless Services for Stable DNS: A StatefulSet requires a headless Service (clusterIP: None) to provide DNS entries for each Pod. Each Pod gets a DNS record: [pod-name].[service-name].[namespace].svc.cluster.local. This allows applications to discover and connect to specific Pods by name, which is essential for leader election, replication, and cluster formation in distributed systems
  • Update Strategies: RollingUpdate (default) updates Pods in reverse ordinal order. The partition parameter enables canary-style updates by only updating Pods with an ordinal greater than or equal to the partition value. OnDelete requires manual deletion of Pods to trigger the update, giving full control over the rollout process
StatefulSets are tested on the CKAD in the context of stateful application deployment. The key exam tasks include creating a StatefulSet with a headless Service and volumeClaimTemplates. Remember the three guarantees: stable unique network identity, stable persistent storage, and ordered graceful deployment and scaling. If you need to access a specific Pod in a StatefulSet, use its DNS name (pod-0.service-name.namespace.svc.cluster.local) rather than the Service's cluster IP, which does not exist for headless Services.
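A sketch combining the pieces above — headless Service, serviceName reference, and volumeClaimTemplates (names, image, and sizes are illustrative):

```yaml
# Hypothetical headless Service required by the StatefulSet.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None            # headless: DNS returns Pod IPs directly
  selector:
    app: db
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # must reference the headless Service
  replicas: 3                # Pods db-0, db-1, db-2, created in order
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # one PVC per Pod: data-db-0, data-db-1, ...
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Each Pod is then reachable at a stable DNS name such as db-0.db.default.svc.cluster.local.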
08
Helm & Application Deployment
2 lessons
Helm Basics

Key Concepts

  • Charts, Releases & Repositories: A Helm chart is a package of pre-configured Kubernetes resources (Deployments, Services, ConfigMaps) with templated YAML. A release is a specific instance of a chart installed in the cluster with a given name and set of values. Chart repositories (Artifact Hub, Bitnami) host versioned charts for discovery and sharing
  • values.yaml: The default configuration file for a chart. Users override values at install time with --set key=value or -f custom-values.yaml. Templates reference values using Go template syntax: {{ .Values.replicaCount }}. This enables a single chart to deploy across development, staging, and production with different configurations
  • helm install & upgrade: Install a chart with helm install [release-name] [chart] --namespace [ns]. Upgrade with helm upgrade [release-name] [chart] to apply new values or chart versions. Add --install to the upgrade command to create the release if it does not exist. Use --dry-run and --debug to preview rendered templates before applying
  • helm rollback: Roll back to a previous revision with helm rollback [release-name] [revision]. View revision history with helm history [release-name]. Helm stores release metadata (including all previous revisions) as Secrets in the release's namespace, enabling reliable rollback without requiring the original chart or values
  • Chart Structure: A chart directory contains Chart.yaml (metadata: name, version, appVersion), values.yaml (defaults), templates/ (Kubernetes manifest templates), charts/ (dependencies), and optionally templates/NOTES.txt (post-install instructions). Use helm template to render templates locally without installing
Helm was added to the CKAD curriculum to test application deployment skills. On the exam, you may need to install a chart, override values, upgrade a release, or roll back to a previous version. Practice the core commands: helm repo add, helm search repo, helm install, helm upgrade, helm rollback, helm list, and helm uninstall. Use helm show values [chart] to inspect available configuration options before installing.
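As a sketch of the values-override workflow, a hypothetical custom-values.yaml (the keys shown are common chart conventions, not guaranteed for any particular chart — check with helm show values first):

```yaml
# custom-values.yaml — hypothetical overrides for a chart's defaults.
replicaCount: 3
image:
  tag: "1.2.0"
service:
  type: ClusterIP

# Typical usage (shell), with illustrative release/chart names:
#   helm upgrade --install my-release bitnami/nginx -f custom-values.yaml
#   helm history my-release
#   helm rollback my-release 1   # revert to revision 1 if needed
```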
Advanced Deployment Patterns

Key Concepts

  • Blue-Green Deployment: Run two identical environments (blue = current, green = new version). After testing the green environment, switch traffic by updating the Service selector to point to green Pods. Rollback is instant — switch the selector back to blue. This requires double the resources during the transition but provides zero-downtime deployments with full rollback capability
  • Canary Deployment: Route a small percentage of traffic to the new version (canary) while the majority continues to use the stable version. In Kubernetes, this can be achieved by running two Deployments whose Pods share a common label matched by the Service selector, with different replica counts (e.g., 9 stable + 1 canary = 10% canary traffic). Gradually increase the canary replicas as confidence grows
  • kubectl apply vs create: kubectl create is imperative — it creates a resource and fails if it already exists. kubectl apply is declarative — it creates the resource if it does not exist or updates it if it does, using a three-way merge (last-applied annotation, live state, new config). Use apply for production workflows and GitOps; use create for quick one-off tasks on the exam
  • Kustomize Basics: A built-in Kubernetes configuration management tool (no Helm dependency) that uses overlays to customize base YAML manifests without templating. Define a kustomization.yaml with resources (base manifests), namePrefix, commonLabels, patches, and configMapGenerator. Apply with kubectl apply -k ./ or kubectl kustomize ./ to preview
  • Deployment Best Practices: Always define resource requests and limits, configure health probes, use podDisruptionBudget to protect availability during voluntary disruptions (node drains), set terminationGracePeriodSeconds appropriately for clean shutdown, and use preStop lifecycle hooks to drain connections before the container is terminated
The CKAD tests your ability to implement different deployment strategies. For blue-green, understand that you maintain two separate Deployments and switch the Service selector. For canary, know how to run two Deployments with a shared label that the Service selects. Kustomize is available in the exam environment via kubectl apply -k — it is useful for applying common labels or name prefixes to multiple resources. Practice the difference between kubectl apply and kubectl create, as the exam may require either approach depending on the task.
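The canary pattern can be sketched as one Service selecting a label shared by two Deployments (all names, labels, and image tags are illustrative):

```yaml
# Hypothetical canary: replica counts set the split (~10% to the canary).
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                 # matches both stable and canary Pods
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: web
      track: stable
  template:
    metadata:
      labels:
        app: web             # shared label the Service selects
        track: stable
    spec:
      containers:
        - name: web
          image: myapp:1.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
      track: canary
  template:
    metadata:
      labels:
        app: web
        track: canary
    spec:
      containers:
        - name: web
          image: myapp:2.0   # the new version under test
```

For blue-green, the same Service would instead select a version-specific label (e.g., track: blue), and the cutover is a single edit of that selector.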

Ready to test your knowledge?

Challenge yourself with 60 CKAD practice questions — scenario-based, hands-on style, covering all 5 exam domains. Free.
