Master every domain of the CNCF Certified Kubernetes Administrator exam. This course covers the full Kubernetes architecture, installing and configuring clusters with kubeadm, managing workloads and scheduling, services and networking, persistent storage, role-based access control, security contexts, and systematic troubleshooting — with real kubectl commands, YAML examples, and exam-aligned explanations throughout.
**Cluster architecture.** On a kubeadm cluster, the control-plane components (kube-apiserver, etcd, kube-scheduler, kube-controller-manager) run as static Pods in the kube-system namespace — you can inspect them with `kubectl get pods -n kube-system`. Their manifests live in `/etc/kubernetes/manifests/`.

**Inspecting nodes and the kubelet.**
- `kubectl describe node <node-name>`: detailed node information
- `systemctl status kubelet`: check whether the kubelet service is running
- `/var/lib/kubelet/config.yaml`: the kubelet's configuration file
- `journalctl -u kubelet -n 50`: recent kubelet logs

When a node misbehaves, check `systemctl status kubelet` and `journalctl -u kubelet` first.

**kubectl essentials.** The general syntax is `kubectl [command] [TYPE] [NAME] [flags]`.
- `kubectl get pods`: list Pods in the current namespace
- `kubectl get pods -A`: list Pods across all namespaces
- `kubectl describe pod <name>`: detailed info including Events
- `kubectl delete pod <name> --force --grace-period=0`: immediate deletion
- `kubectl get pod <name> -o yaml`: output the full resource spec as YAML
- `kubectl explain pod.spec.containers`: inline API documentation
- `kubectl run nginx --image=nginx --restart=Never`: create a Pod
- `kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml`: generate YAML without creating the resource
- `kubectl create deployment app --image=nginx --replicas=3`: create a Deployment
- `kubectl expose deployment app --port=80 --type=ClusterIP`: create a Service
- `kubectl set image deployment/app nginx=nginx:1.25`: update a container image

Every cluster starts with four namespaces: `default`, `kube-system`, `kube-public`, and `kube-node-lease`.

Exam tip: use `--dry-run=client -o yaml` to generate resource templates quickly instead of writing YAML from scratch.
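To make the `--dry-run=client` tip concrete, this is roughly what `kubectl run nginx --image=nginx --dry-run=client -o yaml` emits (exact fields and a few null/empty placeholders vary by kubectl version); edit the file, then `kubectl apply -f pod.yaml`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

Adding `--restart=Never` to the same command sets `restartPolicy: Never` instead, which is usually what you want for one-off Pods.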
This saves enormous time.

**Bootstrapping a cluster with kubeadm.**
1. `kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=<IP>`: initialize the control plane
2. `mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config`: set up kubectl access
3. `kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml`: install a CNI plugin (Calico here)
4. `kubeadm token create --print-join-command`: print the full join command for workers
5. `kubeadm join <apiserver-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>`: run on each worker node
6. If the token has expired, generate a new one with `kubeadm token create`
7. `kubectl get nodes`: verify all nodes have joined

**Backing up and restoring etcd.**
- Save a snapshot: `ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key`
- Verify it: `ETCDCTL_API=3 etcdctl snapshot status /backup/etcd-snapshot.db`
- Restore: `ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db --data-dir=/var/lib/etcd-restore`, then update the etcd static Pod manifest to point to the new data directory

Exam tip: memorize the `etcdctl snapshot save` command with all three certificate flags. The certs are always in `/etc/kubernetes/pki/etcd/`. And always set `ETCDCTL_API=3` before running `etcdctl` commands; without it, etcdctl defaults to the v2 API and the commands will be different or fail entirely.

**Upgrading a cluster.** On the control-plane node:
1. `apt-mark unhold kubeadm && apt-get install -y kubeadm=1.29.0-00 && apt-mark hold kubeadm`
2. `kubeadm upgrade plan`: review available upgrade targets
3. `kubeadm upgrade apply v1.29.0`

Then, for each node:
1. `kubectl drain <node> --ignore-daemonsets`
2. `apt-get install -y kubelet=1.29.0-00 kubectl=1.29.0-00`, then `systemctl daemon-reload && systemctl restart kubelet`
3. `kubectl uncordon <node>`

**Kubeconfig and contexts.** kubectl reads `~/.kube/config` by default; this can be overridden with the `KUBECONFIG` environment variable.
- `kubectl config get-contexts`: list available contexts
- `kubectl config use-context <context-name>`: switch contexts
- `kubectl config current-context`: show the active context
- `kubectl config set-context --current --namespace=dev`: change the default namespace for the current context
- `KUBECONFIG=~/.kube/config:~/.kube/prod-config kubectl config view --merge --flatten > ~/.kube/merged-config`: merge multiple kubeconfig files into one

Exam tip: verify `kubectl config current-context` before running commands.
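After restoring a snapshot to `/var/lib/etcd-restore`, the etcd static Pod manifest must point at the new directory. One common approach is sketched below, assuming the default kubeadm layout of `/etc/kubernetes/manifests/etcd.yaml` (abridged; an alternative is to leave the container paths alone and repoint only the hostPath volume at the restored data):

```yaml
# /etc/kubernetes/manifests/etcd.yaml (abridged sketch)
spec:
  containers:
  - command:
    - etcd
    - --data-dir=/var/lib/etcd-restore   # was /var/lib/etcd
    volumeMounts:
    - mountPath: /var/lib/etcd-restore   # keep consistent with --data-dir
      name: etcd-data
  volumes:
  - hostPath:
      path: /var/lib/etcd-restore        # was /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
```

Because the kubelet watches `/etc/kubernetes/manifests/`, saving the edited file automatically restarts the etcd Pod with the restored data.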
Switching to the wrong context is a common exam mistake.

**RBAC.**
- `kubectl create role pod-reader --verb=get,list,watch --resource=pods -n dev`: create a Role
- `kubectl create rolebinding dev-binding --role=pod-reader --user=jane -n dev`: bind it to a user
- `kubectl auth can-i get pods --as=jane -n dev`: test a specific permission
- `kubectl auth can-i --list --as=jane -n dev`: list everything a user can do

**Certificates.**
- `openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout`: inspect a certificate
- `kubeadm certs check-expiration`: check expiry of all cluster certificates
- `kubeadm certs renew all`: renew them

To issue a client certificate for a new user: `openssl genrsa -out jane.key 2048`, then `openssl req -new -key jane.key -subj "/CN=jane/O=dev-team" -out jane.csr`; create a CertificateSigningRequest object with the base64-encoded CSR, and approve it with `kubectl certificate approve <csr-name>`.

Exam tip: use `kubectl auth can-i` to verify your RBAC rules work before submitting your exam answer. A quick test like `kubectl auth can-i create deployments --as=system:serviceaccount:dev:mysa -n dev` confirms permissions instantly.

**Deployments and rollouts.** Rolling updates are tuned with `maxSurge` and `maxUnavailable`.
- `kubectl set image deployment/webapp nginx=nginx:1.25 --record`: update the image
- `kubectl rollout status deployment/webapp`: watch the rollout
- `kubectl rollout history deployment/webapp`: list revisions
- `kubectl rollout undo deployment/webapp`: roll back to the previous revision
- `kubectl rollout undo deployment/webapp --to-revision=2`: roll back to a specific revision
- `kubectl scale deployment webapp --replicas=5`: scale manually
- `kubectl autoscale deployment webapp --min=2 --max=10 --cpu-percent=70`: create a HorizontalPodAutoscaler
- `kubectl rollout pause deployment/webapp` / `kubectl rollout resume deployment/webapp`: pause and resume a rollout

**Other workload types.**
- StatefulSets: stable Pod names (`pod-0`, `pod-1`) and persistent volume claims per Pod; ordered deployment and scaling
- Jobs: use `completions` and `parallelism` to control batch execution
- CronJobs: `schedule: "*/5 * * * *"` runs every 5 minutes

**Resource management.** Set per-container requests and limits, e.g. `resources: {requests: {cpu: "250m", memory: "128Mi"}, limits: {cpu: "500m", memory: "256Mi"}}`. Apply a LimitRange with `kubectl create -f limitrange.yaml` and inspect it with `kubectl describe limitrange -n dev`; inspect quotas with `kubectl describe resourcequota -n dev`.

**Scheduling.**
- `kubectl taint nodes node1 key=value:NoSchedule`: add a taint
- `kubectl taint nodes node1 key=value:NoSchedule-`: remove it (append a minus)
- `kubectl label nodes node1 disktype=ssd`: label a node for nodeSelector or affinity
- Set `priorityClassName` to control Pod priority; `system-cluster-critical` and `system-node-critical` are reserved for core Kubernetes components

**Services.** A NodePort Service is reachable at `<NodeIP>:<NodePort>`. A headless Service sets `clusterIP: None`; it returns Pod IPs directly from DNS and is used with StatefulSets for direct Pod addressing.

**DNS.** CoreDNS runs in `kube-system`; every Pod gets its DNS configured to use the CoreDNS ClusterIP.
- Full Service FQDN: `<service-name>.<namespace>.svc.cluster.local`
- Within the same namespace, the short `<service-name>` resolves correctly
- Pod DNS: `<pod-ip-dashes>.<namespace>.pod.cluster.local`
- Quick test: `kubectl run tmp --image=busybox --restart=Never -- nslookup kubernetes.default`
- CoreDNS configuration: `kubectl get configmap coredns -n kube-system -o yaml`

If DNS resolution fails in a Pod, check that CoreDNS Pods are running and that the Pod's `/etc/resolv.conf` points to the CoreDNS ClusterIP. For a Service that doesn't respond, run `kubectl get endpoints <service-name>`: an empty Endpoints list confirms a selector mismatch.

**Ingress.** Ingress routes by host (`app.example.com` to one service, `api.example.com` to another) and by path (`/api` to backend-service, `/` to frontend-service).
- `kubectl create ingress my-ingress --rule="app.example.com/=webapp:80" --rule="app.example.com/api=api-svc:8080"`
- For TLS, reference a Secret containing `tls.crt` and `tls.key` in the Ingress spec under `tls:`
- `kubectl describe ingress my-ingress` shows the routing rules and backend health
- Set `ingressClassName` or the `kubernetes.io/ingress.class` annotation to tell Kubernetes which Ingress Controller should handle the resource. Without it, no controller may pick it up.

**NetworkPolicy.** Policies select traffic with `podSelector`, `namespaceSelector`, and `ipBlock`. An empty `ingress: []` denies all inbound traffic to the selected Pods. To allow traffic to `app=backend` Pods only from `app=frontend` Pods: set `podSelector: {matchLabels: {app: backend}}` and ingress `from: [{podSelector: {matchLabels: {app: frontend}}}]`. Remember that cluster DNS is served by the `kube-dns` Service in the `kube-system` namespace. To test connectivity under a policy: `kubectl exec -it <pod> -- nc -zv <service> <port>`.

**Persistent storage.** A PVC binds to a PV by matching `storageClassName` or label selectors.
- Create the claim: `kubectl create -f pvc.yaml`; check binding with `kubectl get pvc` (status should show Bound)
- Use it in a Pod: reference the claim in the `volumes` section, then add a `volumeMounts` entry in the container spec
- A PVC stuck in the Pending state means no suitable PV was found. Check: (1) Does a PV exist with matching access mode and sufficient size? (2) Is the StorageClass name correct? (3) Does the provisioner have enough capacity or permissions?

Use `kubectl get pv` and `kubectl get pvc` to see the CAPACITY, ACCESS MODES, RECLAIM POLICY, and STATUS columns.
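The PV/PVC binding rules can be sketched as a matching pair (names, paths, and sizes here are illustrative, not from the course):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-data
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-data
spec:
  accessModes:
  - ReadWriteOnce          # must match the PV; an RWX request would stay Pending
  storageClassName: manual # must match the PV's storageClassName
  resources:
    requests:
      storage: 500Mi       # must not exceed the PV's capacity
```

After `kubectl apply -f` on both, `kubectl get pvc pvc-data` should show STATUS Bound with pv-data as its volume.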
An RWX claim will never bind to an RWO PV: access mode must match.

**Volume types.**
- emptyDir: scratch space tied to the Pod's lifetime; `emptyDir: {medium: Memory}` is faster but counts against container memory limits
- hostPath: supports the types `Directory`, `File`, `Socket`, `DirectoryOrCreate`, and `FileOrCreate`
- NFS: `nfs: {server: <nfs-server-ip>, path: /exports/data}`

**Pod security.** Enforce Pod Security Standards per namespace with the label `pod-security.kubernetes.io/enforce: restricted`. Key securityContext fields:
- `runAsNonRoot: true`: prevent containers from running as root
- `runAsUser: 1000`: set the UID for the container process
- `readOnlyRootFilesystem: true`: make the container's root filesystem read-only
- `allowPrivilegeEscalation: false`: prevents processes from gaining more privileges than their parent
- `capabilities: {drop: ["ALL"], add: ["NET_BIND_SERVICE"]}`: drop all Linux capabilities and add only what's needed

A pod-level `securityContext` applies to all containers; container-level settings override pod-level ones. Exam tip: memorize `runAsNonRoot`, `runAsUser`, `readOnlyRootFilesystem`, `allowPrivilegeEscalation`, and `capabilities`, and use `kubectl explain pod.spec.securityContext` during the exam.

**ServiceAccounts.** Every Pod runs under a ServiceAccount (by default, the `default` SA in the namespace). The SA token is mounted at `/var/run/secrets/kubernetes.io/serviceaccount/token` and is used by applications to authenticate to the API server.
- `kubectl create serviceaccount my-app-sa -n production`: create a dedicated SA
- Set `automountServiceAccountToken: false` in the Pod or SA spec to disable token mounting
- Request tokens via the TokenRequest API, or create a Secret manually with `type: kubernetes.io/service-account-token`
- Never bind `cluster-admin` to application ServiceAccounts
- `kubectl get rolebindings,clusterrolebindings -A -o wide`: audit existing bindings
- `kubectl auth can-i --list --as=system:serviceaccount:<ns>:<sa>`: audit effective permissions
- Assign a Pod its SA with `serviceAccountName` in the Pod spec

**Secrets.** TLS Secrets hold `tls.crt` and `tls.key`.
- `kubectl create secret generic db-creds --from-literal=password=s3cr3t`
- `kubectl create secret tls my-tls --cert=cert.pem --key=key.pem`
- Consume a Secret as an env var with `valueFrom: {secretKeyRef: {name: db-creds, key: password}}`
- For a private registry, create a `kubernetes.io/dockerconfigjson` Secret, then reference it with `imagePullSecrets` in the Pod spec
- `kubectl patch sa default -p '{"imagePullSecrets": [{"name": "registry-creds"}]}'`: all Pods using that SA will auto-use the pull secret
- Avoid mutable tags (like `:latest`) in production; `:latest` with `imagePullPolicy: Always` re-pulls on every restart

Encrypt Secrets at rest by creating an EncryptionConfiguration object and referencing it in the kube-apiserver manifest with `--encryption-provider-config`.

**Troubleshooting nodes.**
- `kubectl get nodes`: look for NotReady or Unknown status
- `kubectl describe node <node-name>`: check the Conditions section for MemoryPressure, DiskPressure, NetworkUnavailable
- `systemctl status kubelet`: is it running? Any error messages?
- `journalctl -u kubelet -n 100 --no-pager`: look for certificate errors, container runtime failures, or configuration issues
- `systemctl status containerd` or `systemctl status cri-o`: check the container runtime
- `df -h`: a full disk stops kubelet from running new Pods

Node maintenance:
- `kubectl cordon <node>`: marks the node unschedulable; existing Pods continue running
- `kubectl drain <node> --ignore-daemonsets --delete-emptydir-data`: evicts all Pods and cordons the node; use for maintenance
- `kubectl uncordon <node>`: marks the node schedulable again after maintenance

If the kubelet is simply stopped, fix it with `systemctl start kubelet && systemctl enable kubelet`.

**Troubleshooting Pods.**
- Pending: check `kubectl describe pod` for insufficient resources, taint/toleration mismatch, PVC not bound, or an image pull issue at scheduling
- CrashLoopBackOff: check `kubectl logs <pod> --previous`
- Stuck terminating: `kubectl delete pod <pod> --force --grace-period=0`
- `kubectl describe pod <pod>`: full details, including events, conditions, container statuses, volume mounts
- `kubectl get events --sort-by=.lastTimestamp -n <namespace>`: chronological event stream
- `kubectl logs <pod> -c <container>`: logs from a specific container in a multi-container Pod
- `kubectl logs <pod> --previous`: logs from the previous (crashed) container instance
- `kubectl get pod <pod> -o jsonpath='{.status.containerStatuses[0].state}'`: precise state in JSON

Exam tip: for CrashLoopBackOff, run `kubectl logs --previous` first; the crashed container's last logs usually contain the root cause (missing env var, misconfigured mount, application startup error).

**Troubleshooting networking.**
- `kubectl exec -it <pod> -- /bin/sh`: open a shell in a Pod
- `kubectl exec <pod> -- nc -zv <target-service> <port>`: test TCP connectivity
- `kubectl exec <pod> -- nslookup <service-name>.<namespace>.svc.cluster.local`: test DNS
- `kubectl run debug --image=nicolaka/netshoot --rm -it --restart=Never -- /bin/bash`: netshoot includes curl, nslookup, netstat, tcpdump, and more
- `kubectl get endpoints <service-name>`: empty means no matching Pods
- `kubectl get configmap kube-proxy -n kube-system -o yaml`: inspect kube-proxy configuration
- `kubectl get pods -n kube-system | grep coredns`: verify CoreDNS is running
- `kubectl get ds kube-proxy -n kube-system`: verify kube-proxy runs on every node

Exam tip: use `kubectl run debug --image=nicolaka/netshoot --rm -it --restart=Never -- bash` to get an instant debug environment.

**Troubleshooting the control plane.**
- etcd health: `ETCDCTL_API=3 etcdctl endpoint health --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key`
- Member list: `ETCDCTL_API=3 etcdctl member list --endpoints=... --cacert=... --cert=... --key=...`
- If the API server is down, all `kubectl` commands will fail
- `cat /etc/kubernetes/manifests/etcd.yaml`: verify data directory and cert paths are correct
- Static Pod manifests live in `/etc/kubernetes/manifests/`; the kubelet monitors these files and restarts Pods on changes
- `kubectl logs kube-apiserver-<node> -n kube-system`, or `crictl logs <container-id>` if kubectl is unavailable
- crictl basics: `crictl ps` (list containers), `crictl logs <id>` (container logs), `crictl inspect <id>`
- Enable auditing with the `--audit-log-path` and `--audit-policy-file` flags in the kube-apiserver manifest

Exam tip: when `kubectl` doesn't work, use `crictl` directly on the node. It talks to the container runtime and can inspect running containers, pull logs, and check container health without going through the API server. Essential for recovering a broken control plane. Check the static Pod manifests in `/etc/kubernetes/manifests/` and compare against a known-good reference.

Challenge yourself with 60 Certified Kubernetes Administrator practice questions: scenario-based, hands-on style, and free.
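As a closing example, the securityContext fields covered earlier can be combined in one Pod spec. This is a sketch with illustrative names; note that pod-level settings apply to all containers while `readOnlyRootFilesystem`, `allowPrivilegeEscalation`, and `capabilities` are container-level fields. A stock image may need writable volumes added before it actually starts under these settings.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:          # pod-level: applies to all containers
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - name: app
    image: nginx:1.25
    securityContext:        # container-level: overrides pod-level
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
        add: ["NET_BIND_SERVICE"]
```

Verify field placement during the exam with `kubectl explain pod.spec.securityContext` and `kubectl explain pod.spec.containers.securityContext`.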