CompTIA

CompTIA Linux+ (XK0-005)

Master every domain of the CompTIA Linux+ XK0-005 exam. This course covers Linux foundations and the boot process, package management across distro families, user and permission management, storage and LVM, networking with modern tools, security hardening with SELinux and AppArmor, bash scripting, containers, and real-world troubleshooting workflows — with precise command syntax and exam-aligned explanations throughout.

Advanced · 7 modules · ~35 hours · 60 practice questions

Course Modules

01
Linux Foundations & Boot Process
5 lessons · ~4 hours
Linux Distributions & Architecture

Kernel, Shell, Userspace & FHS

  • The Linux kernel manages hardware resources: CPU scheduling, memory management, device drivers, and system calls
  • The shell (bash, zsh, sh) is the user-facing interpreter that parses commands and communicates with the kernel via system calls
  • Userspace contains all processes running outside the kernel — daemons, applications, and libraries (glibc)
  • The Filesystem Hierarchy Standard (FHS) defines the directory tree: /bin (essential binaries), /sbin (system binaries), /etc (config files), /var (variable data), /usr (user programs), /tmp (temporary), /proc (kernel/process virtual FS), /sys (device/driver info)
  • Major distro families: Red Hat (RHEL, CentOS, Fedora, Rocky, AlmaLinux) — RPM/DNF; Debian (Ubuntu, Mint) — DEB/APT; SUSE (openSUSE, SLES) — RPM/Zypper; Arch — Pacman
The Linux+ exam tests cross-distro knowledge. Understand both RPM-based and DEB-based package managers. RHEL derivatives dominate enterprise environments, so RPM/DNF commands get heavier exam weight.
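The distro-family-to-package-manager mapping above can be sketched as a small shell helper. `pkg_mgr_for` is a hypothetical function, not a standard tool; on a real system you would feed it the `ID` field from /etc/os-release.

```shell
#!/bin/bash
# Hypothetical helper: map a distro ID (the ID= field in /etc/os-release)
# to its family's native package manager.
pkg_mgr_for() {
  case "$1" in
    rhel|centos|fedora|rocky|almalinux) echo dnf ;;      # Red Hat family: RPM
    debian|ubuntu|linuxmint)            echo apt ;;      # Debian family: DEB
    opensuse*|sles)                     echo zypper ;;   # SUSE family: RPM
    arch)                               echo pacman ;;   # Arch
    *)                                  echo unknown ;;
  esac
}

# Real usage would source os-release first:
#   . /etc/os-release && pkg_mgr_for "$ID"
pkg_mgr_for rocky    # → dnf
pkg_mgr_for ubuntu   # → apt
```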
BIOS/UEFI Boot Process & GRUB2

Boot Sequence

  • BIOS/UEFI performs POST (Power-On Self-Test), initializes hardware, then loads the bootloader from MBR (BIOS) or EFI partition (UEFI)
  • GRUB2 (GRand Unified Bootloader v2) is the standard Linux bootloader — loads the kernel and initramfs into memory
  • GRUB2 config location: /boot/grub2/grub.cfg (RHEL/Fedora) or /boot/grub/grub.cfg (Debian/Ubuntu)
  • Never edit grub.cfg directly — edit /etc/default/grub and regenerate with grub2-mkconfig -o /boot/grub2/grub.cfg
  • Key /etc/default/grub parameters: GRUB_TIMEOUT (menu delay), GRUB_CMDLINE_LINUX (kernel parameters), GRUB_DEFAULT (default entry)

GRUB2 Rescue Mode

  • If GRUB fails to find its modules, you drop to grub rescue> prompt
  • grub rescue> ls — list detected partitions (e.g., (hd0,gpt1), (hd0,gpt2))
  • grub rescue> ls (hd0,gpt2)/ — check for /boot/grub2/ directory
  • grub rescue> set root=(hd0,gpt2) — set the root partition
  • grub rescue> set prefix=(hd0,gpt2)/boot/grub2 — point to GRUB modules
  • grub rescue> insmod normal then grub rescue> normal — load normal GRUB mode
The exam tests the grub.cfg location — it is /boot/grub2/grub.cfg on RHEL/CentOS systems. Always regenerate with grub2-mkconfig after editing /etc/default/grub. Editing grub.cfg directly is wrong — changes are overwritten on update.
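The safe edit workflow can be sketched as follows. `set_grub_timeout` is a hypothetical helper that rewrites one key in a defaults file; the point is that only /etc/default/grub is touched, never grub.cfg.

```shell
#!/bin/bash
# Hypothetical helper: change GRUB_TIMEOUT in a GRUB defaults file.
set_grub_timeout() {  # $1 = path to the defaults file, $2 = seconds
  sed -i "s/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=$2/" "$1"
}

# Real usage (as root on RHEL), followed by the mandatory regeneration step:
#   set_grub_timeout /etc/default/grub 10
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```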
Systemd Boot Targets

Targets Replace SysV Runlevels

  • poweroff.target — runlevel 0, system halt
  • rescue.target — runlevel 1 / single-user mode; minimal services, root shell for recovery
  • multi-user.target — runlevel 3; full multi-user, no GUI; standard for servers
  • graphical.target — runlevel 5; multi-user with desktop environment
  • reboot.target — runlevel 6, system restart
  • systemctl get-default — view current default target
  • systemctl set-default multi-user.target — change default target persistently
  • systemctl isolate rescue.target — switch to rescue mode immediately (non-persistent)
To boot into a specific target temporarily, append systemd.unit=rescue.target to the kernel command line in GRUB. This is the standard recovery technique for forgotten root passwords alongside rd.break.
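The runlevel-to-target table above can be expressed as a lookup function (illustrative only; systemd itself ships runlevelN.target symlinks encoding the same mapping):

```shell
#!/bin/bash
# Runlevel-to-systemd-target mapping from the list above.
target_for_runlevel() {
  case "$1" in
    0) echo poweroff.target ;;
    1) echo rescue.target ;;
    3) echo multi-user.target ;;
    5) echo graphical.target ;;
    6) echo reboot.target ;;
    *) echo unknown ;;
  esac
}

target_for_runlevel 3   # → multi-user.target
```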
initramfs & Early Userspace

Purpose of initramfs

  • The initramfs (initial RAM filesystem) is a compressed cpio archive loaded into memory at boot before the real root filesystem is mounted
  • It provides the minimal tools needed to: load kernel modules for storage controllers, set up LVM/RAID, unlock encrypted volumes, and mount the real root filesystem
  • Lives at /boot/initramfs-$(uname -r).img (RHEL) or /boot/initrd.img-$(uname -r) (Debian)

Rebuilding initramfs

  • RHEL/Fedora: dracut --force /boot/initramfs-$(uname -r).img $(uname -r)
  • Debian/Ubuntu: update-initramfs -u -k $(uname -r)
  • Necessary after: adding kernel modules to initramfs, changing storage drivers, modifying /etc/crypttab
  • lsinitrd /boot/initramfs-$(uname -r).img | less — inspect initramfs contents (RHEL)
If a system fails to boot after adding an encrypted disk or new storage controller, rebuilding the initramfs with dracut --force (RHEL) or update-initramfs -u (Debian) is often the fix. The exam tests which tool to use per distro family.
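The per-family choice can be sketched as a chooser that prints the rebuild command for review before running it. `initramfs_rebuild_cmd` is a hypothetical name, not a real tool.

```shell
#!/bin/bash
# Hypothetical chooser: print the initramfs rebuild command per distro family.
initramfs_rebuild_cmd() {  # $1 = family (rhel|debian), $2 = kernel version
  case "$1" in
    rhel)   echo "dracut --force /boot/initramfs-$2.img $2" ;;
    debian) echo "update-initramfs -u -k $2" ;;
  esac
}

# Real usage (as root):
#   eval "$(initramfs_rebuild_cmd rhel "$(uname -r)")"
initramfs_rebuild_cmd debian 6.1.0-18-amd64   # → update-initramfs -u -k 6.1.0-18-amd64
```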
Kernel Modules

Module Management Commands

  • lsmod — list currently loaded kernel modules and their dependencies
  • modinfo MODULE — display module metadata: description, author, parameters, filename
  • modprobe MODULE — load a module and its dependencies automatically
  • modprobe -r MODULE — remove (unload) a module and unused dependencies
  • rmmod MODULE — remove a module directly (does not handle dependencies)
  • insmod /path/to/module.ko — insert a module by file path (no dependency resolution)

Persistent Module Configuration

  • /etc/modprobe.d/ — directory for module configuration files (e.g., aliases, options, blacklisting)
  • Blacklist a module: create /etc/modprobe.d/blacklist-MODULE.conf with blacklist MODULE
  • Set module options: options MODULE param=value in a conf file under /etc/modprobe.d/
  • Modules to load at boot: list names in /etc/modules-load.d/*.conf files
Use modprobe over insmod in almost all cases — modprobe resolves dependencies automatically. insmod requires the full path and won't load required dependencies first.
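Persistent blacklisting boils down to dropping a one-line file. `blacklist_module` below is a hypothetical helper; the second argument exists only so the sketch can be exercised against a scratch directory instead of /etc/modprobe.d.

```shell
#!/bin/bash
# Hypothetical helper: create a blacklist file for a kernel module.
blacklist_module() {  # $1 = module name, $2 = config dir (default /etc/modprobe.d)
  local dir="${2:-/etc/modprobe.d}"
  printf 'blacklist %s\n' "$1" > "$dir/blacklist-$1.conf"
}

# Real usage (as root): blacklist_module pcspkr
# The blacklist takes effect at next boot (or after an initramfs rebuild
# if the module is loaded from early userspace).
```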
02
Package Management & Software
5 lessons · ~4 hours
RPM-Based Package Management

rpm Command Essentials

  • rpm -ivh package.rpm — install a package (-i) with verbose output (-v) and progress bar (-h)
  • rpm -Uvh package.rpm — upgrade a package (installs if not present)
  • rpm -e PACKAGENAME — erase (remove) an installed package
  • rpm -qa — query all installed packages; combine with grep to search
  • rpm -qi PACKAGENAME — detailed info about an installed package
  • rpm -ql PACKAGENAME — list files owned by an installed package
  • rpm -qf /path/to/file — which package owns a given file
  • rpm -V PACKAGENAME — verify package integrity (checks checksums, permissions, ownership)
  • rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release — import a GPG signing key
rpm -V output codes: S=file size changed, M=mode changed, 5=MD5 checksum mismatch, U=user ownership changed. A dot (.) means no change. This is a frequently tested command on the Linux+ exam.
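The verify codes above can be memorized with a small decoder table. Two extra real codes (G = group ownership, T = mtime) are included beyond the tip's list:

```shell
#!/bin/bash
# Decoder for rpm -V output codes (illustrative drill helper).
rpm_v_code() {
  case "$1" in
    S) echo "file size changed" ;;
    M) echo "mode (permissions/type) changed" ;;
    5) echo "digest/checksum mismatch" ;;
    U) echo "user ownership changed" ;;
    G) echo "group ownership changed" ;;
    T) echo "modification time changed" ;;
    .) echo "no change" ;;
    *) echo "unknown code" ;;
  esac
}

rpm_v_code 5   # → digest/checksum mismatch
```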
DNF / YUM Package Manager

Common DNF Commands

  • dnf install PACKAGE — install a package and resolve dependencies
  • dnf remove PACKAGE — remove a package
  • dnf update — update all packages to latest available versions
  • dnf update PACKAGE — update a specific package
  • dnf search KEYWORD — search for packages by name or description
  • dnf info PACKAGE — show detailed package metadata
  • dnf provides /path/to/file — find which package provides a file or command
  • dnf history — show transaction history; dnf history undo N reverses transaction N
  • dnf group install "Development Tools" — install a package group
  • dnf repolist — list enabled repositories

Repository Configuration

  • Repo files live in /etc/yum.repos.d/ with .repo extension
  • Key fields: [repo-id], name, baseurl or mirrorlist, enabled=1, gpgcheck=1, gpgkey=
  • dnf config-manager --add-repo URL — add a new repository
  • dnf config-manager --enable REPO_ID / --disable REPO_ID — toggle repos
Know the difference: dnf upgrade refreshes metadata and installs newer package versions; dnf update is a deprecated alias for it. dnf check-update lists available updates without installing them. Also: yum is the legacy name — on modern RHEL 8+ systems it is a symlink to dnf.
DEB-Based Package Management

dpkg Command Essentials

  • dpkg -i package.deb — install a .deb package file
  • dpkg -r PACKAGENAME — remove a package (keeps config files)
  • dpkg -P PACKAGENAME — purge a package (removes config files too)
  • dpkg -l — list all installed packages with status codes
  • dpkg -L PACKAGENAME — list files installed by a package
  • dpkg -S /path/to/file — which package owns a given file
  • dpkg --get-selections | grep PACKAGENAME — check package installation status

APT Package Manager

  • apt update — refresh the local package index (downloads metadata from repos)
  • apt upgrade — install available package updates
  • apt install PACKAGE — install a package with dependencies
  • apt remove PACKAGE — remove package, keep config; apt purge PACKAGE removes config too
  • apt autoremove — remove packages that were installed as dependencies but are no longer needed
  • apt search KEYWORD — search packages; apt-cache search KEYWORD (older syntax)
  • apt show PACKAGE — show package details
  • Repo sources: /etc/apt/sources.list and /etc/apt/sources.list.d/*.list
apt update only refreshes the local package cache — it does NOT install any updates. apt upgrade installs the available updates. This two-step pattern is a classic exam question. Always run apt update before apt install on a freshly started system.
Zypper (SUSE) Package Manager

Core Zypper Commands

  • zypper install PACKAGE (or zypper in PACKAGE) — install a package
  • zypper remove PACKAGE (or zypper rm PACKAGE) — remove a package
  • zypper update (or zypper up) — update installed packages
  • zypper search KEYWORD (or zypper se KEYWORD) — search for packages
  • zypper info PACKAGE — display detailed package information
  • zypper repos (or zypper lr) — list configured repositories
  • zypper addrepo URL ALIAS — add a new repository
  • zypper refresh (or zypper ref) — refresh repository metadata
Zypper is SUSE/openSUSE-specific. The Linux+ exam may include one or two Zypper questions. Focus on the short-form aliases (in, rm, up, se, lr) as they appear in practical scenarios.
Compiling from Source

The Classic Build Workflow

  • Download source tarball: wget https://example.com/app-1.0.tar.gz then tar -xzf app-1.0.tar.gz
  • ./configure — checks for required build dependencies, sets compile options, generates Makefile
  • ./configure --prefix=/opt/app — install to a custom directory (the default prefix is /usr/local)
  • make — compiles the source code using the generated Makefile
  • make install — installs compiled binaries to the prefix directory
  • make uninstall — remove installed files (if the Makefile supports it)

Build Dependencies

  • Common required packages: gcc, make, autoconf, automake, libtool, kernel-devel
  • On RHEL: dnf groupinstall "Development Tools" installs the full toolchain
  • On Debian: apt install build-essential
  • Missing header errors during ./configure mean a -devel / -dev package is not installed
If ./configure fails with "missing library", install the corresponding -devel package (RHEL) or -dev package (Debian). The configure script reports the exact missing dependency in its error output.
03
User, Group & Permission Management
6 lessons · ~5 hours
User & Group Administration

User Account Commands

  • useradd USERNAME — create a new user; add -m to create home dir, -s /bin/bash to set shell, -u UID for specific UID
  • usermod -aG GROUP USERNAME — add user to a supplementary group (-a is critical — appends instead of replacing)
  • usermod -s /sbin/nologin USERNAME — disable login shell for a service account
  • userdel USERNAME — delete a user; userdel -r USERNAME also removes home directory and mail spool
  • id USERNAME — display UID, GID, and all supplementary groups for a user

Group Commands & Key Files

  • groupadd GROUPNAME — create a new group; groupmod -n NEWNAME OLDNAME — rename; groupdel GROUPNAME — delete
  • newgrp GROUPNAME — switch active primary group in the current session without logging out
  • /etc/passwd — format: username:x:UID:GID:comment:home:shell
  • /etc/shadow — format: username:hashed_password:last_change:min:max:warn:inactive:expire
  • /etc/group — format: groupname:x:GID:member1,member2
The -a flag with usermod -G is critical. Running usermod -G GROUP USER without -a REPLACES all supplementary groups with only the specified group — this is a classic misconfiguration that locks users out of shared resources.
Password Policies & PAM

passwd and chage

  • passwd USERNAME — set or change a user's password; passwd -l USERNAME locks, passwd -u USERNAME unlocks
  • chage -M 90 USERNAME — set maximum password age to 90 days
  • chage -m 7 USERNAME — set minimum days before password can be changed
  • chage -W 14 USERNAME — warn user 14 days before password expiration
  • chage -E 2026-12-31 USERNAME — set account expiration date
  • chage -l USERNAME — list all aging information for a user

PAM & Password Complexity

  • PAM (Pluggable Authentication Modules) controls authentication via /etc/pam.d/ config files
  • pam_pwquality module enforces password complexity (length, uppercase, digits, special characters)
  • Configure in /etc/security/pwquality.conf: minlen=12, ucredit=-1, dcredit=-1, ocredit=-1
  • /etc/login.defs — system-wide defaults: PASS_MAX_DAYS, PASS_MIN_DAYS, PASS_WARN_AGE, UID_MIN, UID_MAX
PAM modules are evaluated in order within each service file. The four control flags are: required (must pass, continues), requisite (must pass, stops on fail), sufficient (if pass, no further required checks), optional (result ignored unless only module).
Standard Linux Permissions

chmod, chown, chgrp

  • Permission bits: owner (u), group (g), others (o); each has read (r=4), write (w=2), execute (x=1)
  • chmod 755 file — numeric: owner rwx (7), group r-x (5), others r-x (5)
  • chmod u+x,g-w file — symbolic: add execute to owner, remove write from group
  • chmod -R 750 /dir — recursive permission change
  • chown USER:GROUP file — change owner and group; chown USER file — change owner only
  • chgrp GROUP file — change group ownership only

umask

  • umask defines default permissions by masking bits from 666 (files) and 777 (directories)
  • Default umask 022: files get 644 (rw-r--r--), directories get 755 (rwxr-xr-x)
  • umask 027: files get 640, directories get 750 — more restrictive, suitable for shared servers
  • Set persistently in /etc/bashrc or ~/.bashrc
For umask calculation: subtract the umask from the base permissions. File base = 666, directory base = 777. umask 027 on a file: 666 - 027 = 640 (rw-r-----). This subtraction method is what the exam tests; strictly, the kernel clears the umask bits (base AND NOT umask), which gives the same result for the common umasks the exam uses.
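The exact calculation can be checked in the shell itself: the resulting mode is base AND NOT umask, bitwise, which agrees with the digit-subtraction shortcut for the usual umask values.

```shell
#!/bin/bash
# Exact umask arithmetic: resulting mode = base & ~umask (bitwise octal).
default_mode() {  # $1 = octal base (666 files, 777 dirs), $2 = octal umask
  printf '%03o\n' $(( 0$1 & ~0$2 ))
}

default_mode 666 022   # → 644
default_mode 666 027   # → 640
default_mode 777 027   # → 750
```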
Special Permission Bits

SUID, SGID, and Sticky Bit

  • SUID (Set User ID) — on an executable: the process runs as the file owner, not the invoking user. Example: /usr/bin/passwd runs as root. Set with chmod 4755 file or chmod u+s file
  • SGID (Set Group ID) — on an executable: process runs with the file's group. On a directory: new files/directories inherit the directory's group (not the creator's primary group). Set with chmod 2755 file or chmod g+s dir
  • Sticky Bit — on a directory: only the file owner, directory owner, or root can delete/rename files within it, even if others have write permission. Classic use: /tmp. Set with chmod 1777 dir or chmod +t dir
  • Display: ls -l shows s in place of x for SUID/SGID, t in place of x for sticky bit in others position
SUID on a directory has no standard effect — it is SGID on directories that causes group inheritance. The exam tests SGID on directories specifically as a mechanism for shared project directories where all files should belong to the project group.
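The shared-project-directory pattern can be sketched as below. `make_shared_dir` is a hypothetical helper; the chgrp is allowed to fail so the sketch also runs without root (on a real system, run as root or as a member of the project group).

```shell
#!/bin/bash
# Shared project directory: SGID (the leading 2) makes files created inside
# inherit the directory's group instead of the creator's primary group.
make_shared_dir() {  # $1 = path, $2 = project group
  mkdir -p "$1"
  chgrp "$2" "$1" 2>/dev/null || true   # needs root or membership in $2
  chmod 2770 "$1"                       # rwxrws---: SGID + group read/write
}

# Real usage (as root): make_shared_dir /srv/project devteam
```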
Access Control Lists (ACLs)

setfacl and getfacl

  • getfacl file — display the full ACL for a file or directory
  • setfacl -m u:USERNAME:rwx file — grant a specific user rwx on a file
  • setfacl -m g:GROUPNAME:r-- file — grant a group read-only access
  • setfacl -x u:USERNAME file — remove a user's ACL entry
  • setfacl -b file — remove all ACL entries (except base permissions)
  • setfacl -m mask::r-- file — set the effective rights mask (limits maximum ACL permissions)

Default ACLs for Directories

  • setfacl -d -m u:USERNAME:rwx /dir — set a default ACL (-d): new files/directories created inside inherit this ACL
  • A + sign in ls -l output indicates ACL entries beyond standard permissions are set
  • Filesystem must be mounted with ACL support; ext4 and XFS support ACLs natively; check with tune2fs -l /dev/sdX | grep "Default mount"
The ACL mask acts as a maximum effective permission ceiling for all named users and groups (but NOT the file owner or other). Even if you grant a user rwx via ACL, the mask can reduce the effective permission to just read. Run getfacl to see effective permissions after mask application.
04
Storage & Filesystems
6 lessons · ~5 hours
Partitioning: fdisk, parted & gdisk

Partition Table Types

  • MBR (Master Boot Record) — legacy; max 4 primary partitions or 3 primary + 1 extended (with logical partitions); max disk size 2 TB
  • GPT (GUID Partition Table) — modern; supports up to 128 partitions per disk; required for disks over 2 TB; used with UEFI
  • fdisk /dev/sdX — interactive MBR partition editor; n (new), d (delete), t (change type), w (write), q (quit)
  • gdisk /dev/sdX — interactive GPT partition editor (same command letters as fdisk)
  • parted /dev/sdX — supports both MBR and GPT; non-interactive mode: parted /dev/sdX mklabel gpt
  • lsblk — list block devices and partition layout; blkid — show UUIDs and filesystem types
Use fdisk for MBR disks up to 2 TB. For GPT or disks larger than 2 TB, use gdisk or parted. The exam will present scenarios requiring you to identify the correct partitioning tool based on disk size and UEFI vs BIOS context.
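That selection rule can be written down as a function. `partition_tool` is a hypothetical helper, not a real command; 2048 GiB is the 2 TB MBR ceiling.

```shell
#!/bin/bash
# Hypothetical chooser: pick the partitioning tool from disk size + firmware.
partition_tool() {  # $1 = disk size in GiB, $2 = firmware (bios|uefi)
  if [ "$2" = uefi ] || [ "$1" -gt 2048 ]; then
    echo "gdisk or parted (GPT)"   # GPT required: UEFI boot or disk > 2 TB
  else
    echo "fdisk (MBR)"             # legacy BIOS, disk within 2 TB
  fi
}

partition_tool 500 bios    # → fdisk (MBR)
partition_tool 4096 bios   # → gdisk or parted (GPT)
```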
Filesystem Creation & Mounting

Creating Filesystems

  • mkfs.ext4 /dev/sdX1 — format a partition as ext4; -L LABEL to add a volume label
  • mkfs.xfs /dev/sdX1 — format as XFS (default on RHEL 7+)
  • tune2fs -L NEWLABEL /dev/sdX1 — change ext2/3/4 volume label; tune2fs -c 50 — set max mount count before fsck
  • xfs_admin -L NEWLABEL /dev/sdX1 — change XFS volume label
  • e2fsck -f /dev/sdX1 — check and repair an ext filesystem (must be unmounted)
  • xfs_repair /dev/sdX1 — check and repair an XFS filesystem (must be unmounted)

Mounting & /etc/fstab

  • mount /dev/sdX1 /mnt/data — mount temporarily; mount -o ro /dev/sdX1 /mnt — mount read-only
  • umount /mnt/data — unmount; use lsof /mnt/data or fuser /mnt/data if "device busy"
  • /etc/fstab format: UUID=... /mountpoint fstype options dump pass
  • Use UUIDs (from blkid) in fstab, not device names — device names can change across reboots
  • Common mount options: defaults, noatime, nosuid, noexec, ro
  • mount -a — mount all entries in fstab that aren't already mounted (tests fstab syntax)
XFS filesystems cannot be shrunk — only grown. ext4 can be shrunk (offline only, with e2fsck first). The exam will present a scenario asking about resizing, and the correct answer depends on the filesystem type.
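A UUID-based fstab entry has six fields: device, mountpoint, type, options, dump, fsck pass. The `fstab_line` helper below is illustrative (not a standard tool) and hardcodes dump=0, pass=2 as typical data-volume values.

```shell
#!/bin/bash
# Hypothetical helper: build a UUID-based /etc/fstab line.
fstab_line() {  # $1 = UUID, $2 = mountpoint, $3 = fstype, $4 = options
  printf 'UUID=%s %s %s %s 0 2\n' "$1" "$2" "$3" "${4:-defaults}"
}

# Real usage: append to /etc/fstab, then validate the syntax with mount -a:
#   fstab_line "$(blkid -s UUID -o value /dev/sdb1)" /mnt/data ext4 noatime
fstab_line 1234-abcd /mnt/data ext4
```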
LVM: Logical Volume Manager

LVM Layer Architecture

  • PV (Physical Volume) — raw disks or partitions initialized for LVM use
  • VG (Volume Group) — pool of storage combining one or more PVs
  • LV (Logical Volume) — virtual partitions carved from a VG; flexible and resizable

LVM Commands

  • pvcreate /dev/sdX — initialize a physical volume
  • vgcreate myvg /dev/sdX — create a volume group; vgextend myvg /dev/sdY — add a PV to VG
  • lvcreate -L 20G -n mylv myvg — create a 20G logical volume
  • lvextend -L +10G /dev/myvg/mylv — increase LV size by 10G
  • After lvextend on ext4: resize2fs /dev/myvg/mylv to grow the filesystem
  • After lvextend on XFS: xfs_growfs /mountpoint to grow the filesystem (XFS grow is online)
  • lvreduce -L -5G /dev/myvg/mylv — decrease LV size (ext4 only, must unmount first)
  • pvs / vgs / lvs — brief display of PV/VG/LV information
  • pvdisplay / vgdisplay / lvdisplay — detailed output
  • lvcreate -L 5G -s -n snap /dev/myvg/mylv — create a snapshot of an LV
To shrink an ext4 LV: unmount → e2fsck -f → resize2fs to the new, smaller size → lvreduce. XFS cannot be shrunk at all. The exam frequently tests this order-of-operations for both grow and shrink scenarios.
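The filesystem-specific grow step after lvextend can be captured in a chooser. `grow_fs_cmd` is hypothetical and only prints the command; note that ext4 resizes against the device while XFS resizes against the mountpoint (and only grows).

```shell
#!/bin/bash
# Hypothetical chooser: which grow command follows lvextend, per filesystem.
grow_fs_cmd() {  # $1 = fstype, $2 = LV device path, $3 = mountpoint
  case "$1" in
    ext4) echo "resize2fs $2" ;;    # takes the device
    xfs)  echo "xfs_growfs $3" ;;   # takes the mountpoint, online only
    *)    echo "unsupported" ;;
  esac
}

# Real usage (as root): lvextend -L +10G /dev/myvg/mylv, then:
grow_fs_cmd xfs /dev/myvg/mylv /data   # → xfs_growfs /data
```

In practice, lvextend -r (--resizefs) performs the filesystem resize in the same step, but the exam expects you to know the separate commands.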
RAID with mdadm

RAID Levels

  • RAID 0 (striping) — performance, no redundancy; min 2 disks; if one disk fails, all data is lost
  • RAID 1 (mirroring) — full redundancy, 50% usable capacity; min 2 disks; can lose N-1 disks
  • RAID 5 (distributed parity) — min 3 disks; can lose 1 disk; usable capacity = (N-1) disks
  • RAID 6 (double parity) — min 4 disks; can lose 2 disks; usable = (N-2) disks
  • RAID 10 (stripe of mirrors) — min 4 disks; high performance + redundancy; can lose 1 disk per mirror set

mdadm Commands

  • mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd{b,c,d} — create RAID 5
  • cat /proc/mdstat — view RAID status and rebuild progress
  • mdadm --detail /dev/md0 — detailed RAID array info
  • mdadm --add /dev/md0 /dev/sde — add a hot spare or replacement drive
  • mdadm --fail /dev/md0 /dev/sdb — mark a drive as failed; mdadm --remove /dev/md0 /dev/sdb — remove it
  • /etc/mdadm.conf or /etc/mdadm/mdadm.conf — persist RAID config across reboots
Software RAID with mdadm is flexible and independent of hardware controllers. RAID 5 with 3 disks gives 2 disks of usable space. RAID 6 with 4 disks gives 2 disks of usable space. RAID 10 with 4 disks gives 2 disks of usable space — but RAID 10 is faster and more resilient for databases.
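The usable-capacity arithmetic from the tip above, for equal-size disks, works out as:

```shell
#!/bin/bash
# Usable capacity per RAID level (equal-size disks, sizes in GiB).
raid_usable() {  # $1 = level, $2 = disk count, $3 = per-disk size
  case "$1" in
    0)  echo $(( $2 * $3 )) ;;         # striping: all capacity, no redundancy
    1)  echo "$3" ;;                   # mirroring: one disk's worth
    5)  echo $(( ($2 - 1) * $3 )) ;;   # one disk of parity
    6)  echo $(( ($2 - 2) * $3 )) ;;   # two disks of parity
    10) echo $(( $2 / 2 * $3 )) ;;     # half the disks hold mirrors
  esac
}

raid_usable 5 3 1000    # → 2000
raid_usable 6 4 1000    # → 2000
raid_usable 10 4 1000   # → 2000
```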
Swap Space Management

Creating and Managing Swap

  • mkswap /dev/sdX2 — format a partition as swap space
  • swapon /dev/sdX2 — activate swap; swapoff /dev/sdX2 — deactivate
  • swapon -s (or swapon --show) — list active swap devices with priority and usage
  • Persist in /etc/fstab: UUID=... none swap sw 0 0
  • Swap file: fallocate -l 2G /swapfile → chmod 600 /swapfile → mkswap /swapfile → swapon /swapfile
  • cat /proc/sys/vm/swappiness — view swappiness (default 60); lower values reduce swap aggressiveness
  • sysctl vm.swappiness=10 — set swappiness temporarily; persist in /etc/sysctl.d/99-swap.conf
Disk Encryption with LUKS

cryptsetup & LUKS

  • cryptsetup luksFormat /dev/sdX1 — initialize a LUKS encrypted container (destroys data)
  • cryptsetup luksOpen /dev/sdX1 cryptdata — unlock the container; creates /dev/mapper/cryptdata
  • mkfs.ext4 /dev/mapper/cryptdata — create filesystem on the unlocked device
  • mount /dev/mapper/cryptdata /mnt/secure — mount the encrypted filesystem
  • cryptsetup luksClose cryptdata — lock/close the container
  • cryptsetup luksDump /dev/sdX1 — display LUKS header info (slots used, cipher)
  • /etc/crypttab — maps LUKS devices to mapper names for auto-unlock at boot (with keyfile or passphrase prompt)
After adding a LUKS-encrypted device to /etc/crypttab and /etc/fstab, you must rebuild the initramfs (dracut --force on RHEL) so the initramfs includes the cryptsetup tools needed to unlock the device early in the boot process.
05
Networking
6 lessons · ~5 hours
The ip Command Suite

Replacing Legacy net-tools

  • The ip command (from the iproute2 package) replaces deprecated tools: ifconfig, route, arp, netstat
  • ip addr show (or ip a) — display IP addresses on all interfaces
  • ip addr add 192.168.1.10/24 dev eth0 — assign an IP address (non-persistent)
  • ip addr del 192.168.1.10/24 dev eth0 — remove an IP address
  • ip link show — display network interface state; ip link set eth0 up/down — bring interface up/down
  • ip route show (or ip r) — display the routing table
  • ip route add default via 192.168.1.1 — add a default gateway (non-persistent)
  • ip route add 10.0.0.0/8 via 192.168.1.254 dev eth0 — add a static route
  • ip neigh show — display the ARP/neighbor cache
Changes made with the ip command are not persistent across reboots. For persistent configuration, use NetworkManager (nmcli) or edit interface config files in /etc/NetworkManager/system-connections/.
NetworkManager: nmcli & nmtui

nmcli Commands

  • nmcli connection show — list all configured network connections
  • nmcli connection show --active — list only active connections
  • nmcli connection up CONNECTION_NAME — activate a connection
  • nmcli connection down CONNECTION_NAME — deactivate a connection
  • nmcli connection modify CONNECTION_NAME ipv4.addresses 192.168.1.50/24 — set static IP
  • nmcli connection modify CONNECTION_NAME ipv4.gateway 192.168.1.1 — set gateway
  • nmcli connection modify CONNECTION_NAME ipv4.dns "8.8.8.8 8.8.4.4" — set DNS servers
  • nmcli connection modify CONNECTION_NAME ipv4.method manual — switch from DHCP to static
  • nmcli device wifi list — list available Wi-Fi networks
  • nmtui — text-based interactive UI for NetworkManager; useful when no GUI is available

Config File Location

  • Connection profiles: /etc/NetworkManager/system-connections/ (keyfile format in RHEL 8+)
  • Restart networking: nmcli connection reload then nmcli connection up CONNECTION
DNS & Name Resolution

Key Files & Tools

  • /etc/hosts — static hostname-to-IP mappings; checked before DNS by default
  • /etc/resolv.conf — specifies DNS servers (nameserver 8.8.8.8) and search domains (search example.com)
  • /etc/nsswitch.conf — controls lookup order; the hosts: line (typically files dns) determines whether /etc/hosts is checked before DNS
  • dig DOMAIN — detailed DNS query; dig @8.8.8.8 DOMAIN — query specific server; dig -x IP — reverse lookup
  • nslookup DOMAIN — simple DNS query (older tool); interactive mode: nslookup then server 8.8.8.8
  • host DOMAIN — quick DNS lookup; host IP — reverse DNS
  • systemd-resolve --status — show DNS configuration used by systemd-resolved (on newer systems: resolvectl status)
The order of name resolution is controlled by /etc/nsswitch.conf, not /etc/resolv.conf. If the exam asks how to make /etc/hosts take precedence over DNS, the answer is to check the hosts: line in nsswitch.conf — it must list files before dns.
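The hosts:-line check can be sketched as a function fed the line's value. `hosts_order` is a hypothetical helper for drilling the rule, not a real tool.

```shell
#!/bin/bash
# Hypothetical helper: interpret the hosts: line from /etc/nsswitch.conf.
hosts_order() {  # $1 = value of the hosts: line, e.g. "files dns myhostname"
  case "$1" in
    *files*dns*) echo "/etc/hosts is consulted before DNS" ;;
    *dns*files*) echo "DNS is consulted before /etc/hosts" ;;
    *)           echo "order unclear" ;;
  esac
}

# Real usage would extract the line first, e.g.:
#   hosts_order "$(awk '/^hosts:/ {sub(/^hosts:[ \t]*/, ""); print}' /etc/nsswitch.conf)"
hosts_order "files dns myhostname"   # → /etc/hosts is consulted before DNS
```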
Firewall: iptables, ufw & firewalld

iptables Fundamentals

  • Three main chains: INPUT (packets destined for the local system), OUTPUT (packets originating from local system), FORWARD (routed packets)
  • iptables -L -n -v --line-numbers — list all rules with line numbers and packet counts
  • iptables -A INPUT -p tcp --dport 22 -j ACCEPT — append rule to allow SSH
  • iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT — insert rule at position 1
  • iptables -D INPUT 3 — delete rule by line number
  • iptables -A INPUT -s 10.0.0.5 -j DROP — drop all traffic from a source IP
  • iptables -P INPUT DROP — set default policy to DROP (deny-all baseline)
  • Persist: iptables-save > /etc/iptables/rules.v4; restore: iptables-restore < /etc/iptables/rules.v4

ufw & firewalld

  • ufw enable — enable the UFW firewall (Ubuntu/Debian); ufw allow 22/tcp — allow SSH; ufw deny 23 — block telnet
  • firewalld — zone-based firewall on RHEL/Fedora; firewall-cmd --list-all — show active zone config
  • firewall-cmd --permanent --add-service=http — allow HTTP; --reload applies permanent changes
iptables rules are evaluated top-to-bottom; the first matching rule wins. This means more specific rules must come before general rules. If you have a DROP rule at position 1 and an ACCEPT for SSH at position 2, SSH will be blocked. Use --line-numbers and -I (insert) for targeted rule placement.
SSH Configuration & Key-Based Auth

Generating & Deploying SSH Keys

  • ssh-keygen -t ed25519 -C "comment" — generate an Ed25519 key pair (recommended over RSA for new keys)
  • ssh-keygen -t rsa -b 4096 — generate 4096-bit RSA key pair
  • Private key: ~/.ssh/id_ed25519 (protect with chmod 600); Public key: ~/.ssh/id_ed25519.pub
  • ssh-copy-id user@host — copy public key to remote host's ~/.ssh/authorized_keys
  • ~/.ssh/authorized_keys on the server: must be chmod 600 and owned by the user
  • ~/.ssh/ directory: must be chmod 700 and owned by the user

sshd_config Hardening

  • PasswordAuthentication no — disable password auth (force key-based only)
  • PermitRootLogin no — prevent direct root login via SSH
  • AllowUsers alice bob — whitelist specific users; all others denied
  • Port 2222 — change SSH port (security through obscurity; adjust firewall accordingly)
  • ClientAliveInterval 300 / ClientAliveCountMax 2 — disconnect idle sessions after 10 minutes
  • After editing /etc/ssh/sshd_config: systemctl restart sshd
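Wrong permissions on ~/.ssh are the classic reason key-based auth silently falls back to passwords. `ssh_perms_ok` below is a hypothetical audit helper checking the two modes listed above (700 on the directory, 600 on authorized_keys).

```shell
#!/bin/bash
# Hypothetical audit helper: verify the modes required for key-based auth.
ssh_perms_ok() {  # $1 = path to the user's .ssh directory
  [ "$(stat -c %a "$1")" = 700 ] &&
  [ "$(stat -c %a "$1/authorized_keys")" = 600 ]
}

# Usage: ssh_perms_ok ~/.ssh && echo OK || echo "fix permissions"
```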
Network Troubleshooting

Essential Diagnostic Commands

  • ping -c 4 HOST — test basic connectivity; ping -I eth0 HOST — send from specific interface
  • traceroute HOST — show path packets take (hop-by-hop); tracepath HOST — similar but no root required
  • ss -tulnp — list listening TCP (t) and UDP (u) sockets with process names; replaces netstat -tulnp
  • tcpdump -i eth0 port 80 — capture HTTP traffic; tcpdump -i eth0 -w capture.pcap — write to file
  • tcpdump -i eth0 host 10.0.0.5 and tcp — filter by host and protocol
  • curl -I https://example.com — fetch HTTP headers only; curl -v URL — verbose output showing TLS handshake
  • wget -O /dev/null URL — test download speed; wget --spider URL — check URL without downloading
  • nc -zv HOST PORT — test if a TCP port is open (netcat); nc -l 8080 — listen on port 8080
ss -tulnp is the modern replacement for netstat -tulnp. The flags: -t TCP, -u UDP, -l listening only, -n show numbers not names, -p show process info. The Linux+ exam may test either command.
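When nc is not installed, bash itself can test a TCP port: /dev/tcp/HOST/PORT is a bash redirection feature (not a real file) that attempts a connection. A minimal sketch:

```shell
#!/bin/bash
# Bash-only port check via the /dev/tcp redirection feature.
port_open() {  # $1 = host, $2 = port; exit 0 only if the port accepts
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Usage: port_open db01 5432 && echo "reachable" || echo "closed/filtered"
```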
06
Security & Hardening
6 lessons · ~5 hours
SELinux

SELinux Modes & Status

  • Enforcing — SELinux policy is enforced; violations are blocked and logged
  • Permissive — violations are logged but NOT blocked; useful for troubleshooting and policy development
  • Disabled — SELinux is completely turned off; requires reboot + relabeling to re-enable
  • getenforce — display current mode (Enforcing/Permissive/Disabled)
  • setenforce 0 — switch to Permissive (temporary, survives only until reboot)
  • setenforce 1 — switch to Enforcing (temporary)
  • Persistent mode: edit /etc/selinux/config → set SELINUX=enforcing / permissive / disabled

File Context Management

  • ls -Z /var/www/html/ — show SELinux file context labels
  • restorecon -Rv /var/www/html/ — restore default SELinux contexts recursively (fixes "wrong context" denials)
  • chcon -t httpd_sys_content_t /new/file — change file context temporarily (overridden by restorecon)
  • semanage fcontext -a -t httpd_sys_content_t "/newpath(/.*)?" — add a persistent context rule
  • restorecon -Rv /newpath — apply the newly added context rule
  • getsebool -a | grep httpd — list all SELinux booleans related to httpd
  • setsebool -P httpd_can_network_connect on — enable a boolean persistently (-P)

Analyzing Denials

  • audit2why < /var/log/audit/audit.log — explain why actions were denied
  • audit2allow -M mypolicy < /var/log/audit/audit.log — generate a custom allow policy module from denials
  • semodule -i mypolicy.pp — install a custom SELinux policy module
After using chcon to change a file context, the change is temporary. A subsequent restorecon will reset it back to the default policy label. The correct permanent approach is semanage fcontext followed by restorecon. The exam tests this two-step workflow.
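The permanent two-step can be drilled as a pair of generated commands. `persist_context` is a hypothetical helper that prints, rather than runs, the commands so they can be reviewed; the path regex follows the "/path(/.*)?" convention used by semanage.

```shell
#!/bin/bash
# Hypothetical helper: print the persistent SELinux context workflow.
persist_context() {  # $1 = SELinux type, $2 = base path
  echo "semanage fcontext -a -t $1 \"$2(/.*)?\""   # step 1: record the rule
  echo "restorecon -Rv $2"                          # step 2: apply it on disk
}

persist_context httpd_sys_content_t /srv/web
```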
AppArmor

AppArmor Overview & Status

  • AppArmor is the MAC (Mandatory Access Control) framework used on Debian/Ubuntu systems (vs SELinux on RHEL)
  • aa-status — show AppArmor status: profiles loaded, enforcement mode per profile
  • Profile modes: enforce (blocks violations), complain (logs but does not block — like SELinux permissive)
  • aa-enforce /etc/apparmor.d/usr.sbin.nginx — put a profile into enforce mode
  • aa-complain /etc/apparmor.d/usr.sbin.nginx — put into complain (logging) mode
  • Profile files live in: /etc/apparmor.d/
  • apparmor_parser -r /etc/apparmor.d/PROFILE — reload a profile after editing
  • aa-genprof /path/to/binary — generate a new profile interactively by watching program behavior
AppArmor profiles are path-based (they restrict what files a program can access by pathname). SELinux is label-based (it uses extended attribute labels on files and processes). The Linux+ exam may ask which system uses which approach.
GPG Encryption & Signing

GPG Key Management

  • gpg --gen-key — generate a new GPG key pair interactively
  • gpg --list-keys — list all keys in the public keyring
  • gpg --export -a "User Name" > public.key — export public key to ASCII-armored file
  • gpg --import public.key — import a public key from a file
  • gpg --keyserver keyserver.ubuntu.com --recv-keys KEY_ID — download a key from a keyserver

Encryption, Signing & Verification

  • gpg --encrypt -r "Recipient" file — encrypt file for recipient (produces file.gpg)
  • gpg --decrypt file.gpg > file — decrypt a file
  • gpg --sign file — create a signed version of a file (embedded signature)
  • gpg --detach-sign file — create a separate file.sig signature file
  • gpg --verify file.sig file — verify a detached signature
  • RPM uses GPG to sign packages — rpm -K package.rpm verifies the package signature
System Hardening

PAM-Based Controls

  • /etc/pam.d/ — service-specific PAM configuration; system-auth and password-auth are key files on RHEL
  • pam_pwquality.so — enforces password complexity rules configured in /etc/security/pwquality.conf
  • pam_faillock.so — PAM module that locks accounts after N failed login attempts; check with the faillock command: faillock --user USERNAME; reset with faillock --user USERNAME --reset
  • /etc/security/limits.conf — set per-user/group resource limits: nofile (open files), nproc (processes), memlock (locked memory)
  • ulimit -n — show current shell's open file descriptor limit; ulimit -n 65536 — set for current session
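The limits.conf entries above follow a four-column format. A sketch with hypothetical users and values:

```text
# /etc/security/limits.conf — illustrative entries (user, group, and values are made up)
# <domain>    <type>  <item>    <value>
alice         soft    nofile    4096     # soft limit: the user may raise it up to the hard limit
alice         hard    nofile    65536    # hard limit: ceiling enforced by the kernel
@developers   soft    nproc     2048     # @group applies the limit to all group members
*             hard    memlock   64       # * matches any user not listed explicitly
```

Changes take effect at the next login session (applied via pam_limits.so), not retroactively.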

sudoers Configuration

  • Always edit sudoers with visudo — validates syntax before saving, preventing lockouts
  • Format: USER HOST=(RUNAS) COMMANDS — e.g., alice ALL=(ALL) ALL
  • Group syntax: %admins ALL=(ALL) ALL
  • NOPASSWD: bob ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart nginx
  • Restrict to specific commands: carol ALL=(root) /usr/bin/dnf install, /usr/bin/dnf remove
  • Include drop-in files: /etc/sudoers.d/ — add separate files here rather than editing /etc/sudoers directly
Log Security & Auditing

Key Log Files

  • /var/log/auth.log (Debian) or /var/log/secure (RHEL) — authentication events: logins, sudo usage, SSH attempts
  • /var/log/messages (RHEL) or /var/log/syslog (Debian) — general system messages
  • journalctl -u sshd --since "1 hour ago" — filter systemd journal by unit and time
  • journalctl _COMM=sshd — all journal entries from the sshd process
  • journalctl -p err — show only error-level messages
  • lastb — list bad (failed) login attempts from /var/log/btmp
  • last — list successful logins from /var/log/wtmp
  • who / w — currently logged-in users

auditd

  • auditd — the Linux audit daemon; writes security events to /var/log/audit/audit.log
  • auditctl -w /etc/passwd -p wa -k passwd_changes — watch /etc/passwd for write and attribute changes
  • ausearch -k passwd_changes — search audit log by key
  • ausearch -ua USERNAME — search audit events by user
  • aureport --summary — summary of audit events by category
  • Persistent rules: /etc/audit/rules.d/audit.rules
audit2allow generates allow rules from denial messages but always review the output — it may create overly permissive rules. Use the minimum necessary permissions and apply the principle of least privilege. The exam may test whether you know audit2why (explains denials) vs audit2allow (generates allow policy).
07
Scripting, Containers & Troubleshooting
5 lessons · ~7 hours
Bash Scripting Fundamentals

Script Structure & Variables

  • Always start with a shebang: #!/bin/bash (or #!/usr/bin/env bash for portability)
  • Make executable: chmod +x script.sh; run with ./script.sh or bash script.sh
  • Variables: NAME="Alice" (no spaces around =); reference with $NAME or ${NAME}
  • Command substitution: DATE=$(date +%Y-%m-%d) — stores command output in variable
  • Special variables: $? (exit code of last command), $# (number of arguments), $@ (all arguments as separate strings), $0 (script name), $1–$9 (positional arguments; ${10} and beyond require braces)

Control Flow

  • If/elif/else: if [ "$VAR" = "value" ]; then ... elif [ condition ]; then ... else ... fi
  • For loop: for FILE in /etc/*.conf; do echo "$FILE"; done
  • While loop: COUNT=0; while [ "$COUNT" -lt 10 ]; do ((COUNT++)); done — initialize the counter before testing it
  • Until loop: until ping -c1 HOST >/dev/null 2>&1; do sleep 5; done
  • Functions: function check_service() { systemctl is-active "$1" || return 1; }

test Operators

  • File tests: -f (regular file), -d (directory), -e (exists), -r (readable), -w (writable), -x (executable), -s (non-empty)
  • String tests: -z (zero length / empty), -n (non-zero length / not empty), = (equal), != (not equal)
  • Integer comparison: -eq, -ne, -lt, -le, -gt, -ge
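The pieces above combine into complete scripts. A minimal runnable sketch using variables, command substitution, test operators, a for loop, and a function (the directory and file names are made up for the demo):

```shell
#!/usr/bin/env bash
# Illustrative script: count readable .conf files in a directory.

TODAY=$(date +%Y-%m-%d)              # command substitution

count_readable() {                   # function: count readable regular *.conf files in $1
    local n=0 f
    for f in "$1"/*.conf; do
        if [ -f "$f" ] && [ -r "$f" ]; then   # file tests: regular file AND readable
            n=$((n + 1))
        fi
    done
    echo "$n"
}

DIR=$(mktemp -d)                     # demo directory with two sample .conf files
touch "$DIR/a.conf" "$DIR/b.conf" "$DIR/notes.txt"

COUNT=$(count_readable "$DIR")
if [ "$COUNT" -gt 0 ]; then          # integer comparison with -gt
    echo "$TODAY: $COUNT readable .conf files in $DIR"
else
    echo "$TODAY: no readable .conf files in $DIR" >&2
fi
rm -rf "$DIR"                        # clean up the demo directory
```

Note the quoting around "$f" and "$DIR" — unquoted expansions break on paths containing spaces.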
$? must be checked immediately after the command it refers to — the very next command overwrites it. A common pattern: command; RC=$?; if [ $RC -ne 0 ]; then .... Also: set -e at the top of a script causes it to exit immediately on any non-zero return code.
Text Processing Toolkit

Core Text Processing Commands

  • grep -E "pattern" file — extended regex search; -i case-insensitive; -r recursive; -v invert match; -l filenames only; -c count matches
  • grep -P "\d{3}-\d{4}" file — Perl-compatible regex for complex patterns
  • sed 's/old/new/g' file — substitute all occurrences; -i flag edits file in-place; sed -n '5,10p' file — print lines 5–10
  • awk '{print $1, $3}' file — print fields 1 and 3; awk -F: '{print $1}' /etc/passwd — use colon as delimiter
  • awk '$3 > 1000 {print $1}' /etc/passwd — conditional: print username if UID > 1000
  • cut -d: -f1,3 /etc/passwd — cut fields 1 and 3 from colon-delimited file
  • sort -k3 -n file — sort numerically by field 3; sort -r reverse order; sort -u unique lines
  • uniq -c — count duplicate consecutive lines; always pipe through sort first
  • tr 'a-z' 'A-Z' — translate lowercase to uppercase; tr -d '\r' — remove carriage returns
  • wc -l file — count lines; wc -w words; wc -c bytes
  • head -n 20 file — first 20 lines; tail -n 20 file — last 20 lines; tail -f /var/log/syslog — follow a file live
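These tools are designed to chain together with pipes. A runnable sketch on inline sample data (the accounts are made up, mimicking /etc/passwd format):

```shell
# Sample /etc/passwd-style data written to a scratch file
printf '%s\n' \
  'root:x:0:0:root:/root:/bin/bash' \
  'alice:x:1001:1001::/home/alice:/bin/bash' \
  'daemon:x:1:1::/usr/sbin:/usr/sbin/nologin' \
  'bob:x:1002:1002::/home/bob:/bin/bash' > users.txt

# Regular users (UID >= 1000) with a login shell, sorted alphabetically
USERS=$(awk -F: '$3 >= 1000 && $7 !~ /nologin/ {print $1}' users.txt | sort)
echo "$USERS"
# prints:
# alice
# bob

rm users.txt    # clean up the scratch file
```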
Combining text processing tools with pipes is a core Linux skill. A common exam pattern: awk -F: '$3 >= 1000 {print $1}' /etc/passwd | sort — list all regular users sorted alphabetically (awk reads the file directly; no cat needed). Know how to chain grep | awk | sort | uniq pipelines.
Containers: Docker & Podman

Core Container Commands

  • docker run -d -p 8080:80 --name webserver nginx — run nginx in background, map port 8080→80
  • docker ps — list running containers; docker ps -a — all containers including stopped
  • docker images — list local images
  • docker pull IMAGE:TAG — pull an image from registry
  • docker exec -it CONTAINER bash — interactive shell in a running container
  • docker logs CONTAINER — view container logs; docker logs -f CONTAINER — follow
  • docker stop CONTAINER — gracefully stop; docker rm CONTAINER — remove stopped container
  • docker rmi IMAGE — remove an image
  • docker run -v /host/path:/container/path IMAGE — bind mount a host directory into a container
  • podman — drop-in Docker replacement; rootless and daemonless by design; commands are identical to Docker in most cases

Writing a Dockerfile

  • FROM ubuntu:22.04 — base image
  • RUN apt-get update && apt-get install -y nginx — execute commands during image build
  • COPY ./app /var/www/html/ — copy files from build context into image
  • ENV APP_ENV=production — set environment variable
  • EXPOSE 80 — document which port the container listens on (does not publish)
  • CMD ["nginx", "-g", "daemon off;"] — default command to run (can be overridden)
  • ENTRYPOINT ["/entrypoint.sh"] — executable that always runs (CMD becomes its arguments)
  • docker build -t myapp:1.0 . — build image from Dockerfile in current directory
Podman is daemonless — it does not require a background daemon process. This makes rootless containers possible (regular users can run containers without sudo). Docker requires the docker daemon running as root. The Linux+ exam specifically tests this architectural difference.
Ansible Automation

Inventory & Ad-Hoc Commands

  • Inventory file (/etc/ansible/hosts or custom file with -i): groups of hosts in INI or YAML format
  • ansible all -m ping — test connectivity to all hosts in inventory
  • ansible webservers -m shell -a "df -h" — run shell command on webservers group
  • ansible all -m copy -a "src=/etc/hosts dest=/tmp/hosts" — copy file to all hosts
  • ansible all -m service -a "name=nginx state=started" — ensure nginx is running
  • ansible all -b -m dnf -a "name=httpd state=present" — install httpd (-b = become, i.e., escalate privileges, typically via sudo)

Playbook Structure

  • Playbooks are YAML files defining plays (which hosts to target) and tasks (what to do)
  • Key modules: apt/dnf (package management), copy (copy files), template (Jinja2 templates), service (manage services), user (manage users), file (manage file permissions/ownership)
  • ansible-playbook site.yml — run a playbook
  • ansible-playbook site.yml --check — dry run (shows what would change without changing it)
  • ansible-playbook site.yml -v / -vvv — verbose output for debugging
  • ansible-playbook site.yml --limit webservers — run only against a specific group
Ansible is agentless — it uses SSH to connect to managed nodes. No agent software is needed on managed hosts. Only Python must be installed on the target. Ansible is idempotent by design: running a playbook multiple times produces the same result.
Troubleshooting Workflow

Systematic Troubleshooting Steps

  • Step 1 — Service status: systemctl status servicename — shows active/failed state, last 10 log lines, and exit code
  • Step 2 — Journal: journalctl -xe -u servicename — full journal with explanations; journalctl --boot -1 — previous boot logs
  • Step 3 — Kernel messages: dmesg | tail -50 — recent kernel messages; dmesg | grep -i error
  • Step 4 — Resources: check disk (df -h, du -sh /*), memory (free -h), CPU (top, vmstat 1 5), I/O (iostat -x 1 5)
  • Step 5 — OOM killer: dmesg | grep -i "out of memory" or grep -i oom /var/log/kern.log — identifies memory-killed processes

Common Failure Scenarios

  • Service fails to start: check systemctl status and journalctl -xe for the exact error; often a config file syntax error or missing dependency
  • Disk full: df -h to identify full filesystem; du -sh /var/log/* to find large log files; journalctl --vacuum-size=500M to trim journal
  • High CPU: top sorted by CPU (P key); ps aux --sort=-%cpu | head
  • High memory / swap usage: free -h; vmstat 1 to watch memory pressure; consider increasing swap or identifying memory leaks with ps aux --sort=-%mem
  • Cannot SSH to host: check firewall (iptables -L or firewall-cmd --list-all), sshd status, SELinux (getenforce), ss -tlnp | grep 22
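The resource checks in Step 4 are easy to script. A sketch that flags filesystems above a usage threshold by parsing portable df -P output (the 80% threshold and the canned demo data are made up):

```shell
# check_usage: read df -P style input and warn above a percentage threshold ($1)
check_usage() {
    awk -v t="$1" 'NR > 1 {                # skip the df header line
        use = $5; sub(/%/, "", use)        # strip "%" from the Use% column
        if (use + 0 > t) printf "WARNING: %s at %s%%\n", $6, use
    }'
}

# Real usage would be:  df -P | check_usage 80
# Demo on canned df -P style output:
printf '%s\n' \
  'Filesystem 1024-blocks Used Available Capacity Mounted' \
  '/dev/sda1 100 95 5 95% /var' \
  '/dev/sda2 100 20 80 20% /home' | check_usage 80
# prints: WARNING: /var at 95%
```

df -P (POSIX output) keeps each filesystem on one line, which makes the awk parsing reliable even for long device names.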
The Linux+ exam includes scenario-based troubleshooting questions. Always follow the systematic flow: status → logs → kernel → resources. The OOM (Out Of Memory) killer is a specific Linux kernel feature that kills processes to free memory under extreme memory pressure — look for it in dmesg or /var/log/kern.log, not just /var/log/messages.

Ready to test your Linux+ knowledge?

Challenge yourself with 60 CompTIA Linux+ practice questions — scenario-based, exam-realistic, and free.
