HashiCorp · devops

Terraform Associate (003)

Learn infrastructure as code with Terraform. HCL syntax, state management, modules, providers, and cloud provisioning workflows.

Modules: 9
Duration: 25 hours
Level: Intermediate

Course Modules

01
Infrastructure as Code Concepts
2 lessons
What is Infrastructure as Code

Key Concepts

  • Declarative vs Imperative: Declarative IaC (like Terraform) describes the desired end state and the tool figures out how to achieve it. Imperative IaC (like scripting with AWS CLI or Bash) specifies the exact step-by-step commands to execute. Terraform’s declarative model is simpler to maintain because you describe “what” not “how”
  • Idempotency: Running the same Terraform configuration multiple times produces the same result. If the infrastructure already matches the desired state, Terraform makes no changes. This prevents configuration drift and makes re-runs safe
  • Version Control: IaC files are stored in Git repositories, enabling pull request reviews, change history, rollback capabilities, and collaborative workflows. Every infrastructure change is tracked, auditable, and reproducible
  • Key IaC Benefits: Eliminates manual, error-prone provisioning. Enables consistent environments across dev, staging, and production. Supports rapid disaster recovery by re-provisioning from code. Reduces time-to-deploy from hours to minutes
  • IaC Tools Landscape: Terraform (multi-cloud, declarative, HCL), AWS CloudFormation (AWS-only, JSON/YAML), Azure Resource Manager (Azure-only), Pulumi (multi-cloud, general-purpose languages), Ansible (configuration management, procedural). Terraform stands out for its cloud-agnostic provider model
The exam expects you to clearly explain why IaC matters and how Terraform fits into the IaC landscape. Understand that Terraform is declarative and cloud-agnostic — it can manage AWS, Azure, GCP, and hundreds of other providers with a single workflow. Be ready to contrast declarative vs imperative approaches and explain why idempotency is critical for reliable infrastructure automation.
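The declarative model can be shown with a minimal sketch (the bucket name is hypothetical):

```hcl
# Declarative: describe the desired end state; Terraform computes the steps.
resource "aws_s3_bucket" "logs" {
  bucket = "example-log-bucket" # hypothetical name
}
```

Re-applying this configuration is idempotent: if the bucket already exists and matches, terraform apply reports no changes.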
Benefits of IaC & Terraform’s Role

Key Concepts

  • Consistency & Reproducibility: Terraform ensures every environment is provisioned identically from the same configuration files. No more “works on my machine” problems — dev, staging, and production are created from the same code
  • Collaboration & Review: Infrastructure changes go through the same code review process as application code. Teams use pull requests to propose changes, run terraform plan in CI pipelines, and approve before applying
  • Terraform’s Provider Ecosystem: Terraform uses providers as plugins to interact with cloud platforms, SaaS tools, and other APIs. The Terraform Registry hosts thousands of providers and modules, making it the most extensible IaC tool available
  • State as Source of Truth: Terraform maintains a state file that maps your configuration to real-world resources. This allows Terraform to detect drift, plan incremental changes, and destroy resources cleanly when they are removed from code
  • Execution Plans: Before making any change, terraform plan shows exactly what will be created, modified, or destroyed. This safety net prevents surprises and gives teams confidence before applying changes to production
For the exam, emphasize that Terraform’s key differentiators are its execution plan (preview before apply), resource graph (parallel resource creation based on dependency analysis), and provider model (cloud-agnostic). Understand that Terraform is not a configuration management tool like Ansible — Terraform provisions infrastructure, while Ansible configures software on that infrastructure. They are complementary, not competing.
02
Terraform Basics
3 lessons
Install & Configure Terraform

Key Concepts

  • Installation: Terraform is distributed as a single binary with no dependencies. Download it from releases.hashicorp.com or install via package managers (apt, yum, brew, choco). Verify with terraform version. Multiple versions can be managed with tools like tfenv
  • Provider Authentication: Terraform authenticates to cloud providers using credentials configured outside of Terraform code. For AWS, use environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY), shared credentials file (~/.aws/credentials), or IAM instance profiles. Never hardcode credentials in .tf files
  • Terraform Settings Block: The terraform {} block configures Terraform itself: required_version constrains the CLI version, required_providers declares providers and their version constraints, and backend configures where state is stored
  • File Structure: Terraform loads all .tf files in the working directory. Common convention: main.tf (resources), variables.tf (input variables), outputs.tf (output values), providers.tf (provider config), terraform.tfvars (variable values)
  • .terraform Directory: Created by terraform init, this hidden directory stores downloaded provider plugins and module source code. It should be added to .gitignore as it contains binaries and can be regenerated at any time
The exam tests your understanding of how Terraform is installed and configured, not the specific installation steps for each OS. Know that terraform init must be run before any other command to download providers and initialize the backend. Understand the purpose of the terraform {} settings block and why credential management should use environment variables or provider-specific mechanisms rather than hardcoded values.
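A minimal terraform {} settings block covering all three concerns might look like this (the bucket name and version numbers are illustrative):

```hcl
terraform {
  required_version = ">= 1.5.0" # constrains the Terraform CLI itself

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # provider version constraint
    }
  }

  backend "s3" {
    bucket = "my-terraform-state" # hypothetical bucket
    key    = "prod/terraform.tfstate"
    region = "us-east-1"
  }
}
```

Credentials are deliberately absent: supply them via environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) or an IAM instance profile, never in .tf files.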
HCL Syntax: Blocks, Arguments & Expressions

Key Concepts

  • Block Structure: HCL (HashiCorp Configuration Language) uses blocks as the fundamental unit: block_type "label1" "label2" { ... }. Resource blocks have two labels (type and name): resource "aws_instance" "web" { ... }. Arguments inside blocks assign values: ami = "ami-0c55b159cbfafe1f0"
  • Data Types: Primitive types include string, number, and bool. Complex types include list(type) (ordered collection), map(type) (key-value pairs), set(type) (unordered unique values), object({...}) (named attributes), and tuple([...]) (ordered mixed types)
  • Expressions: References use dot notation: aws_instance.web.id accesses an attribute. String interpolation: "Hello, ${var.name}". Conditional: condition ? true_val : false_val. For expressions: [for s in var.list : upper(s)]
  • Comments: Single-line comments use # or //. Multi-line comments use /* ... */. Comments are essential for documenting complex configurations and explaining non-obvious design decisions
  • Heredoc Syntax: Multi-line strings use <<EOF ... EOF or indented heredocs <<-EOF ... EOF (which strips leading whitespace). Useful for inline policies, user data scripts, and multi-line configuration values
HCL syntax is fundamental to every exam question that includes code snippets. Practice reading HCL blocks and identifying the block type, labels, and arguments. Know the difference between = (argument assignment) and {} (nested block). Remember that Terraform processes all .tf files in a directory as a single configuration — the order of blocks across files does not matter because Terraform builds a dependency graph.
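A sketch pulling these syntax elements together in one illustrative resource block:

```hcl
resource "aws_instance" "web" { # block type + two labels: type, name
  ami           = "ami-0c55b159cbfafe1f0"                     # argument assignment with =
  instance_type = var.env == "prod" ? "m5.large" : "t3.micro" # conditional expression

  tags = {
    Name = "web-${var.env}" # string interpolation
  }

  # Indented heredoc: <<- strips leading whitespace
  user_data = <<-EOT
    #!/bin/bash
    echo "hello" > /tmp/greeting
  EOT

  root_block_device { # nested block uses { }, not =
    volume_size = 20
  }
}
```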
Terraform Workflow: init, plan, apply, destroy

Key Concepts

  • terraform init: Initializes the working directory by downloading provider plugins, installing modules, and configuring the backend. Must be re-run when providers, modules, or backend configuration change. Use -upgrade to update provider versions within constraints
  • terraform plan: Creates an execution plan showing what Terraform will do without making any changes. Output uses + for create, - for destroy, ~ for update in-place, and -/+ for destroy and recreate. Save plans with -out=plan.tfplan for safe apply
  • terraform apply: Executes the planned changes to reach the desired state. Without a saved plan, it runs an implicit plan and prompts for confirmation. Use -auto-approve to skip confirmation in CI/CD pipelines (with caution). After apply, the state file is updated
  • terraform destroy: Removes all resources managed by the configuration. Equivalent to removing all resources from code and running apply. Prompts for confirmation. Use -target to destroy specific resources, though this is discouraged for routine use
  • terraform validate & fmt: validate checks configuration syntax and internal consistency without accessing providers. fmt rewrites files to the canonical HCL format. Both are commonly run in CI pipelines as pre-merge checks
The core workflow (init, plan, apply) is tested heavily on the exam. Remember that terraform plan is read-only and safe to run at any time. Always review the plan output before applying. In production workflows, save the plan to a file and pass it to apply to ensure exactly what was reviewed gets executed. Understand that terraform destroy is a convenience command — the same result can be achieved by removing resources from the configuration and running apply.
03
Providers
2 lessons
Provider Configuration & Versioning

Key Concepts

  • What Are Providers: Providers are plugins that let Terraform interact with APIs of cloud platforms (AWS, Azure, GCP), SaaS services (GitHub, Datadog), and other tools (Kubernetes, Helm). Each provider adds resource types and data sources specific to that service
  • Provider Block: Configured with provider "aws" { region = "us-east-1" }. The provider block sets connection parameters like region, credentials, and endpoint URLs. Provider configuration is separate from resource definitions
  • Version Constraints: Declared in the required_providers block using operators: = (exact), >= (minimum), ~> (pessimistic, allows only rightmost version increment). Example: version = "~> 5.0" allows 5.x but not 6.0. The .terraform.lock.hcl file pins exact versions after init
  • Dependency Lock File: .terraform.lock.hcl records the exact provider versions and hashes selected during terraform init. This file should be committed to version control to ensure all team members and CI pipelines use identical provider versions
  • Provider Registry: The Terraform Registry (registry.terraform.io) is the default source for providers. Provider addresses follow the format namespace/type (e.g., hashicorp/aws). Custom or private registries can be configured for internal providers
The exam frequently tests provider versioning. Know the difference between >=, ~>, and = constraints. The pessimistic constraint operator ~> is the most commonly recommended: ~> 5.0 allows 5.0 through 5.x, while ~> 5.1.0 allows 5.1.0 through 5.1.x. Always commit the .terraform.lock.hcl file but never commit the .terraform directory. Understand that terraform init -upgrade updates providers within the allowed version constraints.
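The constraint operators in context (version numbers are illustrative):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # allows 5.0 through 5.x, blocks 6.0
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
```

After terraform init, the exact version selected is recorded in .terraform.lock.hcl — commit that file so everyone resolves the same provider build.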
Multiple Provider Instances & Aliases

Key Concepts

  • Provider Aliases: When you need multiple configurations of the same provider (e.g., deploying to two AWS regions), define additional instances with the alias argument: provider "aws" { alias = "west" region = "us-west-2" }. Resources reference aliases with provider = aws.west
  • Default vs Aliased Providers: A provider block without an alias is the default for that provider type. Resources use the default provider unless they explicitly reference an alias. Only one default provider per type is allowed
  • Multi-Region Deployments: Common pattern: define a default provider for the primary region and aliased providers for secondary regions. Resources like DR replicas or CloudFront origins reference the aliased provider for the target region
  • Passing Providers to Modules: When a module needs a non-default provider, pass it via the providers meta-argument in the module block: module "west_vpc" { providers = { aws = aws.west } }. This keeps modules flexible and reusable across regions
  • Multi-Cloud Configurations: A single Terraform configuration can use multiple providers (e.g., AWS and Cloudflare) simultaneously. Each provider manages its own set of resources independently, and Terraform handles the dependency graph across all providers
Provider aliases appear frequently on the exam in multi-region scenarios. Remember the syntax: define the alias in the provider block, then reference it in resources with the provider meta-argument. A common exam question pattern gives you two provider blocks for different regions and asks which resources will be created where. The key rule: resources without an explicit provider argument use the default (non-aliased) provider.
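A sketch of the default-vs-alias pattern (bucket names and the module path are hypothetical):

```hcl
provider "aws" {
  region = "us-east-1" # default provider (no alias)
}

provider "aws" {
  alias  = "west"
  region = "us-west-2" # aliased instance
}

resource "aws_s3_bucket" "primary" {
  bucket = "example-primary" # no provider argument -> default (us-east-1)
}

resource "aws_s3_bucket" "replica" {
  provider = aws.west # explicit alias -> us-west-2
  bucket   = "example-replica"
}

module "west_vpc" {
  source    = "./modules/vpc" # hypothetical local module
  providers = { aws = aws.west }
}
```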
04
Resources & Data Sources
3 lessons
Resource Blocks & Lifecycle

Key Concepts

  • Resource Block Syntax: resource "provider_type" "local_name" { ... }. The type determines the infrastructure object (e.g., aws_instance, azurerm_resource_group). The local name is an identifier used to reference the resource within the configuration
  • Resource Behavior: When you add a resource block and apply, Terraform creates the real infrastructure. When you change arguments, Terraform updates in-place if possible or destroys and recreates if the change is destructive (forces replacement). When you remove a block, Terraform destroys the resource
  • Implicit Dependencies: Terraform automatically detects dependencies when one resource references another’s attributes. For example, subnet_id = aws_subnet.main.id creates an implicit dependency ensuring the subnet is created before the instance
  • Resource Addressing: Every resource has a unique address in the format provider_type.local_name (e.g., aws_instance.web). With count, it becomes aws_instance.web[0]. With for_each, it becomes aws_instance.web["key"]. These addresses are used in state commands and -target flags
  • Timeouts: Some resources support custom timeout blocks for create, update, and delete operations. Example: timeouts { create = "60m" } gives a long-running resource like an RDS instance more time to provision before Terraform considers it failed
Resource blocks are the most fundamental building block of Terraform. For the exam, understand the difference between in-place updates and destroy/recreate (force replacement). Some attribute changes, like renaming an S3 bucket or changing an EC2 AMI, require destruction and recreation. Terraform shows this in the plan output with -/+. Know that resource addresses uniquely identify each resource in state and are essential for targeted operations.
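A sketch showing implicit dependencies and a custom timeout (values are illustrative):

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "main" {
  vpc_id     = aws_vpc.main.id # implicit dependency: the VPC is created first
  cidr_block = "10.0.1.0/24"
}

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0" # changing the AMI forces replacement (-/+ in plan)
  instance_type = "t3.micro"              # changing this updates in-place (~ in plan)
  subnet_id     = aws_subnet.main.id

  timeouts {
    create = "10m" # extend the provisioning deadline
  }
}
```

The instance's address in state is aws_instance.web, usable with -target and the state commands.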
Meta-Arguments: count, for_each, depends_on, lifecycle

Key Concepts

  • count: Creates multiple instances of a resource using an integer. Access individual instances with [index]. Example: count = 3 creates three identical resources. Use count.index inside the block to differentiate them (e.g., naming). Drawback: removing an item from the middle reindexes all subsequent resources
  • for_each: Creates instances from a map or set of strings. Each instance is identified by its key, so adding or removing items does not affect others. Preferred over count for non-identical resources. Access values with each.key and each.value
  • depends_on: Explicitly declares a dependency when Terraform cannot detect it automatically (e.g., when a dependency is through a side effect like an IAM policy that must exist before an EC2 instance can use it). Takes a list of resource addresses: depends_on = [aws_iam_role_policy.example]
  • lifecycle Block: Customizes resource behavior. create_before_destroy = true creates the replacement before destroying the original (avoids downtime). prevent_destroy = true blocks any plan that would destroy the resource. ignore_changes tells Terraform to ignore external modifications to specified attributes
  • count vs for_each: Use count when resources are nearly identical and differ only by index. Use for_each when each instance has a distinct identity (e.g., a map of subnet CIDRs by AZ). for_each is safer because it uses map keys as identifiers rather than sequential indices
Meta-arguments are heavily tested on the exam. The most common question pattern asks you to choose between count and for_each. Rule of thumb: if you have a list of identical things, count works; if each resource has a unique identity, use for_each. Understand that lifecycle { prevent_destroy = true } does not prevent terraform destroy from working on the whole configuration — it only prevents individual resource destruction during plan/apply. depends_on should be a last resort; prefer implicit dependencies through attribute references.
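A sketch of the four meta-arguments together; it assumes aws_vpc.main and aws_internet_gateway.gw are defined elsewhere in the configuration:

```hcl
variable "subnets" {
  type = map(string)
  default = {
    a = "10.0.1.0/24"
    b = "10.0.2.0/24"
  }
}

# for_each: keyed by "a"/"b" — removing one key leaves the others untouched
resource "aws_subnet" "this" {
  for_each   = var.subnets
  vpc_id     = aws_vpc.main.id # assumed defined elsewhere
  cidr_block = each.value
  tags       = { Name = "subnet-${each.key}" }
}

# count: near-identical resources, addressed as aws_eip.nat[0] and [1]
resource "aws_eip" "nat" {
  count      = 2
  depends_on = [aws_internet_gateway.gw] # explicit dependency (assumed defined)

  lifecycle {
    create_before_destroy = true # replacement comes up before teardown
  }
}
```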
Data Sources for Reading External Info

Key Concepts

  • Data Source Purpose: Data sources let Terraform read information from existing infrastructure that is not managed by the current configuration. They are read-only — they query the provider API but never create, update, or delete anything
  • Data Source Syntax: data "aws_ami" "latest" { most_recent = true filter { ... } }. Referenced as data.aws_ami.latest.id. Data sources use the data block type instead of resource and typically include filter arguments to narrow results
  • Common Use Cases: Look up the latest AMI ID by filters, retrieve the current AWS account ID or region, query existing VPCs or subnets by tags, read an IAM policy document, or fetch outputs from another Terraform state via terraform_remote_state
  • terraform_remote_state: A special data source that reads output values from another Terraform configuration’s state file. Enables cross-project references: data "terraform_remote_state" "network" { backend = "s3" config = { ... } }. Only exposes output values, not internal resource attributes
  • Data Source Dependencies: Data sources participate in Terraform’s dependency graph. If a data source references a resource attribute, Terraform reads the data source only after the resource is created. This ensures data sources return current information
Data sources are a common exam topic. Remember that data sources are read-only and prefixed with data. in expressions. A frequent exam question involves terraform_remote_state for sharing outputs between configurations — know that it only exposes values declared as outputs in the source configuration. Understand that data sources are refreshed on every plan/apply, making them suitable for dynamic values like the latest AMI ID or current availability zones.
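A sketch combining both data-source patterns; the bucket name is hypothetical, and it assumes the source configuration declares a subnet_id output:

```hcl
# Read-only lookup: latest Amazon Linux 2 AMI
data "aws_ami" "latest" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Read another configuration's outputs from its remote state
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state" # hypothetical
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.latest.id # note the data. prefix
  instance_type = "t3.micro"
  subnet_id     = data.terraform_remote_state.network.outputs.subnet_id
}
```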
05
Variables & Outputs
3 lessons
Input Variables: Types, Defaults, Validation & Sensitive

Key Concepts

  • Variable Declaration: Defined with variable "name" { type = string default = "value" description = "..." }. Referenced as var.name in expressions. Variables without defaults are required and must be provided at runtime
  • Variable Precedence (lowest to highest): Default value in the variable block, environment variables (TF_VAR_name), terraform.tfvars or terraform.tfvars.json (auto-loaded), *.auto.tfvars files (auto-loaded in alphabetical order), then -var and -var-file flags on the command line (evaluated in the order given, later entries winning). Higher precedence overrides lower
  • Type Constraints: Enforce expected types: string, number, bool, list(string), map(number), object({ name = string, age = number }). Terraform validates types at plan time and returns clear error messages for mismatches
  • Validation Rules: Custom validation with validation { condition = length(var.name) > 0 error_message = "Name cannot be empty." }. Multiple validation blocks are allowed. Conditions must return true for the variable to be accepted
  • Sensitive Variables: Mark with sensitive = true to prevent the value from appearing in plan output or CLI logs. The value is still stored in state — protect your state file. Sensitivity propagates to any output that references a sensitive variable
Variable precedence is a high-priority exam topic. Memorize the order: defaults are lowest, command-line -var flags are highest. A common trap: terraform.tfvars is auto-loaded, but a file named custom.tfvars is not — it must be passed with -var-file="custom.tfvars". Files matching *.auto.tfvars are automatically loaded. For sensitive variables, understand that sensitive = true only redacts CLI output; the state file still contains the plaintext value.
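A sketch of a validated variable and a sensitive one:

```hcl
variable "env" {
  type        = string
  description = "Deployment environment"
  default     = "dev" # lowest precedence; -var="env=prod" on the CLI wins

  validation {
    condition     = contains(["dev", "staging", "prod"], var.env)
    error_message = "env must be dev, staging, or prod."
  }
}

variable "db_password" {
  type      = string
  sensitive = true # redacted in plan/apply output; still plaintext in state
}
```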
Output Values & Using Them Across Modules

Key Concepts

  • Output Declaration: Defined with output "name" { value = aws_instance.web.public_ip description = "..." }. Outputs are displayed after terraform apply and queryable with terraform output or terraform output -json
  • Cross-Module References: Child module outputs are accessed as module.module_name.output_name in the parent configuration. This is the primary mechanism for passing data between modules — a module’s internal resources are not directly accessible from outside
  • Sensitive Outputs: Mark with sensitive = true to suppress the value in CLI output. Required when the output references a sensitive variable or contains secrets like passwords or API keys. The value is still stored in state
  • Output Dependencies: If an output references a resource, Terraform considers that resource a dependency of the output. Outputs with depends_on can declare explicit dependencies when the relationship is not visible through attribute references
  • terraform output Command: Retrieves output values from state without running apply. terraform output vpc_id returns a single value. terraform output -json returns all outputs in JSON format, useful for scripting and integration with other tools
Outputs are the primary way to expose information from a Terraform configuration or module. For the exam, know that root module outputs are displayed after apply and stored in state, while child module outputs are used for cross-module data flow. A frequently tested pattern: one configuration exports a VPC ID via output, and another configuration reads it using terraform_remote_state. Understand that outputs are the only way to access a module’s data from the parent.
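The cross-module flow in a sketch (the module path is hypothetical):

```hcl
# Inside the child module (./modules/vpc/outputs.tf):
output "vpc_id" {
  description = "ID of the created VPC"
  value       = aws_vpc.main.id
}

# In the root module:
module "vpc" {
  source = "./modules/vpc" # hypothetical path
}

output "vpc_id" {
  value = module.vpc.vpc_id # re-export the child module's output
}
```

After apply, terraform output vpc_id retrieves the value from state without re-running the plan.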
Local Values & Expressions

Key Concepts

  • Local Values: Defined with locals { common_tags = { Environment = var.env, Project = var.project } }. Referenced as local.common_tags. Locals act as named expressions that simplify repeated or complex logic, reducing duplication across the configuration
  • Built-in Functions: Terraform provides a rich library of functions: join(",", var.list), lookup(var.map, "key", "default"), length(var.list), merge(map1, map2), file("path"), templatefile("tmpl", vars), cidrsubnet(...). Test functions interactively with terraform console
  • Conditional Expressions: condition ? true_val : false_val. Common patterns: count = var.create_resource ? 1 : 0 to conditionally create a resource, or instance_type = var.env == "prod" ? "m5.large" : "t3.micro" to vary configuration by environment
  • For Expressions: Transform collections: [for s in var.list : upper(s)] (list), {for k, v in var.map : k => upper(v)} (map). Add filtering with if: [for s in var.list : s if s != ""]. Powerful for deriving complex data structures from simpler inputs
  • Splat Expressions: Shorthand for accessing attributes across a list of resources: aws_instance.web[*].id returns a list of all instance IDs. Equivalent to [for i in aws_instance.web : i.id]. Works only with list-indexed resources (count), not for_each
Locals, functions, and expressions are tested through code-reading questions. You will not need to memorize every function, but know the most common ones: join, split, lookup, merge, length, file, templatefile, cidrsubnet. Understand that locals are evaluated lazily and can reference variables, resources, and other locals. The terraform console command is invaluable for testing expressions interactively before putting them in configuration files.
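A sketch combining locals, a conditional count, merge, and a splat; it assumes variables env, project, names, create_web, and ami_id are declared elsewhere:

```hcl
locals {
  common_tags = {
    Environment = var.env
    Project     = var.project
  }

  # for expression with an if filter
  clean_names = [for n in var.names : upper(n) if n != ""]
}

resource "aws_instance" "web" {
  count         = var.create_web ? 2 : 0 # conditional-creation pattern
  ami           = var.ami_id
  instance_type = var.env == "prod" ? "m5.large" : "t3.micro"
  tags          = merge(local.common_tags, { Name = "web-${count.index}" })
}

output "web_ids" {
  value = aws_instance.web[*].id # splat over count-indexed instances
}
```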
06
State Management
3 lessons
Understanding Terraform State

Key Concepts

  • Purpose of State: Terraform state (terraform.tfstate) is a JSON file that maps configuration resources to real-world infrastructure objects. It tracks resource IDs, attributes, and metadata so Terraform knows what exists, what needs updating, and what to destroy
  • State as Source of Truth: Terraform compares the desired state (your .tf files) against the current state (terraform.tfstate) and the real infrastructure (API calls) to compute the execution plan. Without state, Terraform would have no way to know which resources it manages
  • Sensitive Data in State: State files contain sensitive information in plaintext, including database passwords, API keys, and any attribute values from your resources. State files must be encrypted at rest, access-controlled, and never committed to version control
  • State Locking: When using remote backends, Terraform acquires a lock before any write operation (apply, destroy, import) to prevent concurrent modifications. DynamoDB is used for locking with the S3 backend. If a lock is stuck, use terraform force-unlock [LOCK_ID]
  • terraform refresh: Updates the state file to match the real infrastructure without modifying any resources. As of Terraform 0.15+, terraform plan and terraform apply include a refresh step by default, making standalone terraform refresh rarely necessary
State is one of the most important exam topics. Understand that state is not optional — it is required for Terraform to function. The exam tests whether you know that state files contain sensitive data (always protect them), that state locking prevents concurrent corruption, and that the default local state file (terraform.tfstate) is unsuitable for team use because it cannot be shared or locked. This is why remote backends exist.
Remote State Backends

Key Concepts

  • Why Remote Backends: Local state files cannot be shared across a team, do not support locking, and are easily lost. Remote backends store state centrally, enable collaboration, provide locking, and can encrypt state at rest
  • S3 Backend (AWS): The most popular remote backend. Stores state in an S3 bucket with optional server-side encryption (AES-256 or KMS). Pair with a DynamoDB table for state locking and consistency checking. Configure with backend "s3" { bucket = "..." key = "..." region = "..." dynamodb_table = "..." }
  • Azure Blob Storage Backend: Stores state in an Azure Storage Account blob container. Supports locking via Azure Blob leases. Configure with backend "azurerm" { resource_group_name = "..." storage_account_name = "..." container_name = "..." key = "..." }
  • Terraform Cloud Backend: HashiCorp’s managed backend offering. Provides state storage, locking, encryption, versioning, access control, and a full history of state changes. Configure with cloud { organization = "..." workspaces { name = "..." } }
  • Backend Migration: Changing backends requires terraform init -migrate-state, which copies existing state to the new backend. Terraform prompts for confirmation before migrating. Always verify the state after migration with terraform plan to ensure no unexpected changes
Remote backends are heavily tested. Know that the S3 + DynamoDB combination provides both remote storage and locking for AWS environments. The exam may ask what happens when you change the backend configuration — you must run terraform init again, and Terraform offers to migrate the existing state. The cloud block (for Terraform Cloud) is the successor to the remote backend and is the recommended approach for teams using HashiCorp’s platform.
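The S3 + DynamoDB combination in a sketch (bucket and table names are hypothetical):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state" # hypothetical bucket
    key            = "prod/network/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true              # server-side encryption at rest
    dynamodb_table = "terraform-locks" # state locking + consistency checks
  }
}
```

Changing this block later requires terraform init -migrate-state, followed by terraform plan to confirm the migrated state produces no unexpected changes.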
State Commands: list, show, mv, rm, import

Key Concepts

  • terraform state list: Lists all resources tracked in the state file. Useful for inventory and verifying that expected resources exist. Supports filtering by address prefix: terraform state list module.vpc
  • terraform state show: Displays the detailed attributes of a single resource in state. Example: terraform state show aws_instance.web shows the instance ID, AMI, public IP, tags, and all other tracked attributes. Essential for debugging
  • terraform state mv: Moves a resource in state without destroying and recreating it. Used for renaming resources (terraform state mv aws_instance.old aws_instance.new) or moving resources into or out of modules. The actual infrastructure is not affected
  • terraform state rm: Removes a resource from state without destroying the real infrastructure. The resource still exists in the cloud but Terraform no longer manages it. Used when transferring ownership to another configuration or manual management
  • terraform import: Brings existing infrastructure under Terraform management by writing a resource entry into state. Requires a corresponding resource block in configuration. Syntax: terraform import aws_instance.web i-1234567890abcdef0. Does not generate configuration automatically (use import blocks in Terraform 1.5+ for a code-generation workflow)
State manipulation commands are tested frequently. The key distinction: state mv renames or reorganizes resources within state without affecting infrastructure, state rm detaches a resource from Terraform management without destroying it, and import adopts existing infrastructure into Terraform. Remember that terraform import only updates state — you must still write the resource block manually. In Terraform 1.5+, the import block in configuration can generate the corresponding resource code with terraform plan -generate-config-out=generated.tf.
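The Terraform 1.5+ import block, sketched with the instance ID from the example above (resource values are illustrative):

```hcl
# Declarative import: adopt the existing instance into state on the next apply
import {
  to = aws_instance.web
  id = "i-1234567890abcdef0"
}

# The resource block must still exist — write it by hand, or have Terraform
# draft it with: terraform plan -generate-config-out=generated.tf
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0" # illustrative values
  instance_type = "t3.micro"
}
```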
07
Modules
3 lessons
Module Structure & Sources

Key Concepts

  • What Are Modules: Modules are reusable, self-contained packages of Terraform configuration. Every Terraform configuration is technically a module — the working directory is the “root module.” Child modules are called from the root using module blocks
  • Module Structure: A minimal module contains main.tf, variables.tf, and outputs.tf. Modules should encapsulate related resources (e.g., a VPC module that creates the VPC, subnets, route tables, and NAT gateways together)
  • Local Modules: Referenced by a relative file path: module "vpc" { source = "./modules/vpc" }. Changes to local modules take effect immediately on the next terraform init or plan. Best for organization-specific modules within the same repository
  • Registry Modules: Published on registry.terraform.io and referenced by a short address: module "vpc" { source = "terraform-aws-modules/vpc/aws" version = "5.0.0" }. The Terraform Registry provides documentation, usage examples, and input/output descriptions
  • Git & Other Sources: Modules can be sourced from Git repositories (source = "git::https://github.com/org/repo.git?ref=v1.0"), S3 buckets, GCS buckets, or HTTP URLs. Git sources support branch, tag, and commit references for precise version control
The exam tests your understanding of module sources and when to use each type. Know that terraform init downloads remote modules into the .terraform/modules directory. Local modules are not copied — they are read directly from the file path. Registry modules must include a version constraint; Git modules use ?ref= for pinning. The standard module structure (main.tf, variables.tf, outputs.tf, README.md) is a best practice, not a requirement.
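The three source types side by side (the local path is hypothetical; the registry and Git addresses follow the formats above):

```hcl
module "local_vpc" {
  source = "./modules/vpc" # local path: read in place, never copied
}

module "registry_vpc" {
  source  = "terraform-aws-modules/vpc/aws" # Registry address: namespace/name/provider
  version = "~> 5.0"                        # version argument: registry modules only
}

module "git_vpc" {
  source = "git::https://github.com/org/repo.git?ref=v1.0" # pinned to a tag
}
```

Note the asymmetry: registry modules pin with version, while Git sources pin with ?ref= in the URL.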
Input & Output Variables in Modules

Key Concepts

  • Module Inputs: Variables declared inside a module act as its input interface. The calling (parent) module passes values as arguments: module "vpc" { source = "./modules/vpc" cidr_block = "10.0.0.0/16" }. Required variables without defaults must always be provided
  • Module Outputs: Outputs declared inside a module expose data to the caller. The parent accesses them as module.vpc.vpc_id. Only explicitly declared outputs are visible — internal resource attributes are encapsulated within the module
  • Module Encapsulation: A module’s internal resources, variables, and locals are not directly accessible from outside. This enforces clean interfaces: inputs go in via variables, outputs come out via output blocks. Encapsulation enables safe reuse without understanding implementation details
  • Passing Outputs Between Modules: Common pattern: Module A outputs a VPC ID, and Module B takes it as an input: module "web" { vpc_id = module.vpc.vpc_id }. Terraform automatically handles the dependency ordering between modules
  • Variable Validation in Modules: Module authors should include validation blocks in variables to enforce constraints (e.g., CIDR format, string length, allowed values). This provides clear error messages to module consumers and prevents misconfiguration
Module inputs and outputs are a core exam topic. Think of modules like functions in programming: inputs are parameters, outputs are return values, and internal implementation is private. The exam may show a module block and ask what value a specific argument maps to or how to reference an output. Remember that module authors control what is exposed through outputs — not everything inside the module is available to the caller.
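The function analogy above can be made concrete. A hedged sketch — the module layout and names (cidr_block, vpc_id, ./modules/vpc) are illustrative:

```hcl
# modules/vpc/variables.tf — the module's input interface
variable "cidr_block" {
  type        = string
  description = "CIDR range for the VPC"

  validation {
    condition     = can(cidrnetmask(var.cidr_block))
    error_message = "cidr_block must be a valid IPv4 CIDR, e.g. 10.0.0.0/16."
  }
}

# modules/vpc/main.tf — internal implementation, hidden from the caller
resource "aws_vpc" "this" {
  cidr_block = var.cidr_block
}

# modules/vpc/outputs.tf — the only values the caller can see
output "vpc_id" {
  value = aws_vpc.this.id
}

# Root module: pass inputs as arguments, consume outputs by reference
module "vpc" {
  source     = "./modules/vpc"
  cidr_block = "10.0.0.0/16"
}

module "web" {
  source = "./modules/web"
  vpc_id = module.vpc.vpc_id # implicit dependency: vpc is created before web
}
```

Note that the module.vpc.vpc_id reference is what gives Terraform the dependency ordering between the two modules; no explicit depends_on is needed.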
Module Versioning & Best Practices

Key Concepts

  • Module Versioning: Registry modules support the version argument with the same constraint syntax as providers: version = "~> 5.0". Always pin module versions to prevent unexpected breaking changes. Local modules are versioned through your repository’s version control
  • Semantic Versioning: Module versions follow semver (MAJOR.MINOR.PATCH). A MAJOR bump may include breaking changes, MINOR adds features backward-compatibly, PATCH fixes bugs. The ~> constraint is ideal for allowing safe updates: ~> 5.1 allows any 5.x release at or above 5.1 (>= 5.1.0, < 6.0.0) but not 6.0
  • Module Composition: Build complex infrastructure by composing smaller modules. A root module might call a VPC module, a security group module, and an EC2 module, wiring their outputs and inputs together. Favor shallow module nesting — deeply nested modules are hard to debug
  • DRY Principle: Don't Repeat Yourself. If you copy-paste the same resource blocks across multiple configurations, extract them into a module. Modules reduce duplication, enforce standards, and make updates easier (change the module once, update everywhere)
  • Module Documentation: Every module should include a README with usage examples, a description of each input variable (including type, default, required), and a list of outputs. The Terraform Registry auto-generates documentation from variable and output descriptions
For the exam, understand that module versioning is critical for production stability. Never use unversioned registry modules in production — always pin with a version constraint. Know the benefits of modules: reusability (use the same VPC module for dev and prod), standardization (enforce naming conventions and tagging), and maintainability (update in one place). The exam may ask why you would use a module versus inline resources — the answer centers on reuse and encapsulation.
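The common constraint forms can be compared in one place. A brief sketch using the real terraform-aws-modules/vpc/aws registry module as the example:

```hcl
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  # Pick exactly one constraint style:
  version = "5.1.2"    # exact pin: maximally reproducible, no automatic updates
  # version = "~> 5.1" # pessimistic: allows 5.1.x and later 5.x, never 6.0
  # version = ">= 5.1, < 6.0" # explicit range, equivalent to ~> 5.1
}
```

For production, ~> on a MINOR version is the usual middle ground: it picks up bug fixes and backward-compatible features while blocking breaking MAJOR releases.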
08
Terraform Cloud & Workspaces
2 lessons
Workspaces: CLI vs Terraform Cloud

Key Concepts

  • CLI Workspaces: Terraform CLI supports multiple workspaces (terraform workspace new dev, terraform workspace select prod) that maintain separate state files within the same configuration. The current workspace is available via terraform.workspace for conditional logic
  • CLI Workspace State: Each workspace stores its state in a separate file under terraform.tfstate.d/[workspace_name]/. The default workspace uses terraform.tfstate in the root. CLI workspaces are lightweight but lack access control and audit logging
  • Terraform Cloud Workspaces: Fundamentally different from CLI workspaces. Each Terraform Cloud workspace is a full environment with its own state, variables, credentials, run history, and access controls. They are the primary organizational unit in Terraform Cloud
  • Key Differences: CLI workspaces share the same configuration and variables — only state differs. Terraform Cloud workspaces can have different variable values, VCS connections, provider credentials, and team access policies. They are designed for real multi-environment workflows
  • When to Use Which: CLI workspaces are fine for local testing with minor environment differences. Terraform Cloud workspaces are essential for production team workflows where you need access control, run approval, state versioning, and integration with VCS
The exam specifically tests the difference between CLI workspaces and Terraform Cloud workspaces — they are not the same concept. CLI workspaces only separate state; Terraform Cloud workspaces separate state, variables, permissions, and more. A common exam trap: CLI workspaces are not recommended for managing different environments (dev/staging/prod) in production because they share the same variable values and backend credentials. Use separate directories or Terraform Cloud workspaces for true environment isolation.
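Since CLI workspaces share one configuration, environment differences must be expressed with terraform.workspace. A minimal sketch — the instance types, ami_id variable, and workspace names are illustrative:

```hcl
# Conditional sizing based on the currently selected CLI workspace
locals {
  instance_type = terraform.workspace == "prod" ? "m5.large" : "t3.micro"
}

resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = local.instance_type

  tags = {
    Environment = terraform.workspace # "default", "dev", "prod", ...
  }
}
```

This pattern is exactly why CLI workspaces are discouraged for real environment isolation: every difference between dev and prod must be encoded as conditionals in shared code, with shared credentials and a shared backend.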
Terraform Cloud Features

Key Concepts

  • Remote Runs: Terraform Cloud executes terraform plan and apply in a managed environment, ensuring consistent execution regardless of who triggers the run. Plans can be reviewed and approved before apply, adding a human gate for production changes
  • VCS Integration: Connect a workspace to a GitHub, GitLab, or Bitbucket repository. Terraform Cloud automatically triggers a plan on pull requests and applies changes when merged to the default branch. This creates a full GitOps workflow for infrastructure
  • Sentinel Policy as Code: Sentinel is HashiCorp’s policy-as-code framework that enforces governance rules. Policies run between plan and apply, blocking non-compliant changes. Examples: require all S3 buckets have encryption enabled, restrict instance types to approved list, mandate tagging on all resources
  • Private Module Registry: Organizations can publish internal modules to Terraform Cloud’s private registry, enabling standardized, versioned module sharing across teams. Modules are published from VCS repositories and support the same versioning as the public registry
  • State Management in TFC: Terraform Cloud stores state securely with encryption at rest, versioning (every state change is saved), and access control (only authorized users/teams can read or modify state). State rollback is possible by restoring a previous version
Terraform Cloud features are tested at an awareness level. Know that Sentinel policies run between plan and apply and can enforce mandatory, soft-mandatory, or advisory rules. Understand the VCS-driven workflow: commit triggers plan, merge triggers apply. The exam may ask about the execution modes: remote (plan and apply run in TFC), local (plan and apply run on your machine, state in TFC), and agent (plan and apply run on a self-hosted agent for private network access). The free tier of Terraform Cloud supports up to 500 managed resources.
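Connecting a configuration to Terraform Cloud is done with the cloud block inside the terraform block (Terraform 1.1+). A sketch with placeholder organization and workspace names:

```hcl
terraform {
  cloud {
    organization = "example-org" # placeholder organization name

    workspaces {
      name = "networking-prod" # placeholder workspace name
    }
  }
}
```

After adding this block, terraform init connects the working directory to that workspace; state storage and (in remote execution mode) plan and apply runs then happen in Terraform Cloud.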
09
Advanced Features
2 lessons
Provisioners & When to Use Them

Key Concepts

  • What Are Provisioners: Provisioners execute scripts or commands on a local or remote machine as part of resource creation or destruction. They are a bridge between infrastructure provisioning and configuration management, but they break Terraform’s declarative model
  • local-exec: Runs a command on the machine where Terraform is executing. Example: provisioner "local-exec" { command = "echo ${self.private_ip} >> hosts.txt" }. Useful for triggering external scripts, notifying APIs, or generating local files after resource creation
  • remote-exec: Connects to the remote resource (via SSH or WinRM) and runs commands directly on it. Requires a connection block with host, user, and authentication details. Used for bootstrapping software or running initial configuration scripts
  • Why They Are a Last Resort: Provisioners are not tracked in state, not idempotent by default, and create tight coupling between Terraform and runtime configuration. If a provisioner fails, the resource is marked as tainted. HashiCorp recommends using cloud-init, user data, Packer images, or configuration management tools (Ansible, Chef) instead
  • Creation-Time vs Destroy-Time: Provisioners run at creation by default. Add when = destroy to run during resource destruction (e.g., deregistering from a load balancer). Destroy-time provisioners must be self-contained — they cannot reference other resources that may already be destroyed
The exam asks specifically about when provisioners are appropriate. The official answer: provisioners are a last resort. Prefer cloud-native mechanisms like user data scripts (AWS), custom images built with Packer, or configuration management tools triggered separately. If you must use a provisioner, understand that failure marks the resource as tainted, meaning it will be destroyed and recreated on the next apply. The on_failure argument can be set to continue to ignore provisioner failures.
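The provisioner types above fit together in a single resource. A hedged, illustrative example — the AMI variable, key names, and commands are placeholders, and in practice user data or a Packer image would usually replace the remote-exec step:

```hcl
resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = "t3.micro"
  key_name      = var.key_name

  # local-exec: runs on the machine running Terraform
  provisioner "local-exec" {
    command = "echo ${self.private_ip} >> hosts.txt"
  }

  # remote-exec: runs on the new instance over SSH; requires a connection block
  provisioner "remote-exec" {
    inline = ["sudo yum install -y nginx"]

    connection {
      type        = "ssh"
      host        = self.public_ip
      user        = "ec2-user"
      private_key = file(var.private_key_path)
    }
  }

  # Destroy-time provisioner: must be self-contained (only self is referenceable)
  provisioner "local-exec" {
    when       = destroy
    command    = "echo 'instance destroyed' >> audit.log"
    on_failure = continue # don't block destroy if the logging step fails
  }
}
```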
Dynamic Blocks, Type Constraints & Utility Commands

Key Concepts

  • Dynamic Blocks: Generate repeated nested blocks from a collection. Syntax: dynamic "ingress" { for_each = var.rules content { from_port = ingress.value.from ... } }. Useful for security group rules, tags, or any block that needs to be repeated a variable number of times. Avoid overuse — they reduce readability
  • Type Constraints & Structural Types: object({ name = string, port = number }) defines a structured type with named attributes. tuple([string, number]) defines an ordered mixed-type list. The any keyword allows Terraform to infer the type. optional() marks object attributes as optional with default values in Terraform 1.3+
  • terraform fmt: Automatically formats .tf files to the canonical style (consistent indentation and aligned equals signs; it does not reorder arguments). Run terraform fmt -check in CI to enforce formatting without modifying files. Non-zero exit code means files need formatting
  • terraform validate: Checks configuration for syntax errors, type mismatches, and missing required arguments without accessing any remote state or provider APIs. Faster than plan because it works entirely offline. Use it as a pre-commit hook or CI gate
  • terraform taint & replace: terraform taint (deprecated) marks a resource for recreation on the next apply. Replaced by terraform apply -replace="aws_instance.web" in Terraform 0.15.2+. Use when a resource is in a bad state and needs to be rebuilt from scratch without changing configuration
Dynamic blocks, fmt, and validate are common exam topics. Know that dynamic blocks generate nested blocks (not top-level resource blocks) and are the only way to make the number of nested blocks variable. For terraform fmt, remember it modifies files in-place by default; use -check for CI validation. terraform validate runs after init (it needs provider schemas) but does not access remote APIs. The -replace flag has superseded taint — know both but understand that -replace is the current recommended approach.
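The dynamic block and structural type concepts combine naturally in the security group example. A sketch — the variable shape and rule values are illustrative:

```hcl
# Structural type: each rule is an object with named, typed attributes
variable "rules" {
  type = list(object({
    from = number
    to   = number
    cidr = string
  }))
  default = [
    { from = 80, to = 80, cidr = "0.0.0.0/0" },
    { from = 443, to = 443, cidr = "0.0.0.0/0" },
  ]
}

resource "aws_security_group" "web" {
  name = "web-sg"

  # One ingress block is generated per element of var.rules;
  # the iterator defaults to the block label ("ingress")
  dynamic "ingress" {
    for_each = var.rules
    content {
      from_port   = ingress.value.from
      to_port     = ingress.value.to
      protocol    = "tcp"
      cidr_blocks = [ingress.value.cidr]
    }
  }
}
```

Adding or removing a rule is then a data change in var.rules rather than an edit to the resource body, which is exactly what dynamic blocks are for.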
Start practicing →