Why AWS created an AI-specific certification
When AWS released the AI Practitioner in late 2024, it quietly added a new rung to the cloud credential ladder. The Cloud Practitioner proves you can reason about infrastructure: availability zones, cost models, the shared responsibility model. The AI Practitioner proves something different — that you understand the layer sitting above the infrastructure. Foundation models. Generative AI pipelines. The responsible deployment of systems that produce non-deterministic outputs. In a hiring market where “AI experience” appears on virtually every job description but rarely comes with a definition, this cert converts vague familiarity into a verifiable credential.
The exam runs 90 minutes, costs $100 USD, and contains 65 scored questions. AWS recommends six months of cloud experience and some prior exposure to AI and ML concepts. The passing score is 700 out of 1,000 — lower than most AWS associate exams, reflecting its foundational positioning. Candidates already holding Cloud Practitioner typically need four to six weeks of light study to be ready.
The AIF-C01 is not a developer exam. You will not write code or deploy models under exam conditions. Questions are scenario-based: given a business requirement or an architectural constraint, which AWS service, technique, or design principle applies? Think of it as the Cloud Practitioner model applied to the AI layer — breadth over depth, judgment over implementation detail.
The five exam domains
Domain 1 — Fundamentals of AI and ML (~20%)
The conceptual foundation. You need a clear mental model of supervised, unsupervised, and reinforcement learning; how training and validation work; and common failure modes like overfitting, underfitting, and data leakage. No mathematics is required.
- Supervised learning: Labeled data, prediction tasks — classification (spam detection) and regression (price prediction).
- Unsupervised learning: Unlabeled data, pattern discovery — clustering (customer segmentation) and dimensionality reduction.
- Reinforcement learning: An agent learns from reward signals. Used in robotics, game-playing, and recommendation tuning.
- Model evaluation: Know accuracy, precision, recall, F1, and AUC-ROC conceptually. Know what a training/validation loss curve looks like when a model is overfitting.
- Generative AI vs discriminative models: Discriminative models predict labels. Generative models produce new content — text, images, audio — that resembles the training distribution.
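The evaluation metrics in Domain 1 all reduce to ratios over the confusion matrix, which is worth internalising before exam day. A minimal sketch in plain Python (no ML library assumed; the `evaluate` helper is ours, invented for illustration):

```python
# Confusion-matrix metrics from true vs. predicted binary labels.
def evaluate(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

    accuracy  = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of items flagged positive, how many were right
    recall    = tp / (tp + fn) if tp + fn else 0.0  # of actual positives, how many were caught
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Spam-detection example: 1 = spam, 0 = not spam
m = evaluate(y_true=[1, 1, 0, 0, 1, 0], y_pred=[1, 0, 0, 1, 1, 0])
```

The exam tests the intuition, not the arithmetic: precision matters when false positives are costly (flagging legitimate email), recall when false negatives are costly (missing fraud).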
Domain 2 — Fundamentals of Generative AI (~24%)
The largest domain by weight. Covers what a foundation model (FM) is, how large language models generate text token by token, and the practical adaptation techniques available without full retraining:
- Prompt engineering: Zero-shot (no examples in the prompt), few-shot (a handful of labeled examples), and chain-of-thought (instructing the model to reason step by step before answering). The exam presents a scenario and asks which prompting approach best fits the task.
- Fine-tuning: Updates the base FM’s weights using domain-specific labeled data. Produces a model that consistently follows a vocabulary or style. Requires training data and compute; not easily reversible.
- Retrieval-Augmented Generation (RAG): Leaves the base model unchanged. At inference time, relevant documents are fetched from an external knowledge base and injected into the prompt context. The model synthesises a grounded answer. Preferred when your knowledge base changes frequently, because no retraining is required.
- Hallucination: LLMs generate plausible-sounding but factually incorrect statements. RAG mitigates this by anchoring the model’s response in verified retrieved sources. The exam will ask which architecture best addresses hallucination risk in a given scenario.
- Inference parameters: Temperature controls output randomness — lower values produce more deterministic responses. Top-P and top-K limit the token sampling space. Know what adjusting these does for creative tasks vs factual Q&A.
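Temperature's effect is easiest to see in the softmax that turns model logits into sampling probabilities: dividing the logits by a temperature below 1 sharpens the distribution toward the top token, while a value above 1 flattens it. A sketch with made-up logits for three candidate next tokens:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits into sampling probabilities at a given temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens
logits = [2.0, 1.0, 0.5]

cold = softmax_with_temperature(logits, 0.2)  # near-deterministic: top token dominates
hot  = softmax_with_temperature(logits, 2.0)  # flatter: better for creative tasks
```

Low temperature suits factual Q&A, where you want the most probable answer every time; high temperature suits brainstorming and creative writing, where diversity is the point. Top-K and top-P then trim which tokens are eligible for sampling at all.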
Domain 3 — AWS AI/ML Services (~28%)
The most AWS-specific domain. The anchor service is Amazon Bedrock — a managed API giving you access to foundation models from Anthropic (Claude), Meta (Llama), Mistral, Cohere, and Amazon’s own Titan family, with no infrastructure to manage. Bedrock also offers Agents (multi-step orchestration with tool use), Knowledge Bases (managed RAG with vector store integration), Guardrails (content filtering and PII redaction), and Model Evaluation.
- Amazon SageMaker: The full ML lifecycle platform — data labeling, training, hyperparameter tuning, model registry, deployment, and monitoring. Choose SageMaker when the scenario requires training a proprietary model on your own dataset.
- Amazon Q: An enterprise AI assistant pre-integrated with common business tools (Jira, Confluence, S3, ServiceNow). Lets employees query internal knowledge bases in natural language without prompt engineering.
- Purpose-built AI APIs: Rekognition (image and video analysis — object detection, facial comparison, content moderation), Transcribe (speech-to-text with speaker diarisation), Comprehend (NLP — sentiment analysis, entity extraction, key phrases), Textract (structured data extraction from documents and forms), Kendra (intelligent enterprise search), Translate, Polly (text-to-speech), Lex (conversational chatbot).
- Service selection logic: Bedrock = use an existing FM via API. SageMaker = train your own model. Rekognition = images or video. Comprehend = text analysis. Textract = extract fields from invoices, forms, or scanned documents. This decision logic covers the majority of Domain 3 scenario questions.
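That selection logic is mechanical enough to drill as a lookup table. The sketch below is an exam-prep mnemonic only, not an AWS API; `pick_service` and its trigger keywords are invented for this example:

```python
# Hypothetical mnemonic: map a workload description to the service
# the Domain 3 decision logic points at. Keyword list is illustrative.
SERVICE_RULES = [
    ("train your own model",          "SageMaker"),
    ("images or video",               "Rekognition"),
    ("extract fields from documents", "Textract"),
    ("sentiment",                     "Comprehend"),
    ("existing foundation model",     "Bedrock"),
]

def pick_service(requirement: str) -> str:
    req = requirement.lower()
    for keyword, service in SERVICE_RULES:
        if keyword in req:
            return service
    return "Bedrock"  # default: managed FM access covers most generative scenarios
```

In the real exam the cue words are buried in a business scenario, but the mapping itself rarely gets more complicated than this.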
Domain 4 — Guidelines for Responsible AI (~14%)
Covers the governance side of AI: fairness, explainability, accountability, and the AWS Responsible AI framework. Expect scenario questions asking you to identify which service, configuration, or design practice addresses a specific governance requirement.
- Bias: Can enter at data collection (historical bias), labeling (annotator bias), or model training (representation bias). Amazon SageMaker Clarify detects feature importance and bias metrics before and after training.
- Explainability: The ability to articulate why a model made a specific prediction. SageMaker Clarify surfaces SHAP-based feature attribution at the instance level.
- Human-in-the-loop: For high-stakes decisions (medical, financial, legal), a human reviews borderline model outputs before action is taken. Amazon Augmented AI (A2I) provides the workflow infrastructure.
- Model cards: Documentation recording intended use, evaluation results, limitations, and ethical considerations. Increasingly required for regulated industries and enterprise AI procurement.
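One concrete bias metric of the kind SageMaker Clarify reports is demographic parity difference: the gap in positive-prediction rates between two groups, where 0.0 means parity. A hand-rolled sketch (the function name is ours, not Clarify's API):

```python
def demographic_parity_difference(predictions, groups, group_a="A", group_b="B"):
    """Difference in positive-outcome rates between two groups.
    0.0 means both groups receive positive predictions at the same rate."""
    def positive_rate(group):
        members = [p for p, g in zip(predictions, groups) if g == group]
        return sum(members) / len(members)
    return positive_rate(group_a) - positive_rate(group_b)

# Loan decisions: 1 = approved, 0 = denied, split by a protected attribute
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)  # 0.75 - 0.25 = 0.5
```

A gap this large would flag the model for review; Clarify computes this family of metrics both pre-training (on labels) and post-training (on predictions).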
Domain 5 — Security, Compliance, and Governance for AI (~14%)
Standard AWS security controls applied to AI workloads. The shared responsibility model still governs: AWS secures the underlying infrastructure; you secure your data, your IAM policies, and your model configuration.
- IAM for AI services: Bedrock, SageMaker, and Comprehend all use IAM roles. Least-privilege applies — a Lambda calling Bedrock should hold only bedrock:InvokeModel for the specific model ARN it needs, not a wildcard.
- Data encryption: Training data in S3 should be encrypted at rest (SSE-S3 or SSE-KMS). SageMaker training jobs and Bedrock Knowledge Bases support KMS encryption. All AWS AI APIs use TLS in transit.
- VPC endpoints: Bedrock and SageMaker support VPC interface endpoints (PrivateLink), keeping model inference traffic off the public internet — required for compliance frameworks that prohibit data leaving a private network boundary.
- Bedrock Guardrails: Managed content filters that block harmful outputs (hate speech, violence, PII leakage) before they reach the user. Configurable per use case. The exam may ask which Bedrock feature prevents a customer-facing chatbot from leaking PII extracted from retrieved documents.
- Audit and monitoring: AWS CloudTrail logs every Bedrock and SageMaker API call. SageMaker Model Monitor detects data drift and quality degradation in deployed endpoints over time.
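A least-privilege policy for the Lambda-calls-Bedrock pattern described above might look like the following sketch; the region and model ID are placeholders chosen for illustration, and you would substitute the ARN of the model your function actually invokes:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "InvokeOneModelOnly",
      "Effect": "Allow",
      "Action": "bedrock:InvokeModel",
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"
    }
  ]
}
```

Note the scoping: a single action on a single model ARN, rather than `bedrock:*` on `*`. Exam options offering wildcard actions or resources are almost always the wrong answer.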
If a scenario question stumps you, fall back on AWS Well-Architected principles — particularly the Security pillar — and ask which option minimises risk while granting the narrowest appropriate access. That heuristic resolves more AIF-C01 questions than you might expect.
How AIF-C01 fits into your cert roadmap
AWS positions the AI Practitioner as a peer to the Cloud Practitioner, not a prerequisite for associate exams. In practice, its domains overlap significantly with content now appearing in updated associate-level question pools:
- AWS Solutions Architect Associate (SAA-C03): Now includes scenario questions on Bedrock architectures, SageMaker endpoint placement in VPCs, and cost optimisation for ML workloads (SageMaker Savings Plans, Spot training jobs).
- AWS Developer Associate (DVA-C02): Covers Bedrock API integration patterns, streaming inference responses, and handling model invocation errors from Lambda.
- AWS Machine Learning Specialty (MLS-C01): The natural next step for candidates who want depth on model training, feature engineering, and MLOps. AIF-C01 provides an efficient on-ramp to the Specialty vocabulary.
For candidates whose primary goal is a Solutions Architect or DevOps Engineer certification, sitting AIF-C01 first adds roughly two to three weeks to preparation time but significantly reduces the surprise factor when AI-related questions appear in the associate exam — a worthwhile trade.
Generative AI has become a live exam topic across the AWS certification portfolio. The AIF-C01 gives you the vocabulary and service knowledge to handle those questions confidently — and, more importantly, it signals to employers that you can operate in an AI-augmented cloud environment. With Amazon Bedrock adoption accelerating across enterprise AWS accounts, knowing how to scope IAM policies for FM invocation, configure Guardrails, and choose between RAG and fine-tuning for a specific use case is no longer specialist knowledge. It is the new baseline for cloud professionals. Source: AWS Machine Learning Blog.
Ready to test your AWS knowledge? Practice with free scenario-based questions covering Bedrock, IAM, SageMaker, and cloud architecture — timed, randomised, and no signup required.
Start AWS Practice Questions →