Claude API Foundations Quiz: Test Your Knowledge

You have covered a lot of ground in the first two modules of this series. You know what Anthropic is and why it was founded. You have compared Claude to ChatGPT and Gemini. You understand the model family, the Claude.ai interface, API setup, the Messages API structure, and how pricing works.
Before we move into prompt engineering — where you start shaping Claude's behaviour with precision — this is a good moment to pause and check how much of that foundation has actually stuck.
What This Quiz Covers
This Claude API foundations quiz tests 12 essential concepts: Anthropic's founding and mission, Constitutional AI, Claude model selection criteria, context window sizes, API statelessness, message role rules, system prompts, stop reasons, token pricing structure, prompt caching savings, the Batch API, and API key security. Each question includes a full explanation so you can review any gaps before moving forward.
The 12 questions span all topics from Modules 1 and 2. If you get something wrong, go back to the relevant post and re-read the section before continuing.
How to Use This Quiz
Work through each question and commit to your answer before reading the explanation. Do not scroll ahead. The value of a knowledge check comes from the mental effort of retrieval — forcing yourself to recall information, not just recognising a correct answer when you see it.
If you answer 10 or more questions correctly, you are ready to move to Module 3.
If you answer 8 or 9 correctly, quickly review the topics you missed, then continue.
If you answer fewer than 8 correctly, revisit the posts that cover the topics you missed before continuing.
Question 1: Anthropic's Founding
Why did the founders of Anthropic leave OpenAI to start a new company?
Question 2: Constitutional AI
What is Constitutional AI and what problem does it solve?
Question 3: Claude Model Selection
You are building a real-time customer support chatbot that will handle thousands of conversations per day. Users expect responses within 2 seconds. Which Claude model is the best starting point?
Question 4: Context Windows
What are the context windows of Claude Opus 4.6 and Claude Haiku 4.5, respectively?
Question 5: API Statelessness
The Claude Messages API is stateless. What does this mean for your application?
Question 6: Message Role Rules
You send a messages array to the Claude API that breaks the message role rules. What will happen?
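For concreteness, here is a hypothetical array of the kind this question has in mind. The specific violation shown (two consecutive user messages) is an illustrative assumption, not taken from the original quiz:

```python
# A hypothetical messages array containing two consecutive "user"
# entries.  How the API responds to this shape is exactly what the
# question asks you to predict.
messages = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "user", "content": "And what is its population?"},
]
```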
Question 7: System Prompts
Where should you put persistent behavioural instructions that should apply to every turn of a conversation?
Question 8: Stop Reasons
Your API response has stop_reason set to 'max_tokens'. What does this mean and what should you do?
Question 9: Token Pricing Structure
Why do output tokens cost more than input tokens in the Claude API?
Question 10: Prompt Caching Savings
You have a 5,000-token system prompt and make 50,000 API calls per month using Claude Sonnet 4.6. Approximately how much would prompt caching save you per month on the system prompt alone?
Question 11: Batch API
When is the Message Batches API the right choice for your workload?
Question 12: API Key Security
What is the most important rule for storing and using your Anthropic API key?
How Did You Do?
- 12/12 — Outstanding: You have a thorough understanding of the foundations. Proceed to Module 3 with confidence.
- 10–11 — Strong: You are well prepared. Quickly review any questions you missed and move on.
- 8–9 — Good: Solid understanding with a couple of gaps. Re-read the posts covering the topics you missed before continuing.
- Below 8 — Review Needed: Go back through the Module 1 and Module 2 posts before proceeding. The concepts in Modules 3 and 4 build directly on these foundations.
Summary
This knowledge check covered the core concepts from the first two modules: Anthropic's mission and Constitutional AI, the Claude model family, Claude.ai, API setup and security, the Messages API structure, and pricing. These are not just theoretical facts — they are the working knowledge that underpins every decision you make when building with Claude.
Now it is time to move from setup and configuration to the craft of working with Claude effectively. Module 3 starts with one of the most valuable skills in AI development: prompt engineering.
In our next post, we start from zero: Prompt Engineering for Claude: The Complete Beginner's Guide.
If you need to revisit any of the underlying concepts, these posts have the detail: Claude model family guide for model selection, Claude API pricing and tokens explained for cost structure, and Claude Messages API explained for the messages array and role rules.
The official Claude models page is the authoritative source for current context window sizes and pricing, which can change as Anthropic releases new model versions.
This post is part of the Anthropic AI Tutorial Series. Don't forget to check out our previous post: Claude API Pricing Explained: Tokens, Cost Tiers, and Batch Savings.
Frequently Asked Questions
Q: What are the key concepts tested in a Claude API foundations quiz?
A foundations quiz typically covers: the Messages API request structure (model, messages array, system prompt, max_tokens), response object fields (content blocks, stop reason, usage), token counting and cost estimation, the difference between user and assistant roles, how to pass multi-turn conversation history, and basic tool use setup. Understanding these concepts is essential before building production applications.
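The request fields listed above can be sketched as a plain payload dict, so the structure is visible without making a network call. The model id, token limit, and prompt text here are placeholder assumptions, not recommendations:

```python
# A minimal Messages API request body.  The model name and max_tokens
# value are placeholder assumptions; check the official models page
# for current ids.
payload = {
    "model": "claude-sonnet-4-5",              # placeholder model id
    "max_tokens": 1024,                        # hard cap on output tokens
    "system": "You are a concise assistant.",  # persistent instructions
    "messages": [
        {"role": "user", "content": "Summarise prompt caching in one sentence."},
    ],
}

# With the official Python SDK this dict maps onto the create() call:
#   client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
#   response = client.messages.create(**payload)
```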
Q: What is a "stop reason" in the Claude API response and what values can it have?
The stop_reason field in the API response explains why Claude stopped generating. Common values: end_turn — Claude naturally finished its response; max_tokens — the max_tokens limit was reached (response may be incomplete); tool_use — Claude is requesting a tool call and you should execute it; stop_sequence — a custom stop sequence you defined was encountered. Always check stop_reason in agentic loops to handle tool use correctly.
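The dispatch described above can be sketched as a small handler. A simulated response dict stands in for a real API response object, and the return strings are illustrative labels:

```python
# Sketch of the stop_reason checks described above, using a simulated
# response dict in place of a real API response object.
def handle_stop_reason(response: dict) -> str:
    reason = response["stop_reason"]
    if reason == "end_turn":
        return "complete"             # Claude finished naturally
    if reason == "max_tokens":
        return "truncated"            # raise max_tokens or continue the turn
    if reason == "tool_use":
        return "run_tool"             # execute the requested tool, then reply
    if reason == "stop_sequence":
        return "stopped_on_sequence"  # a custom stop sequence was hit
    return "unknown"

print(handle_stop_reason({"stop_reason": "max_tokens"}))  # truncated
```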
Q: How does the Claude API handle multi-turn conversations?
You maintain conversation history yourself — the API is stateless. Pass the full messages array with alternating user and assistant roles on every request. Each assistant response you receive should be appended to the array as an assistant message before the next user turn. For long conversations, implement a summarisation or truncation strategy to stay within the context window limit.
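A minimal sketch of this client-side history pattern, with a stubbed `send()` standing in for the real API call (in practice it would be `client.messages.create(...)`):

```python
# Client-side history management for a stateless API.  send() is a
# stand-in that simply echoes; it does not contact any real service.
def send(messages):
    return "(assistant reply to: " + messages[-1]["content"] + ")"

def chat(user_text, history):
    history.append({"role": "user", "content": user_text})
    reply = send(history)
    # Append the reply before the next user turn, as described above,
    # so the full alternating history goes out with every request.
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
chat("Hello", history)
chat("Tell me more", history)
# history now alternates user/assistant across four entries
```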
Part of the Claude AI Masterclass.
