Chapter 2: The Raw Request
In Chapter 1, we built the body. Now we need to wake up the brain.
Most tutorials will tell you to `pip install anthropic`. We are not going to do that.
Why? Because SDKs hide the truth. They add layers of abstraction that make “Hello World” easy but make debugging “Error 400” a nightmare. When you use an SDK, you are learning a library. When you use raw HTTP, you are learning the protocol.
In this chapter, we will send a message to Claude using nothing but the requests library. Claude is Anthropic’s flagship LLM—one of the most capable models for coding tasks.
Step 1: Get an API Key
To talk to Claude, you need an API Key. This is a long string of characters that acts as both your password and your credit card: it identifies you, and every request is billed to it.
- Go to the Anthropic Console.1
- Sign up and add a payment method (minimum $5 credit).
- Create a new API Key and name it `nanocode`.
- Copy the key (it starts with `sk-ant-...`).
> Warning: Treat this key like a password. Anyone with it can spend your money.
Step 2: The Vault (.env)
We need a safe place to store this key. We never put keys directly in code.
Create a file named .env in your project root:
```bash
touch .env
```
Open it and paste your key:
```
ANTHROPIC_API_KEY=sk-ant-api03-...
```
We installed python-dotenv in Chapter 1 for exactly this purpose—it reads .env and loads the values into os.environ.
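There is no magic here. What `load_dotenv()` does can be imitated by hand in a few lines. A minimal sketch (the function name `load_env_file` is mine, and it assumes a simple `KEY=VALUE` file with no quoting or interpolation):

```python
import os

def load_env_file(path=".env"):
    """Minimal imitation of python-dotenv: copy KEY=VALUE lines into os.environ."""
    try:
        with open(path) as f:
            for raw in f:
                line = raw.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue  # skip blank lines, comments, and malformed lines
                key, _, value = line.partition("=")
                # setdefault: a variable already set in the shell wins over .env
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # no .env file is fine; the shell environment may already have the key

load_env_file()
print("Key present:", "ANTHROPIC_API_KEY" in os.environ)
```

The real library handles quoting, multiline values, and variable expansion, but the core idea is exactly this: a text file parsed into `os.environ`.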
Step 3: The Anatomy of a Request
To talk to an LLM, we send an HTTP POST request to:
https://api.anthropic.com/v1/messages
This request needs three things:
- Authentication (Headers): “Here is my ID card.”
- Configuration (Body): “I want to talk to model X with max Y tokens.”
- Message (Body): “Hello!”
The Headers
Anthropic requires three headers:
| Header | Value | Purpose |
|---|---|---|
| `x-api-key` | Your secret key | Authentication |
| `anthropic-version` | `2023-06-01` | API version |
| `content-type` | `application/json` | Format |
The Payload
The “Messages API” expects a list of message dictionaries:
```json
"messages": [
  {"role": "user", "content": "Hello, world!"}
]
```
Each message has a role (either "user" or "assistant") and content (the text).
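Later chapters will build this list up turn by turn. For example, here is a short helper (the function is mine, not part of the API) that appends turns while keeping the strict user/assistant alternation this book will rely on:

```python
def append_message(messages, role, content):
    """Append a chat turn, enforcing alternating user/assistant roles."""
    if role not in ("user", "assistant"):
        raise ValueError(f"unknown role: {role!r}")
    if messages and messages[-1]["role"] == role:
        raise ValueError(f"two {role!r} messages in a row")
    messages.append({"role": role, "content": content})
    return messages

history = []
append_message(history, "user", "Hello, world!")
append_message(history, "assistant", "Hi! How can I help?")
append_message(history, "user", "Write a haiku.")
```

The resulting `history` list is exactly what goes into the `"messages"` field of the payload.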
Step 4: The Code
Create a file called test_api.py. This is a “smoke test” to prove our connection works. We will delete it later.
The Context: We are writing linear, procedural code. No functions, no classes. We want to see the bare metal.
The Code:
```
 1 import os
 2 import requests
 3 import json
 4 from dotenv import load_dotenv
 5
 6 # 1. Load the vault
 7 load_dotenv()
 8 api_key = os.getenv("ANTHROPIC_API_KEY")
 9
10 # Basic check so we don't crash with a confusing "NoneType" error later
11 if not api_key:
12     print("Error: ANTHROPIC_API_KEY not found in .env")
13     exit(1)
14
15 # 2. Define the target
16 url = "https://api.anthropic.com/v1/messages"
17
18 # 3. Authenticate
19 headers = {
20     "x-api-key": api_key,
21     "anthropic-version": "2023-06-01",
22     "content-type": "application/json"
23 }
24
25 # 4. Construct the payload
26 payload = {
27     "model": "claude-sonnet-4-5-20250929",
28     "max_tokens": 4096,
29     "messages": [
30         {"role": "user", "content": "Hello, are you ready to code?"}
31     ]
32 }
33
34 # 5. Fire! (No safety net)
35 print("📡 Sending request to Claude...")
36 response = requests.post(url, headers=headers, json=payload)
37
38 # 6. Inspect the raw result
39 print(f"Status: {response.status_code}")
40
41 if response.status_code == 200:
42     # Success: Print the beautiful JSON
43     print("Response:")
44     print(json.dumps(response.json(), indent=2))
45 else:
46     # Failure: Print the ugly raw text so we can debug
47     print("Error:", response.text)
```
The Walkthrough:
- Line 7: `load_dotenv()` finds the `.env` file and loads its variables into `os.environ`.
- Line 8: We grab the API key. Never hardcode this.
- Lines 11-13: Basic sanity check. Without this, a missing key causes a confusing `NoneType` error in the headers dict.
- Line 21: The `anthropic-version` header is mandatory. Omit it, and the API rejects you.
- Line 27: `claude-sonnet-4-5-20250929` specifies which model we want.
- Line 28: `max_tokens` is required. It caps the response length and prevents runaway costs.
- Line 36: We fire the request. No try/except: if the network is down, let Python crash. You need to see where it fails.
- Lines 41-47: Check the status code. 200 means success (pretty-print the JSON). Anything else means we print the raw error text for debugging.
Step 5: Run It
```bash
python test_api.py
```
If everything works, you should see:
```
Status: 200
Response:
{
  "id": "msg_01...",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Hello! Yes, I'm ready to code..."
    }
  ],
  "stop_reason": "end_turn",
  "usage": {
    "input_tokens": 15,
    "output_tokens": 81
  }
}
```
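Notice that the assistant's words live inside the `content` list, not at the top level. A quick sketch of extracting just the text, using a sample response shaped like the one above:

```python
import json

# Sample response body, shaped like the smoke test's output
sample = """{
  "id": "msg_01...",
  "type": "message",
  "role": "assistant",
  "content": [{"type": "text", "text": "Hello! Yes, I'm ready to code..."}],
  "stop_reason": "end_turn",
  "usage": {"input_tokens": 15, "output_tokens": 81}
}"""

data = json.loads(sample)
# content is a list of blocks; join the text of every text-type block
reply = "".join(block["text"] for block in data["content"] if block["type"] == "text")
print(reply)
```

In the live script, `data` would simply be `response.json()`. We will use this extraction when we integrate the call into `nanocode.py`.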
Troubleshooting
| Error | Cause | Fix |
|---|---|---|
| 401 Unauthorized | Bad API key | Check .env is loading. Print os.environ.get("ANTHROPIC_API_KEY") to verify. |
| 400 Bad Request | Malformed JSON | Did you forget max_tokens? Is messages a list? |
| 429 Rate Limit | Too many requests or no credits | Wait, or add credits to your account. |
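For 429s specifically, the standard remedy is exponential backoff: wait, then retry with a doubling delay. A minimal sketch (the wrapper `with_backoff` is mine, not part of `requests`), written around any callable that returns a response-like object:

```python
import time

def with_backoff(send, max_attempts=4, base_delay=1.0):
    """Call send() until it returns a non-429 response, doubling the delay each retry."""
    response = send()
    for attempt in range(1, max_attempts):
        if response.status_code != 429:
            break
        time.sleep(base_delay * (2 ** (attempt - 1)))  # wait 1s, 2s, 4s, ...
        response = send()
    return response

# Usage with the smoke test's url/headers/payload:
# response = with_backoff(lambda: requests.post(url, headers=headers, json=payload))
```

We deliberately leave this out of `test_api.py`: for a smoke test, seeing the raw 429 is more instructive than hiding it.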
Cleaning Up
We proved we can talk to the brain. Delete test_api.py—we will integrate this logic into nanocode.py properly in the next chapter.
> Aside: To monitor your spending, check the Usage tab in the Anthropic Console. A typical coding session with 20-30 exchanges costs $0.10-$0.50 with Claude Sonnet.
Wrapping Up
In this chapter, you made your first direct connection to an LLM—no SDK, no magic, just a raw HTTP POST request. You learned the anatomy of an API call: headers for authentication, a payload with your message, and JSON parsing to extract the response.
You also learned why we avoid SDKs: they hide the truth. When you work with raw HTTP, you understand the protocol. When something breaks, you know exactly where to look.
In the next chapter, we’ll give Claude memory by implementing the context loop—the technique that makes LLMs appear to remember previous messages.
1. https://console.anthropic.com/settings/keys

