Interactive Reference · Claude API
// Prompting Patterns

Three techniques that actually matter

ReAct, Function Calling, and Context Engineering — explained with live demos powered by the Anthropic API. Run real examples in the browser.

ReAct Prompting

ReAct = Reasoning + Acting. Instead of asking a model to answer immediately, you instruct it to alternate between thinking out loud and taking actions — then observe results before continuing.

This externalises the reasoning chain. Errors become visible. Each step is checkable. Hallucinations drop because the model is forced to justify every move.

Multi-step reasoning · Tool use · Debugging
Thought (reason) → Action (do something) → Observation (see result) → Thought (reason again) → Final Answer (conclude)
Prompt
You are a reasoning agent. For every question, follow this loop:

Thought: [what you need to figure out next]
Action: [one of: Search[query], Calculate[expr], Lookup[term]]
Observation: [result of that action]

Repeat the Thought/Action/Observation loop until you can answer. Then write:

Final Answer: [your conclusion]

Never skip steps. Never guess. Always show your work.
ReAct Loop — Live API Call
Key insight: The model doesn't need real tools for ReAct to be useful. Simply structuring output as Thought/Action/Observation forces deliberate step-by-step reasoning — even when "actions" are just internal calculations.
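
A minimal sketch of this loop with the Anthropic Python SDK. The model name is a placeholder, the action space is narrowed to Calculate only, and the regex parsing plus eval-based calculator are illustrative choices, not production code.

Python · ReAct Loop (sketch)
import re
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # placeholder; substitute a current model name

SYSTEM = (
    "You are a reasoning agent. For every question, follow this loop:\n"
    "Thought: [what you need to figure out next]\n"
    "Action: Calculate[expr]\n"
    "Observation: [result of that action]\n"
    "Repeat until you can answer, then write: Final Answer: [your conclusion]\n"
    "Never skip steps. Never guess. Always show your work."
)

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        response = client.messages.create(
            model=MODEL,
            max_tokens=512,
            system=SYSTEM,
            # Stop before the model invents its own Observation line.
            stop_sequences=["Observation:"],
            messages=[{"role": "user", "content": transcript}],
        )
        step = response.content[0].text
        transcript += step
        if "Final Answer:" in step:
            return step.split("Final Answer:")[-1].strip()
        match = re.search(r"Action:\s*Calculate\[(.+?)\]", step)
        if match:
            # We execute the action, not the model. eval() is demo-only; never
            # evaluate untrusted expressions like this in real code.
            observation = eval(match.group(1), {"__builtins__": {}})
            transcript += f"\nObservation: {observation}\n"
    return transcript  # ran out of steps; return the raw trace for debugging

print(react("A laptop costs 1200 and is discounted 15%. What is the sale price?"))

The stop_sequences entry is what keeps the model honest: it has to pause and wait for a real observation instead of inventing one.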

Function Calling

You give the model a typed interface to the real world. Instead of returning free-text, the model returns a structured JSON payload that your code executes. The model doesn't run the function — you do. It just knows how to ask for it correctly.

This is the architecture behind every serious AI integration: job trackers, search tools, data lookups, form submissions.

Structured output · API integration · Tool use

① Define the schema

You describe available functions in JSON: name, description, parameters, types, required fields. The model reads this like documentation.

② Model decides when to call

You don't tell it when. Given the right context, it will return a structured call object instead of prose — because it knows a tool is available.

③ You execute & return result

Your code runs the function, gets real data, feeds the result back. The model then generates a natural language response informed by live data.

The model has no hands

This is the critical mental model: the AI only ever produces text. "Function calling" means it produces specially-formatted text that your code intercepts and acts on. You are the executor.

JSON · Tool Schema
{ "name": "get_weather", "description": "Get current weather for a city", "input_schema": { "type": "object", "properties": { "city": { "type": "string", "description": "City name" }, "units": { "type": "string", "enum": ["celsius", "fahrenheit"] } }, "required": ["city"] } }
Function Calling Pipeline — Live Simulation
Key insight: Function calling is how you ground AI in reality. The model provides intent and structure; your code provides execution and real data. Always validate the model's arguments before executing.
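
Continuing the sketch above for step ③: check stop_reason, validate the model's arguments, run the function yourself, and send back a tool_result block so the model can answer from real data. The get_weather stub is hypothetical; a real integration would call an actual weather API.

Python · Execute & Return (sketch)
def get_weather(city: str, units: str = "celsius") -> str:
    return f"4 degrees {units}, light rain in {city}"   # hypothetical stub

if response.stop_reason == "tool_use":
    call = next(b for b in response.content if b.type == "tool_use")

    # Validate before executing: never trust model-supplied arguments blindly.
    city = call.input.get("city")
    units = call.input.get("units", "celsius")
    if not isinstance(city, str) or units not in ("celsius", "fahrenheit"):
        raise ValueError(f"rejected tool arguments: {call.input}")

    result = get_weather(city, units)

    followup = client.messages.create(
        model=MODEL,
        max_tokens=512,
        tools=[weather_tool],
        messages=[
            {"role": "user", "content": "Do I need a jacket in Oslo today?"},
            {"role": "assistant", "content": response.content},
            {
                "role": "user",
                "content": [{
                    "type": "tool_result",
                    "tool_use_id": call.id,
                    "content": result,
                }],
            },
        ],
    )
    print(followup.content[0].text)   # natural-language answer grounded in live data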

Context Engineering

Context engineering is deciding what you put in the context window, in what order, and how much of it — models are extremely sensitive to all three. It's the highest-leverage skill because it affects every single prompt you write.

Most prompt failures aren't model failures. They're context failures: too much noise, wrong order, missing examples, or competing instructions.

Highest leverage · Token efficiency · Output quality

A · Primacy & Recency

Models weight the start and end of the context window most heavily. Put system instructions at the top. Put the actual task at the bottom. Background goes in the middle.

B · Compression

Every token competes. Verbose context buries signal in noise. Ruthlessly remove filler. 10 precise words beat 100 vague ones.

C · Few-Shot Examples

One well-chosen input→output example beats three paragraphs of instruction. Show the model exactly what you want rather than describing it.
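
A tiny sketch of this pattern with the Anthropic Python SDK; the ticket-triage task and labels are invented for illustration.

Python · Few-Shot (sketch)
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # placeholder; substitute a current model name

# One worked example shows the format; no paragraph of instructions needed.
few_shot = """Classify the support ticket as BUG, BILLING, or HOW-TO.

Ticket: "I was charged twice for March."
Label: BILLING

Ticket: "The export button does nothing when I click it."
Label:"""

response = client.messages.create(
    model=MODEL,
    max_tokens=10,
    messages=[{"role": "user", "content": few_shot}],
)
print(response.content[0].text.strip())   # expected: BUG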

D · Role + Constraint Framing

Telling the model who it is AND what it must not do is more reliable than telling it what to do. Negative constraints are powerful.

Prompt Structure
─── TOP (Primacy — highest attention) ───────────────
SYSTEM: Role definition + hard constraints + persona

─── MIDDLE (Lower attention — use for reference) ────
CONTEXT: Background info, examples, prior conversation

─── BOTTOM (Recency — highest attention) ────────────
USER: The actual task / question you want answered
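
A sketch of that layout as a single API call, using a hypothetical support-bot scenario: role and hard constraints go in the system parameter (top), reference material sits in the middle of the user turn, and the actual question comes last.

Python · Structured Context (sketch)
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # placeholder; substitute a current model name

# TOP: role definition + hard constraints (including a negative constraint)
system = (
    "You are a support engineer for Acme Dashboards. "
    "Never mention features that are not in the documentation excerpt. "
    "Answer in at most three sentences."
)

# MIDDLE: background / reference material
docs_excerpt = "Exports: CSV and PDF only. Scheduled exports require the Pro plan."

# BOTTOM: the actual task, stated last so it gets recency weight
user = (
    f"Reference documentation:\n{docs_excerpt}\n\n"
    "Customer question: Can I schedule a weekly XLSX export?"
)

response = client.messages.create(
    model=MODEL,
    max_tokens=300,
    system=system,
    messages=[{"role": "user", "content": user}],
)
print(response.content[0].text)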

See how different context quality produces different outputs for the same underlying task.

Context Quality A/B — Live API Calls
❌ Weak Context · ✓ Strong Context
Key insight: Your AI Execution Contract (the document governing how an AI assistant works with you) is context engineering applied as a meta-prompt. It defines role, constraints, and output format at the top — exactly where it has maximum effect.