LangGraph in Practice — Reflection Agents and Planning Patterns
Auto-validate agent outputs with Self-Critique and systematically decompose complex tasks with Planning patterns. Implementing Generator-Reflector architecture and Plan-and-Execute.

The ReAct agent we built in Part 1 has one critical weakness: it doesn't know when it's wrong. Even when it gives a wrong answer, such as "Seoul's population is 50 million," it stays fully confident. The Reflection pattern gives agents the ability to verify their own output, and the Planning pattern gives them the ability to systematically decompose complex tasks.
Series: Part 1: ReAct Pattern | Part 2 (this post) | Part 3: MCP + Multi-Agent | Part 4: Production Deployment
Self-Critique: How Agents Verify Their Own Output
People revise their writing after a first draft because first drafts are rarely perfect. The same holds for LLM agents: expecting a perfect answer in one shot is unrealistic. Building a loop in which the agent verifies and improves its own output yields markedly better quality.