How Claude Code Actually Works (It's Simpler Than You Think)

As a developer who uses Claude Code daily, what strikes me about this leak isn't the leak itself: it's that roughly 90% of this 512,000-line codebase is defensive programming, essentially an attempt to make a probabilistic LLM behave deterministically.
There are also unreleased features like Kairos that hint at where AI coding tools are heading. Not better autocomplete — autonomous agents that dream about your code 24/7.
How It Happened
Apparently one of the developers included a .map file, a source map used to deobfuscate production code. When code ships to production it's obfuscated and minified, making reverse engineering significantly harder. But with a source map file? Wide open.
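To see why this one file matters so much, here's roughly what a Source Map v3 file contains. The field names come from the source map spec; the file and module names below are made up for illustration, not taken from the actual leak:

```json
{
  "version": 3,
  "file": "cli.js",
  "sources": ["../src/query-engine.ts", "../src/tools/bash.ts"],
  "sourcesContent": ["/* the complete, un-minified original TypeScript source is embedded here */"],
  "names": ["QueryEngine", "runBashTool"],
  "mappings": "AAAA,SAASA..."
}
```

The killer is `sourcesContent`: when a bundler is configured to inline sources, the map embeds the entire original codebase verbatim, original file paths, variable names, comments and all. Shipping it alongside the minified bundle undoes the obfuscation completely.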
I don't know exactly how Anthropic's deployment process works, but this map file was probably included by a developer during the npm publish step, or it was checked into Git and made its way into the CI/CD pipeline.
Chaos ensued.
The Clean Room Rewrites
I'm not going to show the actual source code — Anthropic would DMCA strike me into oblivion. But plenty of people have already rewritten the leaked code. For example, Claurst is a repository where Claude Code's behaviors are reimplemented in Rust using a clean room approach — written in an air-gapped environment, never directly referencing the original source.
There are even SaaS solutions now for "clean room rewriting" open source repos, where the system recreates functionality from documentation and API specs without ever seeing the original code. It's not illegal to take someone's idea and reimplement it — you just can't carbon-copy the code.
As soon as the leak dropped, people started churning out implementations in Rust, Python, you name it. And you can bet competitors have already snatched ideas to improve their own tools.
What the Code Actually Reveals
Here's what we learned about Claude Code's architecture:
Plugin-like tool system. There's a base tool definition (allegedly 29,000 lines of TypeScript) with more than 40 tools built on top of it.
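A plugin-like tool system generally boils down to a shared interface plus a registry. This is a minimal sketch of that pattern, not the actual leaked definitions; all names here are illustrative:

```typescript
// Base tool definition: every tool conforms to this shape.
interface ToolDefinition<In, Out> {
  name: string;
  description: string;          // shown to the model so it knows when to call the tool
  run(input: In): Promise<Out>; // the actual side-effecting implementation
}

// Concrete tools are just objects implementing the base definition.
const readTool: ToolDefinition<{ path: string }, string> = {
  name: "Read",
  description: "Read a file from disk",
  run: async ({ path }) => (await import("fs/promises")).readFile(path, "utf8"),
};

const echoTool: ToolDefinition<{ text: string }, string> = {
  name: "Echo",
  description: "Return the input text unchanged",
  run: async ({ text }) => text,
};

// A registry lets the agent loop dispatch by the tool name the model emits.
const registry = new Map<string, ToolDefinition<any, any>>(
  [readTool, echoTool].map((t) => [t.name, t]),
);

async function dispatch(name: string, input: unknown): Promise<unknown> {
  const tool = registry.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.run(input);
}
```

The appeal of this shape is that adding tool number 41 is just another object in the registry; the loop that calls `dispatch` never changes.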
A query engine. This handles all LLM API calls, response streaming, caching, and orchestration. It's the largest single module in the codebase, which makes sense — it's the brain of the operation.
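The streaming-plus-caching core of such an engine can be sketched with an async generator. `fakeModel` below is a stand-in for the real API call; the actual module is vastly larger and this only illustrates the control flow:

```typescript
type Token = string;

// Stand-in for the streaming LLM API: yields tokens one at a time.
async function* fakeModel(prompt: string): AsyncGenerator<Token> {
  for (const word of `echo: ${prompt}`.split(" ")) yield word + " ";
}

const cache = new Map<string, string>();

// Streams tokens to the caller as they arrive, while accumulating the full
// response so an identical prompt can later be served from cache.
async function* query(prompt: string): AsyncGenerator<Token> {
  const cached = cache.get(prompt);
  if (cached !== undefined) {
    yield cached; // cache hit: emit the whole response at once
    return;
  }
  let full = "";
  for await (const token of fakeModel(prompt)) {
    full += token;
    yield token; // this is where tokens would be rendered live to the terminal
  }
  cache.set(prompt, full);
}
```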
Multi-agent orchestration. Claude Code can spawn sub-agents (internally called "swarms") for complex, parallelizable tasks. I've used this myself — asking Claude to kick off 15 Explorer agents simultaneously, each investigating a different part of a codebase, then reporting findings back to the main agent.
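The fan-out/fan-in pattern behind that workflow is straightforward to sketch. `runSubAgent` here is a stub; the real thing would spin up a full agent loop with its own context window and a scoped exploration prompt:

```typescript
interface Finding {
  area: string;
  summary: string;
}

// Stand-in for spawning one sub-agent on a slice of the codebase.
async function runSubAgent(area: string): Promise<Finding> {
  // In reality: a fresh conversation seeded with an exploration prompt for `area`.
  return { area, summary: `explored ${area}` };
}

// Fan out N explorers in parallel, then hand all findings back to the main agent.
async function explore(areas: string[]): Promise<Finding[]> {
  return Promise.all(areas.map(runSubAgent));
}
```

The main agent then gets the findings as ordinary tool results and synthesizes them, which is why the pattern parallelizes so well: each explorer burns its own context window instead of the parent's.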
Persistent memory system. Everything about how it remembers context across sessions was in there.
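Claude Code's actual persistence format isn't something I'll reproduce here, but the general shape of a session-spanning memory store is a file-backed append/recall pair, sketched below with illustrative names:

```typescript
import { readFile, writeFile } from "fs/promises";

interface MemoryEntry {
  timestamp: number;
  text: string;
}

// File-backed memory: survives across sessions because it lives on disk.
class MemoryStore {
  constructor(private path: string) {}

  async recall(): Promise<MemoryEntry[]> {
    try {
      return JSON.parse(await readFile(this.path, "utf8"));
    } catch {
      return []; // first run: no memory file yet
    }
  }

  async remember(text: string): Promise<void> {
    const entries = await this.recall();
    entries.push({ timestamp: Date.now(), text });
    await writeFile(this.path, JSON.stringify(entries, null, 2));
  }
}
```

On startup, recalled entries get folded into the assembled system prompt, which is how context appears to "carry over" between otherwise stateless API calls.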
The 11-Step Agent Loop
Someone quickly built ccunpacked.dev — a visualization tool walking through Claude Code's agent loop. Here are the 11 steps:
- Input — You type your prompt
- Create user message — Wraps your text into Anthropic's message format (standard practice)
- Append to history — Message gets pushed onto the in-memory conversation array
- Assemble system prompt — Pulls in CLAUDE.md, tool definitions, context, memory
- API streaming — Sends everything to the API, streams back the response
- Token parsing — Parses tokens as they arrive, renders live to terminal
- Tool detection — Identifies when the model wants to use a tool (Bash, Read, Edit, etc.)
- Execution loop — Collects tool results, appends to history, calls the API again. Keeps iterating until the task is complete
- Response rendering — Renders the final response as Markdown in the terminal
- Post-sampling hooks — Auto-compact if conversation is too long, extract memories, run dream mode (not yet enabled — this is the Kairos system)
- Await next input — Back to the read-eval-print loop
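The 11 steps above collapse into a surprisingly small loop. In this sketch, `model` is a toy stand-in that requests one tool call and then answers; the real message shapes follow Anthropic's API, which this only approximates:

```typescript
type Message = { role: "user" | "assistant" | "tool"; content: string };

interface ModelReply {
  toolCall?: { name: string; input: string };
  text?: string;
}

// Toy model: asks for one tool call, then answers once it sees the result.
function model(history: Message[]): ModelReply {
  const sawToolResult = history.some((m) => m.role === "tool");
  return sawToolResult
    ? { text: "done: " + history[history.length - 1].content }
    : { toolCall: { name: "Echo", input: "ping" } };
}

// Stand-in for Bash/Read/Edit execution.
function runTool(name: string, input: string): string {
  return `${name} -> ${input}`;
}

function agentTurn(prompt: string): string {
  const history: Message[] = [{ role: "user", content: prompt }]; // steps 2-3
  while (true) {
    const reply = model(history);        // steps 4-6: assemble, send, parse
    if (reply.toolCall) {                // step 7: tool detection
      const result = runTool(reply.toolCall.name, reply.toolCall.input);
      history.push({ role: "tool", content: result }); // step 8: append, re-call
      continue;
    }
    return reply.text!;                  // step 9: final response
    // steps 10-11 (post-sampling hooks, await next input) run after this returns
  }
}
```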
The Post-Sampling Hooks Are the Interesting Part
Given the same training data and compute, an LLM itself can only get so good; past a point, you can't teach the model much more. The real improvements come from better tooling around it.
Auto-compacting, persistent memory, dream mode — these are the features that help Claude Code recall past conversations and keep context in check across sessions. Kairos (dream mode) isn't released yet, but it represents a shift: AI coding tools that continue working and thinking even when you're not actively prompting them.
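Auto-compacting is the easiest of these hooks to sketch: when history grows past a budget, fold the oldest messages into a single summary entry. In this illustration `summarize` is a stub; the real system presumably asks the model itself to write the summary:

```typescript
type Msg = { role: string; content: string };

// Stub: a real implementation would call the LLM to summarize the old turns.
function summarize(msgs: Msg[]): string {
  return `summary of ${msgs.length} earlier messages`;
}

// If history exceeds the budget, keep a recent tail and replace everything
// older with one compact summary message.
function autoCompact(history: Msg[], maxMessages: number): Msg[] {
  if (history.length <= maxMessages) return history;
  const keep = history.slice(-Math.floor(maxMessages / 2)); // recent tail survives
  const old = history.slice(0, history.length - keep.length);
  return [{ role: "system", content: summarize(old) }, ...keep];
}
```

Running this as a post-sampling hook means the conversation never blows past the context window mid-task; the trade-off is that details in the summarized turns are only as good as the summary.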
What Happens Now?
At the end of the day, Claude Code isn't magic. There's an LLM on the other side — the difference is in how it's interfaced with, the tool architecture, the memory system, the agent loop.
I'm not sure how Anthropic will react to this. The logical next step seems like open-sourcing it. Unless they plan to completely rewrite the internals and change everything about how Claude Code works, everyone already knows the secret sauce.
Maybe people just don't care. It's not like we're all going to go write our own Claude Codes, right?
I guess we'll see.