Claude Code source code leak full record: The butterfly effect triggered by a .map file
Written by: Claude
In the early hours of March 31, 2026, a tweet sparked a huge commotion in the developer community.
Chaofan Shou, an intern at a blockchain security company, discovered that an official npm package from Anthropic included a source map file, exposing Claude Code’s complete source code to the public internet. He immediately shared this finding on X, along with a direct download link.
The post spread through the developer community like wildfire. Within hours, more than 512,000 lines of TypeScript code had been mirrored to GitHub, and thousands of developers were analyzing it in real time.
This was the second major information leak incident Anthropic had suffered in less than a week.
Just five days earlier (March 26), a CMS configuration error at Anthropic exposed nearly 3,000 internal files, including draft blog posts for the upcoming “Claude Mythos” model.
The technical cause of the incident is almost laughable: the npm package mistakenly included a source map (.map) file.
The purpose of files like this is to map compressed and obfuscated production code back to the original source code, making it easier to locate the correct line numbers when debugging. But this .map file contained a link to a zip archive stored in Anthropic’s own Cloudflare R2 storage bucket.
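To make the mechanics concrete, here is a minimal sketch (not Anthropic's code) of how a production bundle points to its source map, and why shipping the .map file exposes the original sources: the map is plain JSON, and its optional sourcesContent field can embed every original source file verbatim.

```typescript
// A bundle typically ends with a comment like:
//   //# sourceMappingURL=cli.js.map
const SOURCE_MAP_RE = /\/\/# sourceMappingURL=(\S+)\s*$/;

function findSourceMapUrl(bundleText: string): string | null {
  const match = bundleText.match(SOURCE_MAP_RE);
  return match ? match[1] : null;
}

// The .map file itself is JSON (source map v3). The optional `sourcesContent`
// field can carry every original source file in full, which is exactly what
// leaks when the map ships to production.
interface SourceMap {
  version: number;
  sources: string[];
  sourcesContent?: (string | null)[];
  mappings: string;
}

function embeddedSourceCount(map: SourceMap): number {
  return (map.sourcesContent ?? []).filter((s) => s !== null).length;
}

const bundle = 'console.log("hi");\n//# sourceMappingURL=cli.js.map';
console.log(findSourceMapUrl(bundle)); // -> cli.js.map
```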
Shou and other developers downloaded this zip package directly—no hacking was involved. The files were simply there, fully public.
The affected version was @anthropic-ai/claude-code v2.1.88, bundled with a 59.8MB JavaScript source map file.
In its response to The Register, Anthropic admitted: “A similar source code leak occurred in February 2025 with an earlier Claude Code version.” That means the same mistake happened twice within 13 months.
Ironically, Claude Code itself includes a system called "Undercover Mode," designed specifically to prevent Anthropic's internal codenames from being accidentally exposed in git commit history. Then an engineer shipped the entire source code in a .map file.
Another contributing factor may have been the toolchain itself: Anthropic acquired Bun late last year, and Claude Code is built on Bun. On March 11, 2026, someone filed a bug report (#28001) in Bun's issue tracker stating that, contrary to the official documentation, Bun still generates and outputs source maps in production mode. That issue remains open to this day.
In response, Anthropic’s official statement was brief and restrained: “No user data or credentials were involved or exposed. This was a human error during the release packaging process, not a security vulnerability. We are rolling out measures to prevent events like this from happening again.”
Code scale
The leaked content covered about 1,900 files and more than 500,000 lines of code. What leaked is not model weights, but the engineering implementation of Claude Code's entire "software layer": the tool-calling framework, multi-agent orchestration, the permissions system, the memory system, and other core architecture.
Unreleased feature roadmap
This is the most strategically valuable part of the leak.
KAIROS autonomous guardian process: This feature codename, mentioned more than 150 times, comes from the Ancient Greek word for "the right moment," and represents Claude Code's fundamental shift toward a "resident background agent." KAIROS includes a process called autoDream, which runs "memory consolidation" while the user is idle: merging fragmented observations, eliminating logical contradictions, and solidifying vague insights into deterministic facts. When the user returns, the Agent's context is already cleaned up and highly relevant.
Internal model codenames and performance data: The leaked material confirms that Capybara is the internal codename for a Claude 4.6 variant, Fennec corresponds to Opus 4.6, and the unreleased Numbat is still under testing. Code comments also revealed that Capybara v8 has a 29–30% rate of false statements, up from 16.7% in v4.
Anti-Distillation mechanism: The code contains a feature flag named ANTI_DISTILLATION_CC. When enabled, Claude Code injects fake tool definitions into API requests, with the goal of polluting API traffic data that competitors might use for model training.
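Based only on the flag name and behavior described above, a flag-gated decoy injection could look something like the following sketch; the tool names and data shapes here are invented for illustration and are not from the leaked code.

```typescript
interface ToolDefinition {
  name: string;
  description: string;
}

// Hypothetical decoy tools: they look real in captured API traffic but are
// never actually callable.
const DECOY_TOOLS: ToolDefinition[] = [
  { name: "fs_snapshot_v2", description: "Decoy tool; not implemented." },
  { name: "net_trace", description: "Decoy tool; pollutes scraped traffic." },
];

function buildToolList(
  realTools: ToolDefinition[],
  flags: Record<string, boolean>,
): ToolDefinition[] {
  // With the flag off, the request is unchanged.
  if (!flags["ANTI_DISTILLATION_CC"]) return realTools;
  // With the flag on, fake definitions are mixed in so that anyone training
  // a model on captured traffic ingests tools that do not exist.
  return [...realTools, ...DECOY_TOOLS];
}
```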
Beta API feature list: The constants/betas.ts file reveals all beta features in Claude Code’s API negotiation, including a 1 million token context window (context-1m-2025-08-07), AFK mode (afk-mode-2026-01-31), task budget management (task-budgets-2026-03-13), and a series of capabilities that have not yet been publicly released.
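Only the beta identifiers themselves appear in the leak as reported; the file shape below is an assumption about what a constants/betas.ts along those lines might look like, with beta opt-ins joined into a single header value as is common in API clients.

```typescript
// Hypothetical reconstruction; only the string identifiers come from the
// reported leak, the surrounding structure is assumed.
const BETA_FEATURES = [
  "context-1m-2025-08-07",   // 1-million-token context window
  "afk-mode-2026-01-31",     // AFK mode
  "task-budgets-2026-03-13", // task budget management
] as const;

type BetaFeature = (typeof BETA_FEATURES)[number];

function betaHeader(enabled: BetaFeature[]): string {
  // Beta opt-ins are commonly sent as one comma-separated header value.
  return enabled.join(",");
}
```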
An embedded Pokémon-style virtual companion system: The code even hides a complete virtual companion system (Buddy), including species rarity, shiny variants, procedurally generated attributes, and a “soul description” written by Claude during the first incubation. Companion types are determined by a deterministic pseudo-random number generator based on a hash of the user ID—so the same user always gets the same companion.
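The "same user, same companion" property follows directly from seeding a deterministic PRNG with a hash of the user ID. The sketch below uses FNV-1a and mulberry32 as stand-ins; the leaked code's actual hash and generator are unknown.

```typescript
// FNV-1a: a simple, fast 32-bit string hash (stand-in for whatever the real
// code uses).
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash >>> 0;
}

// mulberry32: a tiny seeded PRNG returning floats in [0, 1).
function mulberry32(seed: number): () => number {
  let state = seed >>> 0;
  return () => {
    state = (state + 0x6d2b79f5) >>> 0;
    let t = state;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Illustrative species list; not from the leak.
const SPECIES = ["capybara", "fennec", "numbat", "axolotl"];

function companionFor(userId: string): { species: string; shiny: boolean } {
  // Seeding from the user-ID hash makes every draw reproducible, so the same
  // user always gets the same companion.
  const rand = mulberry32(fnv1a(userId));
  return {
    species: SPECIES[Math.floor(rand() * SPECIES.length)],
    shiny: rand() < 1 / 512, // rare "shiny" variant
  };
}
```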
This incident did not happen in isolation. During the same time window in which the source code was leaked, the axios package on npm was hit by a separate supply chain attack.
Between 00:21 and 03:29 UTC on March 31, 2026, anyone who installed or updated Claude Code through npm could inadvertently pull in a malicious axios release (1.14.1 or 0.30.4) containing a remote access trojan (RAT).
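A minimal post-incident check is to scan resolved dependency versions against the compromised releases named above. The sketch below assumes a flattened name-to-version map; real npm lockfiles nest dependencies and would need fuller parsing.

```typescript
// Compromised axios releases from the incident described above.
const COMPROMISED_AXIOS = new Set(["1.14.1", "0.30.4"]);

function isCompromisedAxios(name: string, version: string): boolean {
  return name === "axios" && COMPROMISED_AXIOS.has(version);
}

// deps maps package name -> resolved version, e.g. extracted from a lockfile.
function scanDependencies(deps: Record<string, string>): string[] {
  return Object.entries(deps)
    .filter(([name, version]) => isCompromisedAxios(name, version))
    .map(([name, version]) => `${name}@${version}`);
}
```

Note that a clean scan is not an all-clear: if the malicious version ever ran, the advice quoted below (treat the host as compromised) still applies.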
Anthropic advised developers affected by the attack to treat the host as fully compromised, rotate all keys, and reinstall the operating system.
The temporal overlap between the two incidents made the situation even more chaotic and dangerous.
Direct damage to Anthropic
For a company with annualized revenue of $19 billion and currently in a phase of rapid growth, this leak is not just a security lapse—it represents a loss of strategic intellectual property.
At least some of Claude Code’s capabilities do not come from the underlying large language model itself, but from the “framework” of software built around the model—it guides how the model uses tools and provides important guardrails and instructions to standardize model behavior.
Those guardrails and instructions are now crystal clear to competitors.
A warning to the entire AI Agent tool ecosystem
This leak won’t bring down Anthropic, but it gives every competitor a free engineering textbook—how to build production-grade AI programming agents, and which tool directions are worth prioritizing.
The real value of the leaked material is not in the code itself, but in the product roadmap revealed by the feature flags. KAIROS, the anti-distillation mechanism—these are the strategic details that competitors can now anticipate and react to first. Code can be refactored, but once a strategic surprise is exposed, it can’t be taken back.
This leak is a mirror reflecting several core propositions of today’s AI Agent engineering:
1. The boundaries of an Agent’s capabilities are determined to a large extent by the “framework layer,” not by the model itself
The exposure of 500,000 lines of Claude Code reveals a fact meaningful to the entire industry: with the same underlying model, different tool orchestration frameworks, memory management mechanisms, and permissions systems produce radically different Agent capabilities. This means that “whose model is the strongest” is no longer the only competitive dimension—“whose framework engineering is more refined” is equally crucial.
2. Long-range autonomy is the next core battleground
The existence of the KAIROS guardian process shows that the next phase of industry competition will focus on “enabling Agents to keep working effectively even when no one is supervising them.” Background memory consolidation, cross-session knowledge transfer, autonomous reasoning during idle time—once these capabilities mature, they will fundamentally change the basic mode of collaboration between Agents and humans.
3. Anti-distillation and intellectual property protection will become new foundational topics in AI engineering
Anthropic implemented an anti-distillation mechanism at the code level, indicating that a new engineering domain is taking shape: how to prevent one’s own AI system from being used by competitors for training data collection. This is not only a technical issue—it will evolve into a new battleground for legal and commercial competition.
4. Supply chain security is the Achilles’ heel of AI tools
When AI programming tools themselves are distributed through public software package managers like npm, they face the same supply chain attack risks as other open-source software. The special risk with AI tools is that once a backdoor is inserted, attackers gain not only the ability to execute code, but deep penetration into the entire development workflow.
5. The more complex the system, the more it needs automated release guards
“One misconfigured .npmignore or files field in package.json can expose everything.” For any team building AI Agent products, this lesson does not have to be learned at such a high price: automated review of release contents in the CI/CD pipeline should be standard practice, not a remedial measure taken after the damage is done.
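One possible shape for such a guard, sketched under the assumption that CI can obtain the list of files about to be published (for npm, `npm pack --dry-run --json` reports this): fail the release if any source map or other disallowed artifact appears in the list.

```typescript
// Patterns that should never appear in a published package; extend as needed.
const DISALLOWED = [/\.map$/, /\.env$/, /(^|\/)\.npmrc$/];

function disallowedFiles(packedFiles: string[]): string[] {
  return packedFiles.filter((f) => DISALLOWED.some((re) => re.test(f)));
}

// Example: a release about to ship its source map.
const files = ["dist/cli.js", "dist/cli.js.map", "package.json"];
const bad = disallowedFiles(files);
if (bad.length > 0) {
  // In CI this would be followed by process.exit(1) to block the publish.
  console.error(`Refusing to publish: ${bad.join(", ")}`);
}
```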
Epilogue
Today is April 1, 2026—April Fool’s Day. But this is not a joke.
Anthropic made the same mistake twice within 13 months. The source code has already been mirrored worldwide, and DMCA deletion requests can’t keep up with the speed of forks. The product roadmap that was supposed to be hidden in the internal network is now a reference for everyone.
For Anthropic, this is a painful lesson.
For the entire industry, this is an unexpected transparent moment—one that lets us see exactly how today’s leading AI programming Agents are built, line by line.