On Tuesday, March 31, 2026, the tech community was surprised by one of the biggest intellectual-property leaks of the year: the full source code of Claude Code, Anthropic's AI-assisted programming tool, was publicly exposed on the npm registry. Roughly 512,000 lines of TypeScript, spread across about 1,900 files, were accessible to anyone who knew where to look.
For companies that depend on CI/CD pipelines, package supply chains and AI tooling in their development flows, this incident is a red alert that can't be ignored.
What happened?
Security researcher Chaofan Shou was the first to publicly flag the problem. Version 2.1.88 of the official Claude Code npm package was published with a 59.8 MB source map file that had no business being there. Source maps are debugging files that map compiled code back to the original source — and in this case, the file contained the tool's full source code.
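To see why a stray map file amounts to a full source leak: a source map's `sourcesContent` field can embed the original files verbatim. A simplified, schematic example (file names invented for illustration):

```json
{
  "version": 3,
  "file": "cli.js",
  "sources": ["src/cli.ts"],
  "sourcesContent": ["export function main() { /* the original TypeScript, embedded verbatim */ }"],
  "mappings": "AAAA,SAASA"
}
```

Any tooling that reads the map can reconstruct the entire original source tree from `sourcesContent` alone, with no access to the repository.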
The cause? Claude Code is packaged with Bun, the JavaScript runtime Anthropic acquired. Bun generates source maps during builds by default. Someone on the team forgot to add `*.map` to `.npmignore` or to disable source-map generation in production builds. A simple human error, but with huge consequences.
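One low-effort safeguard (a sketch, not Anthropic's actual configuration) is to stop relying on ignore files entirely and instead allowlist what gets published via the `files` field in `package.json`:

```json
{
  "name": "your-cli",
  "version": "1.0.0",
  "bin": { "your-cli": "dist/cli.js" },
  "files": ["dist/**/*.js"]
}
```

With an allowlist, a forgotten `*.map` entry in `.npmignore` no longer matters: anything not matched by `files` simply isn't packed.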
What was found in the code
The community didn't waste time. Within hours the code had been mirrored across GitHub repos — some racking up more than 41,500 forks before Anthropic started sending DMCA takedown notices.
Among the most notable findings:
- A three-layer memory architecture: a sophisticated system with indexed MEMORY.md files, on-demand topic files and searchable session transcripts.
- 44 feature flags: functionality that's ready but not shipped, hidden behind flags that compile to `false` in public builds.
- KAIROS mode: a persistent background assistant that observes, logs and acts proactively, keeping daily logs with a 15-second budget for autonomous actions.
- Undercover Mode: a mode that instructs Claude Code to hide the fact that it's an AI when Anthropic employees use it in public open-source repositories.
- A five-level permission system controlling access to different operations and tools.
- Fork-join subagent model: task parallelization using KV cache to optimize performance.
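The "compile to `false`" mechanic is a standard bundler trick rather than anything unique to this leak. A minimal TypeScript sketch (the flag name is hypothetical; a real build would inject the constant with a bundler option such as esbuild's or Bun's `--define`):

```typescript
// Hypothetical build-time flag. A bundler's --define replaces this with a
// literal, so the disabled branch is stripped from public builds by
// dead-code elimination.
const FEATURE_KAIROS: boolean = false;

export function startupMessage(): string {
  if (FEATURE_KAIROS) {
    // Dead code when the flag compiles to false; absent from the shipped bundle.
    return "kairos: background assistant active";
  }
  return "kairos: disabled in this build";
}

console.log(startupMessage());
```

The leaked source exposed both branches at once, which is exactly what dead-code elimination normally prevents outsiders from seeing.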
Anthropic's response
Anthropic confirmed the incident quickly, stating:
"This was a release packaging issue caused by human error, not a security flaw. No sensitive customer data or credentials were involved or exposed. We're implementing measures to prevent this from happening again."
The company sent DMCA notices to GitHub to take down copies of the code, but later acknowledged that the action impacted more repositories than intended and significantly narrowed the scope of the takedowns.
The real risk: supply-chain attacks
Beyond the leak itself, the incident created an active, immediate security risk. Attackers started exploiting the situation in two ways:
- Malicious npm packages: packages like `color-diff-napi` were registered, mimicking Claude Code's internal dependencies and targeting developers who tried to compile the leaked code.
- Dependency confusion attacks: attackers registered Anthropic internal package names on public npm, trying to intercept automatic installs in CI/CD pipelines.
- Possible trojanization: reports suggest that users who installed or updated Claude Code via npm on March 31, 2026 between 00:21 and 03:29 UTC may have pulled a compromised version containing a remote access trojan.
These attack vectors are particularly dangerous because they exploit the implicit trust that automated pipelines place in public package registries.
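The standard defense is to make internal package names unresolvable on the public registry. A minimal `.npmrc` sketch (the scope and URL are hypothetical):

```ini
# Route the internal scope to the private registry only
@acme:registry=https://npm.internal.acme.example/
# Don't let freshly resolved packages run install scripts automatically
ignore-scripts=true
```

Combined with publishing internal packages only under the `@acme` scope, a look-alike name on public npm has nothing to hijack.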
Why this matters for your company
If you use AI tools in your development flow — and most tech companies already do — this incident raises critical questions:
- Do you audit the packages that enter your pipeline? Dependency confusion and typosquatting are real, growing threats.
- Are your production builds clean? Source maps, debug tokens, environment variables — how many development artifacts are leaking into production without you knowing?
- Is your software supply chain secure? Tools like lockfiles, package signatures and private registries are essential to protect your pipeline.
- Do you have visibility into what your AI tools do? The leak revealed that Claude Code has hidden modes, undocumented flags and behaviors that aren't in the public docs.
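For the second question, npm can answer it directly: `npm pack --dry-run` lists every file that would ship without publishing anything. A self-contained demonstration in a throwaway project (file names invented):

```shell
dir=$(mktemp -d) && cd "$dir"
npm init -y >/dev/null 2>&1
touch index.js index.js.map          # simulate a stray source map in the build output
# List the would-be package contents and flag any source maps
npm pack --dry-run 2>&1 | grep '\.map$' && echo "warning: source maps would be published"
```

Run the same check against your real packages in CI and fail the build whenever the grep matches.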
If Anthropic — a company valued at billions of dollars, with a stated focus on AI safety — made this kind of mistake, your company is vulnerable too.
What to do now
We recommend the following immediate actions:
- Audit your dependencies: check whether any suspicious package was installed recently in your projects, especially if you use Claude Code.
- Rotate credentials: if you updated Claude Code via npm on March 31, rotate every token, API key and access credential as a precaution.
- Enforce integrity verification: use strict lockfiles (`pnpm-lock.yaml`, `package-lock.json`), enable `npm audit` in CI and consider tools like Snyk or Socket.dev.
- Review your `.npmignore` and `.dockerignore`: make sure source maps, debug files and credentials aren't being included in production builds.
- Adopt private registries: for internal dependencies, use private registries (Artifactory, Verdaccio, GitHub Packages) to prevent dependency confusion.
- Monitor continuously: set up alerts for new packages or unexpected versions in your dependencies.
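Sketched as CI steps (GitHub Actions syntax assumed here; adapt to your runner), the integrity checks above look like this:

```yaml
# Hypothetical pipeline fragment: reproducible installs plus audit gates
- name: Install strictly from the lockfile
  run: npm ci --ignore-scripts     # fails if package-lock.json is missing or out of sync
- name: Block known high-severity advisories
  run: npm audit --audit-level=high
- name: Refuse builds that would publish source maps
  run: "! npm pack --dry-run 2>&1 | grep -q '\\.map$'"
```

`npm ci` never "repairs" a drifted lockfile the way `npm install` does, which is precisely what makes it the right install command for pipelines.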
How CloudScript can help
At CloudScript, pipeline security isn't an extra — it's a core part of our DevOps, SRE and Platform Engineering work. We help companies:
- Build secure CI/CD pipelines with dependency integrity checks at every stage.
- Implement supply-chain policies that prevent attacks like dependency confusion and typosquatting.
- Configure private package registries and caching strategies that isolate your production environment.
- Run security audits on existing pipelines, catching artifact leaks and insecure configurations.
- Set up observability and alerts to detect anomalies in dependency and tooling behavior.
The Claude Code leak is a reminder that no company is immune to operational mistakes. The difference between an incident and a catastrophe is preparation. Your business could be at risk right now — and you might not know it.
Talk to CloudScript and make sure your pipelines are protected before the next incident hits your operation directly.
References
- CNBC — Anthropic leaks part of Claude Code's internal source code
- The Hacker News — Claude Code Source Leaked via npm Packaging Error
- BleepingComputer — Claude Code source code accidentally leaked in NPM package
- Cybernews — Full source code for Anthropic's Claude Code leaks
- Latent Space — AI News: The Claude Code Source Leak