Claude Code source code leak: In a significant development, AI firm Anthropic has accidentally exposed sensitive internals of its coding tool, Claude Code, raising concerns about security, strategy, and intellectual property. On March 31, 2026, version 2.1.88 of the @anthropic-ai/claude-code npm package reportedly shipped with a 59.8 MB JavaScript source map file.
Because the source map embedded the original sources, it exposed key internal parts of the platform, putting an estimated 512,000 lines of TypeScript within public reach. Within hours, thousands of developers had mirrored the codebase on GitHub, dissecting features and a memory architecture previously known only to Anthropic engineers.
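The exposure mechanism is mundane: a JavaScript source map is plain JSON, and when a build emits the "sourcesContent" field, the map carries every original source file verbatim alongside the compiled bundle. A minimal sketch of how such a file gives up its contents; the file name and contents below are invented for illustration, not taken from the leaked package:

```shell
# Hypothetical minimal source map; real ones from a bundler look the same,
# only vastly larger.
cat > cli.js.map <<'EOF'
{"version":3,"sources":["src/memory.ts"],"sourcesContent":["export const layers = 3;"],"mappings":""}
EOF

# Recovering the original TypeScript paths and bodies is one JSON query away:
node -e '
  const map = JSON.parse(require("fs").readFileSync("cli.js.map", "utf8"));
  map.sources.forEach((src, i) => {
    console.log(src);                   // original file path
    console.log(map.sourcesContent[i]); // full original source text
  });
'
```

At 59.8 MB, a real map of this kind can embed an entire application's sources, which is how a single misplaced file can reveal a codebase of roughly half a million lines.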
The leak is notable because Anthropic is known for strong security practices and strict development controls. However, the issue appears to have been caused by a simple packaging mistake, not a cyberattack. An Anthropic spokesperson said that no sensitive customer data or credentials were exposed. The company clarified that the issue was caused by a human error during the release process and was not a security breach. It also added that steps are being taken to prevent such incidents in the future.
Cybersecurity experts say the incident shows that even top AI companies can make basic operational errors. It has also raised concerns about managing risks as AI systems become more advanced. Analysts believe the leak could impact the company’s reputation, especially as it is reportedly preparing for a $380 billion IPO.
Key features of Claude Code exposed in leak
Developers who studied the leaked code found that the system uses a three-layer memory design to keep the AI's answers clear and accurate. This helps prevent the model from losing context or contradicting itself during long conversations.
One notable feature is called "Self-Healing Memory." It relies on a simple file, MEMORY.md, which acts as an index pointing to separate topic files instead of storing everything in one place. This keeps the system fast and organised. It also follows a rule called "Strict Write Discipline," meaning only verified, successful updates are saved.
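Based solely on the description above, the index pattern might look like the following sketch; the topic file names and entries are hypothetical, not taken from the leaked code:

```shell
# Hypothetical MEMORY.md: a small index that points at topic files,
# rather than one ever-growing memory file.
cat > MEMORY.md <<'EOF'
# Memory index
- Project conventions -> memory/conventions.md
- Open tasks          -> memory/tasks.md
- Debugging notes     -> memory/debugging.md
EOF
cat MEMORY.md
```

Under "Strict Write Discipline," an agent would append to one of the topic files only after an operation verifiably succeeds, so a failed run never pollutes the index.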
Another key feature is KAIROS. It works in the background, even when the user is not active. It helps organise information, combine important details, and keep everything running smoothly. The system also uses small helper programs, called subagents, to handle routine tasks so that the main AI can work without interruption.
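The delegation pattern described here can be sketched in miniature with an ordinary background job; the "subagent" below is just a shell function, and every name is illustrative rather than taken from the leaked code:

```shell
# A routine task (deduplicating notes) handed to a background "subagent"
# so the foreground "main agent" is never blocked by housekeeping.
consolidate_notes() { sort -u notes.txt > notes.consolidated.txt; }

printf 'fix tests\nupdate docs\nfix tests\n' > notes.txt
consolidate_notes &   # subagent runs in the background
wait                  # for this demo, wait so the result can be shown
cat notes.consolidated.txt
```

The real system presumably coordinates far richer workers, but the division of labour is the same: routine consolidation happens off the main path.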
Claude Code source code leaked: What it means for developers
The leaked details give rival developers a rare look into how the system works. The code includes over 2,500 lines of bash validation logic, multi-agent coordination methods, and detailed memory systems. This means competitors can build similar AI tools faster without spending as much time and money on research.
The leak also revealed internal model names such as Capybara, Fennec, and Numbat, along with their performance data. For example, Capybara v8 has a false claim rate of around 29–30%, compared to 16.7% in version 4. This kind of information helps other developers understand the strengths and limits of current AI systems, especially in areas like accuracy and decision-making.
How users can use Claude Code safely
Anthropic has advised users to stop using the npm version of Claude Code and switch to the native installer, which updates automatically and avoids unstable dependencies. Users should uninstall version 2.1.88 right away and, if npm is still required, pin an unaffected release such as 2.1.86.
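Assuming a standard global npm install, the switch looks roughly like this; the native installer command mirrors Anthropic's documented install script, but check the current official docs before piping anything from the network into a shell:

```shell
# Remove the affected npm release (2.1.88):
npm uninstall -g @anthropic-ai/claude-code

# Preferred: switch to the native installer, which self-updates:
curl -fsSL https://claude.ai/install.sh | bash

# Or, if npm is still required, pin an unaffected release:
npm install -g @anthropic-ai/claude-code@2.1.86
```

These commands modify the global environment and require network access, so they are shown here as a reference sequence rather than something to run blindly.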
The company also recommends reviewing API usage for any unusual activity and making sure local systems are free from threats such as remote access trojan (RAT) infections. While data stored in the cloud remains safe, local devices may be at higher risk now that parts of Claude Code's internal workings are publicly available.