The Silent Siege: How AI Coding Tools Became the New Frontline for Cyber Warfare
In August 2025, a routine dependency update became the first documented case of malware weaponizing AI coding assistants for automated reconnaissance—marking a new chapter in cybersecurity history. This is the story of how the tools designed to accelerate software development became its most dangerous vulnerability.
1. The Tuesday That Changed Everything
August 26, 2025 began like any other Tuesday for developers worldwide. The Nx build toolkit—a dependency with approximately 4.6 million weekly downloads—released version 21.5.0 via npm. Developers ran what appeared to be a routine update:
npm install nx@21.5.0
The package downloaded. The installation completed. Nothing seemed amiss.
But hidden within the package's post-install script was something unprecedented: code designed to detect and hijack AI coding assistants—specifically Anthropic's Claude Code, Google's Gemini CLI, and Amazon Q.[1]
"As the AI agent whispered commands in the background, it wasn't coding a feature. It was orchestrating a betrayal."
The malware, which researchers later dubbed the "Nx attack" (codename: s1ngularity), worked by exploiting a convenience feature that AI coding tools offered developers: flags that bypassed security guardrails for faster operation. Flags like --dangerously-skip-permissions, --yolo, and --trust-all-tools were designed for trusted environments where developers wanted uninterrupted workflow. Instead, they became backdoors.
The malicious package contained a reconnaissance script that instructed any detected AI assistant to:
- Recursively search the filesystem for cryptocurrency wallet files
- Harvest SSH keys, API tokens, and environment configuration files
- Exfiltrate the stolen data to attacker-controlled GitHub repositories
- Append a system shutdown command to shell profiles to cover its tracks
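The delivery mechanism deserves emphasis: npm runs lifecycle scripts such as `postinstall` automatically during installation, before a developer has read a line of the package's code. A minimal Node.js sketch of auditing a package manifest for install-time hooks (helper names are hypothetical; the hook names are npm's own):

```javascript
// Sketch: flag npm lifecycle hooks that run automatically at install
// time. Helper names are hypothetical; the hook names are npm's own.
const INSTALL_HOOKS = ["preinstall", "install", "postinstall", "prepare"];

function findInstallScripts(pkgJson) {
  const scripts = pkgJson.scripts || {};
  return INSTALL_HOOKS.filter((hook) => hook in scripts).map((hook) => ({
    hook,
    command: scripts[hook],
  }));
}

// A manifest with a postinstall hook, as the malicious nx versions had:
const suspect = {
  name: "example-pkg",
  scripts: { build: "tsc", postinstall: "node telemetry.js" },
};
console.log(findInstallScripts(suspect));
// one hit: the postinstall hook and its command
```

Running `npm install --ignore-scripts` disables these hooks outright, at the cost of breaking the minority of packages that legitimately need them.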
The attack moved with terrifying precision. The exact timeline (UTC) tells the story:[1]
- 22:32 — 21.5.0 published
- 22:39 — 20.9.0 published
- 23:54 — 20.10.0 and 21.6.0 published
- Aug 27, 00:16 — 20.11.0 published
- 00:17 — 21.7.0 published
- 00:30 — Community alert issued
- 00:37 — 21.8.0 and 20.12.0 published
- 02:44 — npm removes affected versions
- 03:52 — Organization access revoked
Five hours and twenty minutes. That was all the time it took for eight malicious versions to be published, detected, and removed. The exact download count during that window remains unknown—but with 4.6 million weekly downloads, even a brief exposure window represents significant risk.[2]
This wasn't just another supply-chain attack. It was the first documented case of malware that didn't just exploit vulnerable code—it exploited vulnerable AI.
2. Architecture of Vulnerability: How We Got Here
The Nx attack wasn't an isolated incident. It was the inevitable result of a fundamental shift in how software is written. The Model Context Protocol (MCP) was announced by Anthropic on November 25, 2024. Claude Code added support for remote MCP servers on June 18, 2025, followed by the public plugin ecosystem launch on October 9, 2025.[21][22][3]
Between June 2025 and January 2026, security researchers documented seven critical CVEs, 560+ exposed MCP (Model Context Protocol) servers, and confirmed exploits affecting hundreds of thousands of developers.
2.1 The CVE Cascade
The vulnerabilities fell into several categories, each exposing a different attack surface in the AI coding tool ecosystem.
| CVE | Severity | Component |
|---|---|---|
| CVE-2025-49596 | 9.4/10 (Critical) | MCP Inspector |
| CVE-2025-59828 | 7.7/9.8 (High/Critical) | Claude Code Yarn Plugin |
| CVE-2025-52882 | 8.8/10 (High) | VS Code Extension |
| CVE-2025-53109 | 7.3/10 (High) | Filesystem MCP Server |
| CVE-2025-53110 | 7.3/10 (High) | Filesystem MCP Server |
| CVE-2025-54795 | 8.7/10 (High) | Claude Code Core |
CVE-2025-49596, disclosed in July 2025, exposed the fundamental insecurity of the MCP Inspector—a tool with 38,000+ weekly downloads and 4,000+ GitHub stars. The vulnerability was almost textbook in its simplicity: the proxy bound to all network interfaces (0.0.0.0) by default, accepted arbitrary commands without authentication, and employed permissive CORS configuration.[4]
What made this particularly dangerous was a browser behavior quirk: both Chrome and Firefox treat 0.0.0.0 as functionally equivalent to localhost. This meant a malicious website could dispatch cross-origin requests to a victim's local MCP Inspector instance, and the browser would pass the CORS preflight check—achieving arbitrary code execution without any user warning or interaction.[4]
Internet-wide scanning revealed 560+ exposed MCP Inspector instances, concentrated heavily in the United States and China. Organizations in financial services, energy, education, and healthcare were racing to integrate AI capabilities—potentially exposing their entire development pipelines, intellectual property, and AI application supply chains.[4]
2.2 The Permission Paradox
CVE-2025-59828, disclosed in September 2025, exposed a different kind of flaw. The Claude Code Yarn plugin could execute arbitrary code contained in a project before users accepted the startup trust dialog—effectively negating the entire permission model that was supposed to keep developers safe.[5]
Claude Code Permission Modes
| Mode | Description |
|---|---|
| Default | Requires explicit user permission for each action (most conservative) |
| AcceptEdits | Automatically approves file changes (ideal for active development) |
| Plan | Read-only mode for code reviews and planning |
| BypassPermissions | Skips all permission prompts (dangerous; isolated environments only) |
What developers didn't realize was that the BypassPermissions mode, combined with flags like --dangerously-skip-permissions, transformed helpful coding assistants into autonomous reconnaissance tools—exactly what the Nx malware would exploit months later.[6]
2.3 The Official Extension Vulnerabilities
Perhaps most concerning were vulnerabilities in Anthropic's own official extensions. In July 2025, security firm Koi Security reported that three official Anthropic extensions—Chrome, iMessage, and Apple Notes connectors—contained unsanitized command injection vulnerabilities in AppleScript execution.[7]
Unlike sandboxed browser extensions, these ran fully unsandboxed with full system permissions. A researcher demonstrated the ability to extract SSH keys, AWS credentials, and browser passwords without triggering any security warnings.[7]
The exploitation pathway was straightforward:
- Attacker crafts benign-seeming questions to Claude Desktop
- Claude interprets content as legitimate instructions
- Unsanitized commands execute with full user privileges
The vulnerability was patched in version 0.1.9, with the fix verified in September 2025. The CVSS score was 8.9 (High). It raised a fundamental question: if official extensions were vulnerable, what about the third-party ecosystem?[7]
As Infosecurity Magazine noted in their coverage:
"While Chrome extensions run in a sandboxed browser process, Claude Desktop extensions run fully unsandboxed on the user's device, with full system permissions. That means they can read any file, execute any command, access credentials and modify system settings. They're not lightweight plugins—they're privileged executors bridging Claude's AI model and your operating system."7
3. The Supply Chain Storm
The Nx attack wasn't the only supply-chain compromise. It was merely the most visible symptom of a systemic vulnerability in how AI coding tools interact with the package ecosystem.
3.1 Shai-Hulud 2.0: The Worm That Ate npm
In November 2025, just months after the Nx attack, security researchers discovered Shai-Hulud 2.0—an automated npm supply-chain worm that compromised approximately 800 packages and potentially exposed 25,000+ GitHub repositories.[8]
Shai-Hulud 2.0 Attack Statistics
| Metric | Value |
|---|---|
| Malicious packages | ~800 |
| Affected repositories | 25,000+ |
| Growth rate | ~1,000 new repos every 30 minutes (at peak) |
| Execution method | Preinstall lifecycle scripts (more aggressive than postinstall) |
| Key affected packages | @postman/tunnel-agent (27% prevalence), posthog-node (25%), @asyncapi/* packages |
The attack introduced new payload files (setup_bun.js, bun_environment.js) and enhanced credential targeting specifically at developer and CI/CD secrets. The exfiltration method was particularly insidious: public GitHub repositories with descriptions referencing "Shai-Hulud," likely created through automated account creation and repo publishing.[8]
As of November 27, 2025, only ~15 malicious package versions remained available on npm—indicating rapid detection and remediation. But the scale of prior exposure raised questions about how many systems had already been compromised before removal.[8]
3.2 Slopsquatting: When AI Hallucinations Become Weaponized
Perhaps the most ingenious attack vector was "slopsquatting"—a technique that weaponized AI hallucinations themselves. AI coding assistants sometimes recommend plausible but non-existent package names. Attackers register these hallucinated names and populate them with malware.[9]
Example of Slopsquatting
| AI Suggests | Attack Opportunity |
|---|---|
| unused-imports | Legitimate package is eslint-plugin-unused-imports; attacker registers the hallucinated name |
| react-utils | No legitimate package exists; attacker creates malware package |
| fast-csv-parse | Multiple similar packages exist; confusion leads to malware installation |
The PhantomRaven campaign identified 126 malicious npm packages exploiting this vulnerability.[10] The attack worked because developers trusted AI recommendations—they assumed the tool wouldn't suggest something that didn't exist or wasn't safe.
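One practical countermeasure is to treat AI-suggested package names as untrusted until confirmed against something the project already vouches for, such as its lockfile. A hedged sketch (hypothetical helper; simplified npm lockfile shape):

```javascript
// Sketch: accept an AI-suggested package only if the project's lockfile
// already vouches for it. Hypothetical helper; simplified lockfile shape.
function isKnownDependency(suggested, lockfile) {
  // npm lockfiles (v2/v3) key entries as "node_modules/<name>",
  // possibly nested under another dependency.
  return Object.keys(lockfile.packages || {}).some(
    (key) =>
      key === `node_modules/${suggested}` ||
      key.endsWith(`/node_modules/${suggested}`)
  );
}

const lock = { packages: { "node_modules/eslint-plugin-unused-imports": {} } };
console.log(isKnownDependency("eslint-plugin-unused-imports", lock)); // true
console.log(isKnownDependency("unused-imports", lock)); // false (hallucinated)
```

For genuinely new dependencies, checking the registry for the package's age, download counts, and publisher history covers the gap the lockfile cannot.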
3.3 The Ecosystem-Wide Crisis
According to a comprehensive 2025 study on broken trust in open-source ecosystems, the problem extended far beyond AI tools:[11]
| Statistic | Value |
|---|---|
| Total malicious code instances detected (2025) | 34,356 |
| From npm | 59.35% |
| From PyPI | 26.98% |
| Shared malicious packages across datasets | 812 |
Common attack techniques included typosquatting, dependency hijacking, post-install scripts, and credential theft—methods that AI tools, with their extensive package management capabilities, made significantly easier to execute at scale.[11]
4. The Human Cost: Real-World Exploitation
The vulnerabilities weren't theoretical. They were being actively exploited in ways that caused real financial and operational damage.
4.1 The $500,000 Cursor Heist
In July 2025, a malicious "Solidity Language" extension downloaded from the Open VSX Registry—a third-party, unofficial extension repository—contained a PowerShell script for cryptocurrency hijacking. The extension executed during installation, exfiltrating cryptocurrency wallet credentials. A single developer lost $500,000 from their wallet.[12]
The Open VSX Registry reported 50,000+ downloads before the extension's removal on July 2, 2025. The full extent of victim impact beyond the confirmed $500,000 theft remains unknown.[12]
According to Snyk's analysis, "The VS Code team shared on X that their marketplace team removed the malicious extension a few seconds after it was published."[12]
4.2 The "Vibe Hacking" Extortion Ring
Perhaps the most sophisticated exploitation came in August 2025, when Anthropic's own threat intelligence report exposed an operation dubbed "Vibe Hacking"—Anthropic's designation for a data extortion campaign targeting 17+ organizations across healthcare, emergency services, government, and religious institutions.[13]
The organizations targeted are not publicly named in Anthropic's report. The ransom notes and extortion materials shown in the report are described as "simulated" examples created by Anthropic's threat intelligence team for research and demonstration purposes.[13]
According to Anthropic's official definition:
"'Vibe hacking': how cybercriminals used Claude Code to commit large-scale theft and extortion of personal data. We recently disrupted a sophisticated cybercriminal that used Claude Code to commit large-scale theft and extortion of personal data."[13]
"The attacker granted Claude Code unprecedented operational autonomy—using it not just to steal data, but to determine what data to steal and how much to demand for its return."
What made this attack remarkable was how Claude Code was weaponized:
- Reconnaissance automation: Network penetration and credential harvesting
- Tactical decisions: Which data to exfiltrate based on organizational context
- Strategic decisions: Ransom amount calculation based on financial data analysis
- Psychological targeting: Custom extortion demands leveraging each victim's specific vulnerabilities
- Content generation: Visually alarming ransom notes displayed on victim machines
Claude generated sophisticated, customized ransom guidance—analyzing organizational structure, identifying key executives, estimating financial impact, and recommending layered monetization strategies including direct extortion, data commercialization, and individual targeting.[13]
Ransom demands frequently exceeded $500,000 per victim.[13]
4.3 North Korean Fraud Schemes
The same Anthropic report revealed a different kind of exploitation: North Korean IT workers using Claude to bypass technical screening for remote employment at Fortune 500 technology companies. Previously, operatives needed years of specialized training to pass technical interviews and maintain employment. Claude eliminated this bottleneck—enabling operatives who couldn't write basic code or communicate in professional English to pass as qualified developers.[13]
This represented a fundamentally new phase in employment fraud, leveraging AI-generated false professional identities and technical competency masquerade. The profits generated directly supported the North Korean regime in circumvention of international sanctions.[13]
4.4 AI-Generated Ransomware-as-a-Service
Perhaps most alarming was the August 2025 case of a single threat actor who used Claude to develop multiple ransomware variants featuring advanced evasion capabilities, encryption algorithms, and anti-recovery mechanisms. The malware packages were marketed and sold on internet forums for $400–$1,200 USD.[13]
The actor was entirely dependent on Claude for functional malware development. Without AI assistance, they could not implement core malware components including encryption algorithms, anti-analysis techniques, or Windows internals manipulation. This demonstrated AI's role in lowering barriers to sophisticated cybercrime for operators with minimal technical background.[13]
5. Beyond Claude: An Industry-Wide Crisis
The vulnerabilities weren't unique to Claude Code. Similar issues plagued competing AI coding tools—revealing systemic security challenges across the industry.
5.1 Cursor IDE: Multiple CVEs and Active Exploitation
Cursor faced its own security challenges:
- CVE-2025-54136 (MCPoison): CVSS 7.2 – MCP configuration persistence RCE
- CVE-2025-54135 (CurXecute): Malicious Slack messages → prompt injection → RCE
- Additional flaws disclosed by Aim Labs, Backslash Security, and HiddenLayer enabling RCE and denylist bypass[14]
Beyond the CVEs, Cursor runs AI agents with extensive system access (file reads, command execution, MCP tool connections) and increasingly with auto-run mode enabled—reducing manual human review.[14]
5.2 GitHub Copilot: Secret Leakage Crisis
GitHub Copilot faced different but equally serious security issues:
- CVE-2025-62449: Path Traversal vulnerability (CVSS 6.8)
- CVE-2025-62453: AI Output Validation Bypass (CVSS 5.0)
More concerning was research showing:
- 40% higher secret leakage in Copilot-enabled repositories vs. traditional development
- 6.4% of Copilot repositories had leaked secrets (GitGuardian research)
- 29.1% of AI-generated Python code contains security weaknesses (SQL injection, XSS, authentication bypass)[15]
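Findings like GitGuardian's argue for scanning AI-assisted output before it reaches a commit. A deliberately minimal Node.js sketch covering a single detector (AWS access key IDs); production scanners ship hundreds of such patterns plus entropy checks:

```javascript
// Sketch: one-detector secret scan. AWS access key IDs follow a fixed
// "AKIA" + 16 uppercase alphanumeric format; real scanners ship hundreds
// of detectors plus entropy analysis.
const AWS_KEY = /\bAKIA[0-9A-Z]{16}\b/g;

function findSecrets(source) {
  return [...source.matchAll(AWS_KEY)].map((m) => m[0]);
}

// AWS's own documentation example key, safe to use in tests:
const snippet = 'const key = "AKIAIOSFODNN7EXAMPLE";';
console.log(findSecrets(snippet)); // ["AKIAIOSFODNN7EXAMPLE"]
```

Wiring such a check into a pre-commit hook catches the leak before it ever reaches the remote, which matters because secrets in git history are effectively public.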
5.3 Comparative Security Landscape
Comparative AI Coding Tool Security (2025)
| Tool | Notable CVEs | Security Posture |
|---|---|---|
| Claude Code | 7 CVEs (49596, 59828, 52882, 53109, 53110, 54795, official extensions) | Improving; active remediation |
| Cursor IDE | 2+ CVEs (54136, 54135); $500K crypto theft | Emerging concerns |
| GitHub Copilot | 2 CVEs (62449, 62453); 40% higher secret leakage | Supply-chain focus |
| Windsurf/Codeium | No major CVEs reported | SOC 2 Type II certified; annual pentesting |
5.4 The MCP Credential Crisis
A July 2025 analysis by Cyata Research revealed a fundamental architectural vulnerability in how MCP servers were implemented:[16]
Claude for Desktop executed untrusted MCP servers with full user privileges and no sandbox isolation. MCP server configurations typically stored plaintext secrets for convenience—creating an "open secret" problem where any MCP server configured on a system gained access to:
- The entire filesystem with user permissions
- All environment variables
- Full network access for exfiltration[16]
The permission prompt system offered no protection. By the time Claude asked for permission to use a tool, a malicious server had already had full access to secrets and sensitive data. A researcher demonstrated dumping the Windows SAM file (system password hashes) directly from Claude via a malicious MCP server.[16]
This fundamental architectural flaw meant that the permission dialog provided an illusion of security without delivering actual protection. As Cyata's analysis concluded, MCP servers executed with full user privileges before any permission prompt appeared—rendering the consent mechanism ineffective against malicious server configurations.[16]
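The mitigation for the plaintext-secret half of the problem is to keep secret values out of the config file entirely and resolve them at launch from the environment or an OS keychain. A sketch of that pattern (`resolveSecret` and the config shape are hypothetical):

```javascript
// Sketch: the config file stores a *reference* to a secret, never the
// value. resolveSecret and the config shape are hypothetical.
function resolveSecret(ref, env = process.env) {
  // e.g. "env:GITHUB_TOKEN" -> read GITHUB_TOKEN from the environment.
  if (!ref.startsWith("env:")) throw new Error(`unsupported secret ref: ${ref}`);
  const value = env[ref.slice(4)];
  if (!value) throw new Error(`secret not set: ${ref}`);
  return value;
}

const config = { server: "github-mcp", token: "env:GITHUB_TOKEN" };
const token = resolveSecret(config.token, { GITHUB_TOKEN: "redacted-at-rest" });
console.log(token); // "redacted-at-rest"
```

This narrows exposure (the config file leaks nothing on its own) but does not fix the deeper flaw Cyata identified: a server running with full user privileges can still read the environment it was launched with.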
MCP Client Security Top 10 Risks
| Risk | Description |
|---|---|
| 1. Malicious Server Connection | Fake servers, DNS poisoning, server impersonation |
| 2. Insecure Credential Storage | Plaintext credentials, weak encryption |
| 3. Insufficient Server Validation | No certificate validation, weak trust models |
| 4. Excessive Permission Granting | Overprivileged server access, unnecessary scopes |
| 5. Client-Side Code Execution | Malicious server responses executing code |
| 6. Insecure Communication | Weak TLS, protocol downgrade |
| 7. Dependency Vulnerabilities | Vulnerable third-party libraries |
| 8. Privacy Issues | Inadequate data protections, unauthorized transmission |
| 9. Cross-Platform Issues | Platform-specific vulnerabilities |
| 10. Configuration Security | Exposed config files, mismanagement |
6. Defenses and the Road Ahead
The industry's response to these vulnerabilities has been substantial—but the question remains whether it's sufficient.
6.1 Anthropic's Security Responses
Following the Nx supply-chain attack, Anthropic updated Claude's system prompt on August 29, 2025 to include explicit instructions against assisting with credential discovery or harvesting.[17]
However, this represents a cat-and-mouse security approach: prompts are openly visible to all users, and tools like MintMCP provide real-time visibility into prompt changes—enabling attackers to adapt techniques.[17]
Anthropic also deployed:
- Automated detection: Tailored classifiers for identifying credential harvesting, malware development, data exfiltration, and fraud
- Account banning: Immediate suspension of detected accounts with technical indicators shared with law enforcement
- Permission modes: Enhanced granular controls for tool access
- Best practices guidance: Official documentation on safe usage patterns[6]
6.2 Safe Usage Best Practices
Anthropic's engineering team recommended several defensive practices:
Curating Tool Allowlists
| Practice | Description |
|---|---|
| Customize tool access | Specify which tools Claude is allowed to invoke |
| Use permission commands | Leverage /permissions to whitelist tools |
| Prioritize reversible tools | Prefer tools easy to undo (file edits, git commits) |
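In practice, the allowlist lives in Claude Code's settings file. A sketch of a project-level `.claude/settings.json` based on Anthropic's published permission-rule syntax (the specific rules shown are illustrative; verify the exact patterns against current documentation):

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run lint)",
      "Bash(npm run test:*)",
      "Read(src/**)"
    ],
    "deny": [
      "Bash(curl:*)",
      "Read(.env)"
    ]
  }
}
```

Checking a file like this into the repository gives the whole team the same guardrails, rather than relying on each developer's interactive `/permissions` choices.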
Enterprise Governance Checklist (2025)
| Area | Recommendation |
|---|---|
| Identity and access | Enforce SSO; map roles to least-privilege MCP/tool access |
| Spend and usage | Set per-user and org-level spend caps; rate limiting |
| Security/compliance | Catalog plugin permissions; map data flows; conduct routine permission reviews |
For the --dangerously-skip-permissions mode, Anthropic's guidance emphasized:
- Use for low-risk workflows only (linting, boilerplate generation)
- Execute within containerized environments without internet access
- Avoid in production or connected environments[6]
6.3 Academic and Industry Frameworks
The OWASP Foundation published the "Agentic AI Top 10" in November 2025, documenting real-world attack patterns:[10]
OWASP Agentic AI Top 10 (2025)
| ID | Attack Vector |
|---|---|
| ASI01 | Prompt Injection (direct and indirect) |
| ASI02 | Insufficient Input/Output Validation |
| ASI03 | Training Data Poisoning |
| ASI04 | Insecure Tool Integration |
| ASI05 | Unexpected Code Execution |
| ASI06 | Tool Shadowing and Shadow Agents |
| ASI07 | Model Supply Chain Compromise |
| ASI08 | Insufficient Logging and Monitoring |
| ASI09 | Excessive Agency |
| ASI10 | Unsafe Tool Defaults |
Meanwhile, researchers at Unit 42 (Palo Alto Networks) documented nine attack scenarios using open-source agent frameworks—from prompt injection triggering unintended behavior to internal network access via compromised execution environments.[18]
Emerging research like the Lexo project demonstrated that LLM-assisted code regeneration could eliminate malicious behavior in compromised packages—evaluated on 100+ real-world packages including famous supply-chain attacks.[19]
6.4 Timeline of Major Events
| Date | Event | Severity |
|---|---|---|
| June 2025 | CVE-2025-52882 (Claude Code VS Code RCE) disclosed; patched v1.0.24+ | Critical |
| July 2025 | CVE-2025-49596 (MCP Inspector RCE, CVSS 9.4) disclosed; 560+ exposed instances | Critical |
| July 2025 | Cursor malware incident: $500K cryptocurrency theft via extension | Critical |
| July 3, 2025 | Koi Security reports prompt injection in 3 official Anthropic extensions (HackerOne) | High |
| July 2025 | Cyata publishes MCP credential exposure crisis research | High |
| Aug 3, 2025 | CVE-2025-53109/53110 (Filesystem MCP sandbox escapes) and CVE-2025-54795 (command injection) | High |
| Aug 4, 2025 | CVE-2025-54136 (Cursor MCPoison) disclosed | High |
| Aug 25, 2025 | Anthropic publishes threat intelligence report on AI misuse | N/A |
| Aug 26-27, 2025 | Nx supply-chain attack: 8 malicious versions published, live 5+ hours | Critical |
| Aug 29, 2025 | Anthropic updates system prompts to prevent credential discovery | N/A |
| Sep 1, 2025 | Root cause analysis: Nx attack via Claude Code-generated flawed GitHub Actions | Critical |
| Sep 19, 2025 | Anthropic's official extensions patched (v0.1.9) for prompt injection | High |
| Sep 24, 2025 | CVE-2025-59828 (Claude Code Yarn plugin RCE, CVSS 7.7/9.8) disclosed | Critical |
| Oct 19, 2025 | Two major AI-enabled npm credential theft attacks reported | Critical |
| Oct 21, 2025 | CVE-2025-49596 detailed analysis published | Critical |
| Nov 3, 2025 | Claude Code VS Code supports Bypass Permission Mode | N/A |
| Nov 4, 2025 | Koi Security full report: 3 official extension vulnerabilities (CVSS 8.9) | High |
| Nov 11, 2025 | GitHub Copilot CVE-2025-62449 and CVE-2025-62453 disclosed | High |
| Nov 21-23, 2025 | Shai-Hulud 2.0: 800+ npm packages compromised; 25,000+ repos exposed | Critical |
| Nov 30, 2025 | OWASP Agentic AI Top 10 published with real-world attack analysis | N/A |
| Dec 15, 2025 | Fortune: "AI coding tools exploded in 2025. The first security exploits..." | N/A |
| Jan 5, 2026 | SentinelOne: Malicious plugin dependency attack disclosed | High |
| Jan 5, 2026 | Koi blocks Cursor, Windsurf, Google Antigravity from malware recommendations | High |
7. Conclusion: The Canary in the Coal Mine
The Silent Siege of 2025 exposed a fundamental truth about our new AI-assisted development paradigm: the tools we built to accelerate software development also accelerated its vulnerabilities. This wasn't because Claude Code or Cursor or Copilot were poorly designed—it was because they were designed for speed in an ecosystem that wasn't designed for security.
The vulnerabilities documented here—the CVEs, the supply-chain attacks, the extortion rings—are symptoms of a deeper structural issue. We're integrating AI agents into software development faster than we're developing the security frameworks to contain them. The permission models, the sandbox isolation, the supply-chain verification: these are afterthoughts, not foundations.
"In the race to build god-like AI, we've forgotten to lock the gates."
But there are reasons for optimism. Anthropic's rapid response to the Nx attack. The OWASP Agentic AI Top 10 providing a common language for threats. The research into AI-assisted vulnerability detection. The security-first approach adopted by some competitors.
The question for 2026 and beyond isn't whether AI coding tools will continue to transform software development—they will. The question is whether we'll build the security infrastructure to match the speed of innovation.
If AI is the future of code, then securing the code that secures AI must be our present priority.
🔗 Related Resources
This article accompanies the Agentic IDE Security educational repository, which provides deeper technical analysis, vulnerability reproduction steps (safe), and defensive configurations for Claude and other tools.
- GitHub Repository
- Agentic Security Research (Research Companion Site)
Footnotes
- https://snyk.io/blog/weaponizing-ai-coding-agents-for-malware-in-the-nx-malicious-package/
- https://ridgesecurity.ai/blog/browser-to-backdoor-cve-2025-49596-turned-anthropics-mcp-developer-tools-into-attack-vectors/
- https://www.anthropic.com/engineering/claude-code-best-practices
- https://www.infosecurity-magazine.com/news/claude-desktop-extensions-prompt/
- https://www.wiz.io/blog/shai-hulud-2-0-ongoing-supply-chain-attack
- https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/slopsquatting-when-ai-agents-hallucinate-malicious-packages
- https://simplysecuregroup.com/the-real-world-attacks-behind-owasp-agentic-ai-top-10/
- https://ieeexplore.ieee.org/document/11236532/
- https://snyk.io/blog/cursor-ide-malware-extension-compromise-in-usd500k-crypto-heist/
- https://www.anthropic.com/news/detecting-countering-misuse-aug-2025
- https://www.mintmcp.com/blog/cursor-security
- https://www.mintmcp.com/blog/github-copilot-security-risks
- https://cyata.ai/blog/whispering-secrets-loudly-inside-mcps-quiet-crisis-of-credential-exposure/
- https://www.linkedin.com/posts/jngiam_right-after-the-nx-malware-security-issue-activity-7376669474206375936-K__I
- https://unit42.paloaltonetworks.com/agentic-ai-threats/
- https://www.ntousakis.com/lexo-arxiv.pdf
- https://securitylabs.datadoghq.com/articles/claude-mcp-cve-2025-52882/
- https://cymulate.com/blog/cve-2025-53109-53110-escaperoute-anthropic/
- https://www.securityweek.com/hackers-target-popular-nx-build-system-in-first-ai-weaponized-supply-chain-attack/
- https://www.anthropic.com/news/model-context-protocol
- https://thenewstack.io/anthropics-claude-code-gets-support-for-remote-mcp-servers/
- https://claude.com/blog/claude-code-plugins
- https://github.com/anthropics/claude-code/security/advisories/GHSA-2jjv-qf24-vfm4