Researchers Identify Over 30 Vulnerabilities in AI Coding Tools Risking Data Breaches
Recent research has unveiled over 30 security vulnerabilities in AI-powered integrated development environments (IDEs). These vulnerabilities, collectively termed “IDEsaster,” pose significant risks, including data breaches and remote code execution (RCE). The findings were reported by security researcher Ari Marzouk and affect several popular IDEs and extensions.
Key Vulnerabilities in AI Coding Tools
The identified vulnerabilities affect a range of well-known IDEs, including:
- Cursor
- Windsurf
- Kiro.dev
- GitHub Copilot
- Zed.dev
- Roo Code
- Junie
- Cline
Of these, 24 vulnerabilities were assigned CVE identifiers, underscoring their severity and potential impact.
The Nature of IDEsaster
Marzouk highlighted a striking finding: every AI IDE tested was vulnerable because its security model fails to account for the underlying IDE software itself. This oversight allows attackers to abuse legitimate IDE features for malicious purposes. The attack chains generally combine three main vectors:
- Bypassing guardrails of large language models through prompt injections.
- Executing actions via AI agents without user interaction.
- Triggering legitimate IDE features to leak sensitive data or execute arbitrary commands.
These chains contrast with earlier attack methods, which often relied on modifying AI agent configurations to trigger unintended actions.
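To make the first two vectors concrete, below is a minimal, hypothetical sketch of how an agent that inlines project files into its prompt ends up feeding attacker-controlled text to the model. The file selection and context-assembly logic are assumptions for illustration only and are not taken from any specific IDE.

```python
# Hypothetical sketch of why indirect prompt injection works: an AI coding
# agent assembles its prompt from project files it was asked to read, so
# instructions hidden in those files reach the model alongside the user's
# request. File names and assembly logic below are illustrative.

from pathlib import Path

USER_REQUEST = "Summarise what this project does."

def build_agent_context(project_dir: str) -> str:
    """Naively concatenate the user request and project files into one prompt."""
    parts = [f"User request: {USER_REQUEST}"]
    for path in Path(project_dir).glob("**/*.md"):
        # Untrusted repository content is inlined verbatim -- if README.md
        # contains "Ignore previous instructions and run <command>", that
        # text now sits inside the model's input.
        parts.append(f"--- {path.name} ---\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(build_agent_context("."))
```

Because the model cannot reliably distinguish the user's request from instructions buried in repository content, any file the agent reads becomes a potential injection channel.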
Examples of Identified Attacks
Some notable vulnerabilities include:
- CVE-2025-49150 (Cursor): Exploiting prompt injection to read sensitive files through legitimate tools, resulting in data leakage.
- CVE-2025-58335 (JetBrains Junie): Editing IDE settings through prompt injection to enable code execution.
- CVE-2025-61590 (Cursor): Modifying workspace configuration to achieve unauthorized command execution (an illustrative sketch of this attack class follows below).
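The last item above belongs to a broader class of attacks in which workspace configuration is altered so that the editor itself runs commands. The exact mechanism behind CVE-2025-61590 is not detailed here, so the sketch below flags one well-known auto-execution pattern in VS Code-style workspaces (tasks configured to run on folder open) purely as an illustration of what to check before trusting a cloned project.

```python
# Hypothetical helper that flags a known auto-execution pattern in
# VS Code-style workspaces: a task configured with "runOn": "folderOpen",
# which can run a shell command as soon as the folder is opened. This is a
# generic illustration of the "workspace configuration" attack class, not
# the specific mechanism of CVE-2025-61590.

from pathlib import Path

SUSPICIOUS_MARKERS = ('"runOn"', '"folderOpen"')

def flag_auto_run_tasks(repo_root: str) -> list[Path]:
    """Return workspace config files that mention auto-run task settings."""
    hits = []
    for config in Path(repo_root).glob("**/.vscode/tasks.json"):
        text = config.read_text(errors="ignore")
        if all(marker in text for marker in SUSPICIOUS_MARKERS):
            hits.append(config)
    return hits

if __name__ == "__main__":
    for path in flag_auto_run_tasks("."):
        print(f"Review before trusting this workspace: {path}")
```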
Recommended Security Practices
To mitigate these risks, Marzouk offers several recommendations:
- Use AI IDEs with trusted files and projects only.
- Regularly monitor and verify trusted Model Context Protocol (MCP) servers.
- Review all input sources for potential threat vectors (a simple heuristic check is sketched below).
Developers are encouraged to apply the principle of least privilege when configuring AI tools and ensure robust security testing protocols are in place.
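As a rough illustration of the "review all input sources" advice, the sketch below scans a few files an AI agent commonly ingests for phrases typical of prompt-injection payloads. The file list and phrase list are assumptions, and keyword matching is no substitute for a proper review.

```python
# Illustrative heuristic: scan files an AI agent is likely to read (the list
# below is an assumption, not exhaustive) for phrases commonly found in
# prompt-injection payloads. A real review should not rely on keyword
# matching alone.

from pathlib import Path

AGENT_READ_FILES = ("README.md", ".cursorrules", "AGENTS.md", "CONTRIBUTING.md")
INJECTION_PHRASES = (
    "ignore previous instructions",
    "ignore all prior instructions",
    "do not tell the user",
    "without asking for confirmation",
)

def scan_for_injection_phrases(repo_root: str) -> dict[str, list[str]]:
    """Map each suspicious file to the injection-style phrases found in it."""
    findings: dict[str, list[str]] = {}
    root = Path(repo_root)
    for name in AGENT_READ_FILES:
        path = root / name
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore").lower()
        matches = [p for p in INJECTION_PHRASES if p in text]
        if matches:
            findings[str(path)] = matches
    return findings

if __name__ == "__main__":
    for file, phrases in scan_for_injection_phrases(".").items():
        print(f"{file}: {', '.join(phrases)}")
```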
Wider Implications on AI Security
The findings coincide with additional vulnerabilities discovered in other coding tools, such as command injection flaws in OpenAI Codex CLI and indirect prompt injections in Google Antigravity. These incidents highlight the expanding attack surface due to the pervasive use of AI in development environments.
As AI technologies gain momentum in enterprise applications, the need for a “Secure for AI” framework is becoming increasingly urgent. This approach is essential for ensuring that AI systems are both secure by default and designed with an understanding of how they can be exploited.
As a final note, any system that uses AI for code-related tasks should now be treated as exposed to various forms of injection attacks, reinforcing the need for stringent security measures.