Skip to content

Secure IT

Stay Secure. Stay Informed.

Primary Menu
  • Home
  • Sources
    • Krebs On Security
    • Security Week
    • The Hacker News
    • Schneier On Security
  • Home
  • The Hacker News
  • Cursor AI Code Editor Vulnerability Enables RCE via Malicious MCP File Swaps Post Approval
  • The Hacker News

Cursor AI Code Editor Vulnerability Enables RCE via Malicious MCP File Swaps Post Approval

[email protected] The Hacker News Published: August 5, 2025 | Updated: August 5, 2025 4 min read
0 views

Aug 05, 2025Ravie LakshmananAI Security / MCP Protocol

Cursor AI Code Editor Vulnerability

Cybersecurity researchers have disclosed a high-severity security flaw in the artificial intelligence (AI)-powered code editor Cursor that could result in remote code execution.

The vulnerability, tracked as CVE-2025-54136 (CVSS score: 7.2), has been codenamed MCPoison by Check Point Research, owing to the fact that it exploits a quirk in the way the software handles modifications to Model Context Protocol (MCP) server configurations.

“A vulnerability in Cursor AI allows an attacker to achieve remote and persistent code execution by modifying an already trusted MCP configuration file inside a shared GitHub repository or editing the file locally on the target’s machine,” Cursor said in an advisory released last week.

“Once a collaborator accepts a harmless MCP, the attacker can silently swap it for a malicious command (e.g., calc.exe) without triggering any warning or re-prompt.”

MCP is an open-standard developed by Anthropic that allows large language models (LLMs) to interact with external tools, data, and services in a standardized manner. It was introduced by the AI company in November 2024.

CVE-2025-54136, per Check Point, has to do with how it’s possible for an attacker to alter the behavior of an MCP configuration after a user has approved it within Cursor. Specifically, it unfolds as follows –

  • Add a benign-looking MCP configuration (“.cursor/rules/mcp.json”) to a shared repository
  • Wait for the victim to pull the code and approve it once in Cursor
  • Replace the MCP configuration with a malicious payload, e.g., launch a script or run a backdoor
  • Achieve persistent code execution every time the victim opens the Cursor

The fundamental problem here is that once a configuration is approved, it’s trusted by Cursor indefinitely for future runs, even if it has been changed. Successful exploitation of the vulnerability not only exposes organizations to supply chain risks, but also opens the door to data and intellectual property theft without their knowledge.

Following responsible disclosure on July 16, 2025, the issue has been addressed by Cursor in version 1.3 released late July 2025 by requiring user approval every time an entry in the MCP configuration file is modified.

Cybersecurity

“The flaw exposes a critical weakness in the trust model behind AI-assisted development environments, raising the stakes for teams integrating LLMs and automation into their workflows,” Check Point said.

The development comes days after Aim Labs, Backslash Security, and HiddenLayer exposed multiple weaknesses in the AI tool that could have been abused to obtain remote code execution and bypass its denylist-based protections. They have also been patched in version 1.3.

The findings also coincide with the growing adoption of AI in business workflows, including using LLMs for code generation, broadening the attack surface to various emerging risks like AI supply chain attacks, unsafe code, model poisoning, prompt injection, hallucinations, inappropriate responses, and data leakage –

  • A test of over 100 LLMs for their ability to write Java, Python, C#, and JavaScript code has found that 45% of the generated code samples failed security tests and introduced OWASP Top 10 security vulnerabilities. Java led with a 72% security failure rate, followed by C# (45%), JavaScript (43%), and Python (38%).
  • An attack called LegalPwn has revealed that it’s possible to leverage legal disclaimers, terms of service, or privacy policies as a novel prompt injection vector, highlighting how malicious instructions can be embedded within legitimate, but often overlooked, textual components to trigger unintended behavior in LLMs, such as misclassifying malicious code as safe and offering unsafe code suggestions that can execute a reverse shell on the developer’s system.
  • An attack called man-in-the-prompt that employs a rogue browser extension with no special permissions to open a new browser tab in the background, launch an AI chatbot, and inject them with malicious prompts to covertly extract data and compromise model integrity. This takes advantage of the fact that any browser add-on with scripting access to the Document Object Model (DOM) can read from, or write to, the AI prompt directly.
  • A jailbreak technique called Fallacy Failure that manipulates an LLM into accepting logically invalid premises and causes it to produce otherwise restricted outputs, thereby deceiving the model into breaking its own rules.
  • An attack called MAS hijacking that manipulates the control flow of a multi-agent system (MAS) to execute arbitrary malicious code across domains, mediums, and topologies by weaponizing the agentic nature of AI systems.
  • A technique called Poisoned GPT-Generated Unified Format (GGUF) Templates that targets the AI model inference pipeline by embedding malicious instructions within the chat template files that execute during the inference phase to compromise outputs. By positioning the attack between input validation and model output, the approach is both sneaky and bypasses AI guardrails. With GGUF files distributed via services like Hugging Face, the attack exploits the supply chain trust model to trigger the attack.
  • An attacker can target the machine learning (ML) training environments like MLFlow, Amazon SageMaker, and Azure ML to compromise the confidentiality, integrity and availability of the models, ultimately leading to lateral movement, privilege escalation, as well as training data and model theft and poisoning.
  • A study by Anthropic has uncovered that LLMs can learn hidden characteristics during distillation, a phenomenon called subliminal learning, that causes models to transmit behavioral traits through generated data that appears completely unrelated to those traits, potentially leading to misalignment and harmful behavior.
Identity Security Risk Assessment

“As Large Language Models become deeply embedded in agent workflows, enterprise copilots, and developer tools, the risk posed by these jailbreaks escalates significantly,” Pillar Security’s Dor Sarig said. “Modern jailbreaks can propagate through contextual chains, infecting one AI component and leading to cascading logic failures across interconnected systems.”

“These attacks highlight that AI security requires a new paradigm, as they bypass traditional safeguards without relying on architectural flaws or CVEs. The vulnerability lies in the very language and reasoning the model is designed to emulate.”

About The Author

[email protected] The Hacker News

See author's posts

Original post here

What do you feel about this?

  • The Hacker News

Post navigation

Previous: Misconfigurations Are Not Vulnerabilities: The Costly Confusion Behind Security Risks
Next: Google’s August Patch Fixes Two Qualcomm Vulnerabilities Exploited in the Wild

Author's Other Posts

$13.74M Hack Shuts Down Sanctioned Grinex Exchange After Intelligence Claims grinex.jpg

$13.74M Hack Shuts Down Sanctioned Grinex Exchange After Intelligence Claims

April 19, 2026 0 0
Mirai Variant Nexcorium Exploits CVE-2024-3721 to Hijack TBK DVRs for DDoS Botnet botnet-ddos.jpg

Mirai Variant Nexcorium Exploits CVE-2024-3721 to Hijack TBK DVRs for DDoS Botnet

April 19, 2026 0 0
Three Microsoft Defender Zero-Days Actively Exploited; Two Still Unpatched defender.jpg

Three Microsoft Defender Zero-Days Actively Exploited; Two Still Unpatched

April 19, 2026 0 0
Google Blocks 8.3B Policy-Violating Ads in 2025, Launches Android 17 Privacy Overhaul google-ads-android.jpg

Google Blocks 8.3B Policy-Violating Ads in 2025, Launches Android 17 Privacy Overhaul

April 19, 2026 0 0

Related Stories

grinex.jpg
  • The Hacker News

$13.74M Hack Shuts Down Sanctioned Grinex Exchange After Intelligence Claims

[email protected] The Hacker News April 19, 2026 0 0
botnet-ddos.jpg
  • The Hacker News

Mirai Variant Nexcorium Exploits CVE-2024-3721 to Hijack TBK DVRs for DDoS Botnet

[email protected] The Hacker News April 19, 2026 0 0
defender.jpg
  • The Hacker News

Three Microsoft Defender Zero-Days Actively Exploited; Two Still Unpatched

[email protected] The Hacker News April 19, 2026 0 0
google-ads-android.jpg
  • The Hacker News

Google Blocks 8.3B Policy-Violating Ads in 2025, Launches Android 17 Privacy Overhaul

[email protected] The Hacker News April 19, 2026 0 0
nist-cve.jpg
  • The Hacker News

NIST Limits CVE Enrichment After 263% Surge in Vulnerability Submissions

[email protected] The Hacker News April 17, 2026 0 1
europol.jpg
  • The Hacker News

Operation PowerOFF Seizes 53 DDoS Domains, Exposes 3 Million Criminal Accounts

[email protected] The Hacker News April 17, 2026 0 0

Trending Now

$13.74M Hack Shuts Down Sanctioned Grinex Exchange After Intelligence Claims grinex.jpg 1

$13.74M Hack Shuts Down Sanctioned Grinex Exchange After Intelligence Claims

April 19, 2026 0 0
Mirai Variant Nexcorium Exploits CVE-2024-3721 to Hijack TBK DVRs for DDoS Botnet botnet-ddos.jpg 2

Mirai Variant Nexcorium Exploits CVE-2024-3721 to Hijack TBK DVRs for DDoS Botnet

April 19, 2026 0 0
Three Microsoft Defender Zero-Days Actively Exploited; Two Still Unpatched defender.jpg 3

Three Microsoft Defender Zero-Days Actively Exploited; Two Still Unpatched

April 19, 2026 0 0
Google Blocks 8.3B Policy-Violating Ads in 2025, Launches Android 17 Privacy Overhaul google-ads-android.jpg 4

Google Blocks 8.3B Policy-Violating Ads in 2025, Launches Android 17 Privacy Overhaul

April 19, 2026 0 0

Connect with Us

Social menu is not set. You need to create menu and assign it to Social Menu on Menu Settings.

Trending News

$13.74M Hack Shuts Down Sanctioned Grinex Exchange After Intelligence Claims grinex.jpg 1
  • The Hacker News

$13.74M Hack Shuts Down Sanctioned Grinex Exchange After Intelligence Claims

April 19, 2026 0 0
Mirai Variant Nexcorium Exploits CVE-2024-3721 to Hijack TBK DVRs for DDoS Botnet botnet-ddos.jpg 2
  • The Hacker News

Mirai Variant Nexcorium Exploits CVE-2024-3721 to Hijack TBK DVRs for DDoS Botnet

April 19, 2026 0 0
Three Microsoft Defender Zero-Days Actively Exploited; Two Still Unpatched defender.jpg 3
  • The Hacker News

Three Microsoft Defender Zero-Days Actively Exploited; Two Still Unpatched

April 19, 2026 0 0
Google Blocks 8.3B Policy-Violating Ads in 2025, Launches Android 17 Privacy Overhaul google-ads-android.jpg 4
  • The Hacker News

Google Blocks 8.3B Policy-Violating Ads in 2025, Launches Android 17 Privacy Overhaul

April 19, 2026 0 0
NIST Limits CVE Enrichment After 263% Surge in Vulnerability Submissions nist-cve.jpg 5
  • The Hacker News

NIST Limits CVE Enrichment After 263% Surge in Vulnerability Submissions

April 17, 2026 0 1
Operation PowerOFF Seizes 53 DDoS Domains, Exposes 3 Million Criminal Accounts europol.jpg 6
  • The Hacker News

Operation PowerOFF Seizes 53 DDoS Domains, Exposes 3 Million Criminal Accounts

April 17, 2026 0 0
Apache ActiveMQ CVE-2026-34197 Added to CISA KEV Amid Active Exploitation apachemq.jpg 7
  • The Hacker News

Apache ActiveMQ CVE-2026-34197 Added to CISA KEV Amid Active Exploitation

April 17, 2026 0 0

You may have missed

grinex.jpg
  • The Hacker News

$13.74M Hack Shuts Down Sanctioned Grinex Exchange After Intelligence Claims

[email protected] The Hacker News April 19, 2026 0 0
botnet-ddos.jpg
  • The Hacker News

Mirai Variant Nexcorium Exploits CVE-2024-3721 to Hijack TBK DVRs for DDoS Botnet

[email protected] The Hacker News April 19, 2026 0 0
defender.jpg
  • The Hacker News

Three Microsoft Defender Zero-Days Actively Exploited; Two Still Unpatched

[email protected] The Hacker News April 19, 2026 0 0
google-ads-android.jpg
  • The Hacker News

Google Blocks 8.3B Policy-Violating Ads in 2025, Launches Android 17 Privacy Overhaul

[email protected] The Hacker News April 19, 2026 0 0
Copyright © 2026 All rights reserved. | MoreNews by AF themes.