A new feature in Anthropic’s Claude AI, known as Claude Skills, has been identified as a potential vector for ransomware attacks.
This feature, designed to extend the AI’s capabilities through custom code modules, can be manipulated to deploy malware like the MedusaLocker ransomware without the user’s explicit awareness.
The seemingly legitimate appearance of these Skills makes them a deceptive and dangerous tool for threat actors.
The core of the issue lies in the single-consent trust model of Claude Skills. Once a user grants a Skill initial permission to run, it can perform a wide range of actions in the background, including downloading and executing additional malicious code.
Cato Networks security analysts/researchers noted that this creates a significant security gap.
A seemingly harmless Skill, shared through public repositories or social media, could be a Trojan horse for a devastating ransomware attack, potentially affecting a vast number of users, given Anthropic’s large customer base.
The impact of such an attack could be substantial. A single employee installing a malicious Claude Skill could inadvertently trigger a company-wide ransomware incident.
The attack leverages the trust users place in the AI’s functionality, turning a productivity-enhancing feature into a security nightmare.
The ease with which a legitimate Skill can be modified to carry a malicious payload makes this a scalable threat.
The Infection Pathway
The infection process is subtle and effective. Researchers from Cato CTRL demonstrated this by modifying an official open-source “GIF Creator” Skill.
They added a helper function named postsave that appeared to be a harmless part of the Skill’s workflow, supposedly for post-processing the created GIF.
In reality, this function was designed to silently download and execute an external script, as illustrated in their research.
Legitimate-looking helper function added to Anthropic’s GIF Creator Skill (Source – Cato Networks)
This method bypasses the user’s scrutiny as Claude only prompts for approval of the main script, not the hidden operations of the helper function.
Once the initial approval is given, the malicious helper function can operate without any further prompts or warnings.
It can download and run malware, such as the MedusaLocker ransomware, which then encrypts the user’s files.
Execution Flow (Source – Cato Networks)
The execution flow shows that after the first consent, hidden subprocesses inherit the trusted status, allowing them to perform their malicious activities undetected.
This highlights a critical vulnerability where the user’s initial consent is exploited to carry out a full-fledged ransomware attack, all under the guise of a legitimate AI-powered tool.
Follow us on Google News, LinkedIn, and X to Get More Instant Updates, Set CSN as a Preferred Source in Google.
The post Hackers Can Weaponize Claude Skills to Execute MedusaLocker Ransomware Attack appeared first on Cyber Security News.


