In 2025, threat actors turned widely used artificial intelligence tools into weapons for launching fast, precise network intrusions.
CrowdStrike’s 2026 Global Threat Report found an 89% year-over-year increase in attacks by AI-enabled adversaries, as criminals used automation and machine-generated scripts to cut the time between initial entry and full domain access to under 30 minutes.
The speed of intrusion became the most defining feature of 2025’s threat landscape.
The average eCrime breakout time — the interval between gaining initial access and moving laterally to other systems — fell to 29 minutes, 65% faster than in 2024.
The fastest recorded breakout took only 27 seconds. In one documented case, data exfiltration began within just four minutes of first access, leaving organizations almost no time to act.
CrowdStrike analysts noted that the methods behind this acceleration were deeply tied to AI abuse. Adversaries were not only building custom malware — they were injecting malicious prompts into legitimate AI tools running inside victim environments.
In August 2025, attackers embedded malicious JavaScript into Node Package Manager (npm) packages, hijacking victims’ local AI tools such as Claude and Gemini to steal authentication credentials and cryptocurrency assets.
CrowdStrike Services and OverWatch responded to more than 90 impacted customers.
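Incidents like the npm compromise are one reason the report's guidance stresses dependency auditing. As a minimal sketch of that practice, the function below checks an npm v2/v3 `package-lock.json` against a blocklist of compromised package versions; the package name and version in the blocklist are placeholders, not the actual packages from the August 2025 campaign:

```python
import json

# Hypothetical blocklist; in practice this would come from vendor
# advisories and threat-intel feeds, not be hardcoded.
COMPROMISED = {
    "example-hijacked-pkg": {"1.2.3"},  # placeholder name/version
}

def audit_lockfile(lock: dict) -> list[str]:
    """Return 'name@version' strings for dependencies on the blocklist.

    Expects the npm v2/v3 lockfile layout, where installed packages
    appear under the top-level "packages" key as
    "node_modules/<name>" entries.
    """
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # The package name is everything after the last "node_modules/".
        name = path.rpartition("node_modules/")[2]
        version = meta.get("version")
        if version in COMPROMISED.get(name, set()):
            hits.append(f"{name}@{version}")
    return hits

if __name__ == "__main__":
    # Demo with an inline lockfile fragment instead of reading from disk.
    demo = {"packages": {
        "node_modules/example-hijacked-pkg": {"version": "1.2.3"},
    }}
    print(audit_lockfile(demo))
```

Running a check like this in CI, alongside `npm audit`, shortens the window in which a poisoned dependency can sit in a build.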
A notable case involved CHATTY SPIDER, an eCrime adversary that targeted a U.S.-based law firm through voice phishing. The group convinced an employee to grant remote access via Microsoft Quick Assist.
Within four minutes, CHATTY SPIDER tried to send stolen files to attacker-controlled infrastructure using WinSCP.
CHATTY SPIDER began exfiltrating data within four minutes (Source: CrowdStrike)
When the firewall blocked it, the attacker pivoted to Google Drive. CrowdStrike OverWatch stopped the exfiltration before any data left the network.
Beyond individual operations, threat actors like FAMOUS CHOLLIMA built AI-assisted attack pipelines across multiple phases.
They used tools including ChatGPT, Gemini, GitHub Copilot, and VSCodium to create fake personas, manage multiple accounts, and perform technical job tasks while operating under fraudulent identities.
Their 2025 activity doubled compared to 2024, reflecting how AI lowered the effort required to run large-scale deceptive operations.
How Threat Actors Weaponize AI Across the Kill Chain
PUNK SPIDER, the most active ransomware adversary in 2025 with 198 documented intrusions, used Gemini-generated scripts to dump credentials from Veeam Backup & Replication databases and likely relied on DeepSeek-generated scripts to terminate services and destroy forensic evidence.
AI threats across the kill chain, 2024 vs. 2025 (Source: CrowdStrike)
Russia-nexus actor FANCY BEAR deployed LAMEHUG malware, which queried the Hugging Face LLM Qwen2.5-Coder-32B-Instruct through hardcoded prompts to perform reconnaissance and collect documents before exfiltration.
This replaced rigid code logic with AI-generated outputs, evading static security tools. Notably, 82% of all 2025 detections were malware-free, meaning most attacks moved through authorized pathways rather than traditional malicious software.
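The pattern CrowdStrike describes — a prompt fixed at build time, with behavior supplied by model output rather than shipped code — can be illustrated benignly. In this sketch, the endpoint URL and payload shape are assumptions based on the public Hugging Face Inference API, the prompt text is invented, and nothing is actually sent:

```python
import json
import urllib.request

MODEL = "Qwen/Qwen2.5-Coder-32B-Instruct"  # model named in the report
# Assumed endpoint, following the public Hugging Face Inference API layout.
ENDPOINT = f"https://api-inference.huggingface.co/models/{MODEL}"

# The prompt is baked into the binary -- the trait that lets the program's
# behavior come from model output instead of static code logic.
HARDCODED_PROMPT = (
    "List commands to enumerate hardware, OS version, and running services."
)

def build_request(prompt: str, token: str) -> urllib.request.Request:
    """Assemble (but do not send) an inference call with a fixed prompt."""
    body = json.dumps({"inputs": prompt}).encode()
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
```

Because the request looks like ordinary developer traffic to a legitimate AI platform, signature-based controls have little to match on; the detectable artifact is the context — which process is making the call.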
Organizations should monitor AI tool usage on endpoints, patch AI platforms promptly, audit npm dependencies, and maintain cross-domain visibility across identity, cloud, and SaaS environments to detect fast-moving intrusions before breakout.
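One way to operationalize that monitoring is to correlate process-to-destination telemetry (from EDR or netflow exports) against known LLM API hosts. A minimal sketch; the host watchlist and the allowlist of expected client process names are illustrative assumptions, not a vetted ruleset:

```python
# Illustrative watchlist of LLM API hosts; a real deployment would
# maintain this from threat intel rather than hardcode it.
LLM_API_HOSTS = {
    "api-inference.huggingface.co",
    "generativelanguage.googleapis.com",
    "api.openai.com",
}

# Assumed allowlist of processes expected to talk to LLM APIs.
EXPECTED_CLIENTS = {"claude", "gemini-cli", "code"}

def flag_unexpected(connections):
    """connections: iterable of (process_name, destination_host) pairs.

    Returns the pairs where an unexpected process talks to an LLM API
    endpoint -- the LAMEHUG-style signal: model queries coming from a
    binary that has no business making them.
    """
    return [
        (proc, host)
        for proc, host in connections
        if host in LLM_API_HOSTS and proc.lower() not in EXPECTED_CLIENTS
    ]
```

A rule like this generates candidates for triage, not verdicts; legitimate scripts also call these APIs, so the value comes from joining the hits with identity and endpoint context.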
The post Threat Actors Weaponized AI Tools to Gain Full Domain Access within 30 Minutes appeared first on Cyber Security News.