Threat Actors Use AI to Automate 0-Day Discovery and Exploitation at Machine Speed

The way cyberattacks are launched has fundamentally changed. Threat actors are no longer spending months hunting for software flaws by hand.

With artificial intelligence in their toolkit, they can now discover and exploit zero-day vulnerabilities in minutes, placing organizations across every sector at serious risk.

For years, finding a zero-day required deep technical skill, long research cycles, and heavy resources.

Only well-funded nation-state groups or elite crews could do it consistently. That barrier no longer holds.

AI has made zero-day discovery faster, cheaper, and accessible to a wider range of attackers, including those without coding knowledge.

An attacker today gives an AI model a target, and the model independently scans the network, hunts for weaknesses, attempts exploits, and switches paths when one fails.

Through standards like the Model Context Protocol, AI agents connect to real environments and execute full attack chains with minimal human input.

Actor activity monitored by Cyberthint indicates that discovering zero-days is no longer a specialized task that takes months; it has become a process that can be automated in minutes.

Cyberthint analysts and researchers identified this structural shift in late 2024, noting that AI is now operating not just as an assistant but as an active attacker. Tasks once requiring a ten-person red team for weeks now take just hours.

In February 2025, MITRE expanded its ATT&CK framework to cover AI-orchestrated operations, confirming that this threat category has matured into a serious industry-wide concern.

AI-Driven Espionage and the GAMECHANGE Campaign

The most striking case study in this space is GAMECHANGE, the first documented instance of AI-orchestrated espionage.

Identified in mid-September 2024 and assessed with high confidence as a Chinese state-backed operation, GAMECHANGE targeted roughly 70 global entities including technology companies, financial institutions, and government agencies, with four organizations successfully compromised.

The malware was written in Python, compiled into a Windows PE file using PyInstaller, and delivered from compromised email accounts impersonating Ukrainian ministry representatives.

GTG-1002’s AI-orchestrated espionage (Source – Cyberthint)

What set GAMECHANGE apart was that its instructions were not hardcoded into the binary. Instead, it queried Alibaba’s Qwen-Coder model via the Hugging Face API to generate commands for execution in real time.

It embedded unique API tokens to resist blacklisting, collected hardware, process, network, and Active Directory data, and recursively copied Office documents and PDFs.

MITRE’s Black Hat analysis described GAMECHANGE as a pilot program testing LLM capabilities before broader deployment.

Fake Ukrainian ministry representatives (Source – Cyberthint)

Two other experimental AI-powered malware families were also documented. MalTerminal, the earliest known LLM-enabled malware, which generates its malicious payloads at runtime, was presented by SentinelLABS at LABScon 2024.

When run, it offered a choice between ransomware or a reverse shell, sent requests to a GPT-4 endpoint, and generated encryption and exfiltration code in memory without writing to disk.

JSOUTFMUT, discovered by GTID in June 2024, was a VBScript dropper that received its mutations from an external LLM.

Its Thinking Robot module queried the Gemini Flash API for new obfuscation techniques, generating a fresh variant every hour and copying itself to removable drives and network shares.

Security teams must assume attackers now move at machine speed. Mean Time to Contain is more critical than Mean Time to Detect, since reactive strategies fail when attack speed outpaces patching.

Living-off-the-land (LotL) surveillance should shift to the network layer, as classic IOCs quickly become outdated. Anomaly-based signals such as unexpected SMB admin share usage and high-entropy DNS queries offer more durable detection.
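The high-entropy DNS signal described above can be approximated with a simple Shannon-entropy score over the leftmost query label. This is a minimal sketch: the 3.5-bit threshold and 8-character minimum are illustrative assumptions that would need tuning against an organization’s own baseline traffic, not values from the article.

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits per character of a DNS label; encoded or tunneled
    payloads tend to score far higher than human-chosen names."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Illustrative cutoffs -- tune against baseline traffic.
ENTROPY_THRESHOLD = 3.5
MIN_LABEL_LENGTH = 8

def flag_suspicious(qname: str) -> bool:
    # Score only the leftmost label, where encoded data typically sits.
    label = qname.split(".")[0]
    return len(label) >= MIN_LABEL_LENGTH and shannon_entropy(label) > ENTROPY_THRESHOLD
```

In practice this score would be one feature among several (query rate, label length distribution, NXDOMAIN ratio) rather than a standalone verdict.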

AI API traffic should be added to monitoring lists. YARA-based API-key scanning and inspection of binaries for embedded JSON prompt structures are among the most effective ways to catch LLM-embedded malware.
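The binary-inspection idea can be sketched in a few lines of Python rather than YARA. The key-prefix patterns (the `sk-` and `hf_` prefixes used by OpenAI and Hugging Face tokens) and the JSON `"role"` pattern are illustrative assumptions; a production rule set would cover more providers and validate matches further.

```python
import re

# Illustrative token patterns -- extend for the providers you care about.
API_KEY_PATTERNS = [
    rb"sk-[A-Za-z0-9]{20,}",   # OpenAI-style secret keys
    rb"hf_[A-Za-z0-9]{20,}",   # Hugging Face access tokens
]

# Embedded chat-style prompt structure, e.g. {"role": "system", ...}
PROMPT_PATTERN = rb"\"role\"\s*:\s*\"(system|user)\""

def scan_binary(data: bytes) -> list[str]:
    """Return a list of indicator names found in the raw file bytes."""
    hits = []
    for pat in API_KEY_PATTERNS:
        if re.search(pat, data):
            hits.append("api-key:" + pat.decode())
    if re.search(PROMPT_PATTERN, data):
        hits.append("llm-prompt-json")
    return hits
```

Because LLM-embedded malware must carry both a credential and a prompt to function, finding either artifact in an unexpected binary is a strong triage signal.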

Placing artificial signals inside deception environments can also trigger false positives in attacker AI models.
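One way to place such artificial signals is a honeytoken planter: a decoy credential file left where an autonomous agent scraping the filesystem would ingest it. The `sk-deceptive-` prefix and `.env` filename below are hypothetical choices for illustration; any later use of the planted key, monitored server-side, indicates automated credential theft.

```python
import secrets
from pathlib import Path

def plant_honeytoken(directory: Path) -> str:
    """Drop a decoy .env file containing a fake-but-plausible API key.

    The returned token should be registered with an alerting pipeline
    so that any attempt to use it triggers an immediate detection.
    """
    token = "sk-deceptive-" + secrets.token_hex(16)  # hypothetical format
    decoy = directory / ".env"
    decoy.write_text(f"OPENAI_API_KEY={token}\nDB_PASSWORD=changeme\n")
    return token
```

Seeding several such decoys across file shares and repositories raises the cost of indiscriminate, machine-speed collection: the faster the attacker's AI hoovers up data, the sooner it trips a tripwire.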

Ultimately, it is not the speed of patching but the speed of containing the breach that will decide the outcome.


Source: cybersecuritynews.com
