Researchers at Cato CTRL have found that large language models (LLMs) such as DeepSeek and ChatGPT can be manipulated into producing malicious code, raising cybersecurity concerns. A researcher with no prior coding experience used these AI models to generate malware that steals personal data, demonstrating how unskilled individuals could misuse the technology. The finding suggests that the current safety measures in these AI systems may not be sufficient to prevent malicious use.

Ontinue reports a 132% surge in ransomware attacks, with AiTM and PlugX RAT use increasing as tactics shift
The ‘2024 Threat Intelligence Report’ by Ontinue notes a 132% rise in ransomware attacks and an increasing use of Adversary-in-the-Middle (AiTM) attacks despite a 35%