Cybersecurity researchers have found that large language models (LLMs) can generate new variants of malicious JavaScript code that evade detection. Although LLMs struggle to create new malware from scratch, they can readily rewrite or obfuscate existing malware, producing variants that are harder to detect. While LLM providers have tightened security measures to prevent misuse, bad actors have still exploited LLMs to craft convincing phishing emails and novel malware. At scale, such rewriting can degrade malware classifiers by shifting malicious samples until they are scored as benign.
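To see why this defeats signature-based detection, here is a minimal Python sketch (entirely hypothetical, operating on a benign snippet) of two behaviour-preserving rewrites of the kind described: identifier renaming and string-literal splitting. The function names and the toy signature are illustrative assumptions, not the researchers' tooling.

```python
import re

def rewrite_js(source: str, renames: dict[str, str]) -> str:
    """Apply two behaviour-preserving rewrites on a benign snippet:
    identifier renaming and string-literal splitting. A real rewriter
    would work on an AST; regexes here can clobber words inside
    strings, so this is illustration only."""
    # Rename whole-word identifiers only.
    for old, new in renames.items():
        source = re.sub(rf"\b{re.escape(old)}\b", new, source)

    # Split each double-quoted literal in half: "abcd" -> "ab"+"cd".
    def split_literal(m: re.Match) -> str:
        s = m.group(1)
        if len(s) < 2:
            return m.group(0)
        mid = len(s) // 2
        return f'"{s[:mid]}"+"{s[mid:]}"'

    return re.sub(r'"([^"\\]*)"', split_literal, source)

benign = 'function greet(name) { return "hello " + name; }'
obfuscated = rewrite_js(benign, {"greet": "a1", "name": "a2"})

# A naive substring signature that matched the original no longer fires,
# even though the rewritten code computes the same result.
signature = '"hello "'
print(signature in benign)      # True
print(signature in obfuscated)  # False
```

The rewritten snippet (`function a1(a2) { return "hel"+"lo " + a2; }`) is semantically identical to the original, which is exactly why classifiers trained on surface features can be pushed towards a benign verdict.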
Clop Ransomware Is Now Blackmailing 66 Cleo Data-Theft Victims, Reports DataBreaches.Net
Right, let’s sit down for a chat about the state of play in cybersecurity. You know that old saying about ‘an Englishman’s home is his