cognitive cybersecurity intelligence

News and Analysis

Using LLMs to Obfuscate Malicious JavaScript

Researchers have developed an adversarial machine learning technique that uses large language models to rewrite malicious JavaScript code, with the ultimate goal of improving detection. Unlike off-the-shelf obfuscation tools, which produce telltale changes that detectors can flag, the LLM-driven rewrites look natural and are harder to spot. Retraining deep learning-based detectors on these adversarially generated samples improved their detection rate by roughly 10%. The researchers note that while evolving malware of this kind is harder to detect, defenders can use the same rewriting tactics to harden their machine learning models.
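The core loop the article describes, rewrite a flagged sample until it evades the detector, then feed the evasive variants back into training, can be sketched as follows. This is a minimal toy illustration, not the researchers' actual system: the `rewrite` function stands in for an LLM prompt that applies semantics-preserving transformations (variable renaming, string splitting, dead-code insertion), and the `detect` function stands in for a real classifier; all names here are hypothetical.

```python
import random

def rewrite(js: str, rng: random.Random) -> str:
    # Stand-in for the LLM rewriting step: a real pipeline would prompt
    # a model for a natural-looking transformation. Here we only rename
    # a known identifier to a random fresh name.
    new_name = f"v{rng.randint(1000, 9999)}"
    return js.replace("evilPayload", new_name)

def detect(js: str, signatures: set) -> bool:
    # Stand-in for a detector: flags code containing known-bad tokens.
    return any(sig in js for sig in signatures)

def generate_adversarial(js: str, signatures: set,
                         rng: random.Random, max_rounds: int = 5) -> str:
    # Rewrite repeatedly until the detector no longer flags the sample
    # (or we give up). The evasive variants would then be added to the
    # detector's training set.
    for _ in range(max_rounds):
        if not detect(js, signatures):
            break
        js = rewrite(js, rng)
    return js

rng = random.Random(0)
sigs = {"evilPayload"}
sample = "var evilPayload = atob('...'); eval(evilPayload);"
variant = generate_adversarial(sample, sigs, rng)
```

After the loop, `detect(variant, sigs)` is false even though the code's behavior is unchanged, which is exactly the kind of sample the article says was used to retrain and strengthen the detectors.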

Source: unit42.paloaltonetworks.com

