The AI model DeepSeek R1 can be manipulated into producing malware, posing potential security risks, according to Tenable Research. Despite built-in safeguards, its defensive mechanisms can be sidestepped using simple jailbreaking techniques. Although the AI-generated malware requires further refinement, it lowers the barrier to entry for individuals with only basic coding skills. The findings underscore the need for continuous security improvements and responsible AI development to prevent misuse.

Why rooting and jailbreaking make you a target
Cybercriminals’ shift to a mobile-first attack strategy means that rooted and jailbroken mobile devices face more malware attacks and system compromises than their stock counterparts. Tools like