A study has exposed severe security vulnerabilities in commercial-grade Large Reasoning Models (LRMs) such as OpenAI’s o1/o3 series and Google’s Gemini 2.0. The researchers introduced the Malicious-Educator benchmark and the Hijacking Chain-of-Thought (H-CoT) attack, which sharply reduces the models’ refusal rates on harmful queries. They advocate hiding the models’ safety reasoning from end users and applying adversarial training to strengthen defenses.
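To make the refusal-rate comparison concrete, below is a minimal, hypothetical evaluation sketch in Python. It is not the paper's code: the `query_model` stub, the placeholder prompts, and the injected prefix are all assumptions standing in for a real API client, the Malicious-Educator queries, and the H-CoT payloads, none of which are reproduced here.

```python
# Hypothetical sketch (not the paper's code): compare how often a model refuses
# a set of prompts, with and without an injected chain-of-thought prefix.

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i won't")

def query_model(prompt: str) -> str:
    """Stub standing in for a real API call (e.g. an o1/o3 or Gemini client)."""
    return "I can't help with that."  # replace with an actual model response

def is_refusal(reply: str) -> bool:
    """Crude keyword check; real evaluations usually rely on a judge model."""
    return any(marker in reply.lower() for marker in REFUSAL_MARKERS)

def refusal_rate(prompts: list[str], prefix: str = "") -> float:
    """Fraction of prompts the model refuses, optionally with an injected prefix."""
    refused = sum(is_refusal(query_model(prefix + p)) for p in prompts)
    return refused / len(prompts)

benchmark_prompts = ["<benchmark query 1>", "<benchmark query 2>"]  # placeholders
injected_prefix = "<fabricated reasoning snippet>\n"                # placeholder

baseline = refusal_rate(benchmark_prompts)
attacked = refusal_rate(benchmark_prompts, prefix=injected_prefix)
print(f"refusal rate: baseline={baseline:.0%}, with injected CoT={attacked:.0%}")
```

In practice such harnesses swap the keyword check for a judge model and run each prompt several times, since single-shot refusal behavior is noisy.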

GitVenom Campaign Abusing Thousands of GitHub Repositories To Infect Users
The “GitVenom” malware campaign exploits GitHub’s ecosystem, distributing malicious code through fraudulent repositories that target developers. Active since at least 2023, it relies on social engineering to disguise malware as legitimate open-source projects.