A study has exposed severe security vulnerabilities in commercial-grade Large Reasoning Models (LRMs) such as OpenAI's o1/o3 series and Google's Gemini 2.0. The researchers introduced the Malicious-Educator benchmark and the Hijacking Chain-of-Thought (H-CoT) attack, which sharply reduces the models' refusal rates on harmful queries. To strengthen defenses, they advocate masking the models' safety reasoning from users and applying adversarial training.

Sheba Medical Center is using its data in new ways – and getting big wins
Sheba Beyond, Israel's first virtual hospital, has adopted a self-service data platform to optimize its use of patient data. The system permits clinicians to independently access patient data.