cognitive cybersecurity intelligence

News and Analysis

Search

ChatGPT jailbreak method uses virtual time travel to breach forbidden topics

AI researcher David Kuszmar discovered a vulnerability in OpenAI’s ChatGPT-4o model called “Time Bandit”. This vulnerability allows users to trick the model into discussing forbidden topics like malware creation and weapons, by convincing it that it’s communicating with someone from the past. OpenAI acknowledged the issue, stating they are working to make their models safer and more robust against such exploits.

Source: www.scmagazine.com –

Subscribe to newsletter

Subscribe to HEAL Security Dispatch for the latest healthcare cybersecurity news and analysis.

More Posts