cognitive cybersecurity intelligence

Not even fairy tales are safe – researchers weaponise bedtime stories to jailbreak AI chatbots and create malware

Security researchers have demonstrated a technique for jailbreaking AI chatbots, including ChatGPT-4o and Microsoft Copilot, into producing a ‘Chrome infostealer’ malware, even without any prior coding knowledge. Their ‘narrative engineering’ approach, dubbed ‘Immersive World,’ constructs a fictitious scenario that normalises otherwise restricted operations, leading the model to treat the malicious request as legitimate storytelling. By lowering the skill barrier for newcomers, the technique levels the playing field in cybercrime and could increase the overall volume of threats.

Source: www.techradar.com
