cognitive cybersecurity intelligence

News and Analysis

Search

New hack uses prompt injection to corrupt Gemini’s long-term memory

Developers have trained Google’s AI, Gemini, to resist manipulation of user’s long-term memories without clear user instruction. However, vulnerabilities were discovered where conditions could be added to instructions that trick Gemini into thinking it has explicit user permission, enabling the change of memory data. Google responded to these findings as low risk and impact, but the vulnerability, if ignored, could potentially allow the insertion of misinformation into user memory banks.

Source: arstechnica.com –

Subscribe to newsletter

Subscribe to HEAL Security Dispatch for the latest healthcare cybersecurity news and analysis.

More Posts