Google Researchers’ Attack Prompts ChatGPT To Reveal Its Training Data

Researchers, mainly from Google DeepMind, have found that OpenAI's large language models, including the closed, production model behind ChatGPT, memorise significant amounts of personally identifiable information (PII). Using a simple prompting attack — asking the model to repeat a single word indefinitely — the team caused it to emit verbatim data memorised from its training set, including email addresses and phone numbers. Approximately 16.9% of the generations they tested contained memorised PII, and the researchers estimate that, with a sufficient query budget, the method could extract gigabytes of training data.
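The 16.9% figure rests on a verification step: a generation counts as memorised only if a long run of tokens from it appears verbatim in an independently collected web corpus. A toy sketch of that substring check follows (the function name and the whitespace "tokeniser" are our own illustration; the actual study matched much longer, properly tokenised sequences against a multi-terabyte auxiliary dataset):

```python
def has_verbatim_overlap(generation: str, corpus: str, k: int = 10) -> bool:
    """Return True if any k-token window of `generation` appears verbatim in `corpus`.

    Whitespace splitting stands in for a real tokenizer; the published
    methodology used far longer token runs and a web-scale reference corpus.
    """
    tokens = generation.split()
    for i in range(len(tokens) - k + 1):
        window = " ".join(tokens[i:i + k])
        if window in corpus:
            return True  # verbatim run found: treat the generation as memorised
    return False
```

For example, against a reference corpus containing "the quick brown fox jumps over the lazy dog", a generation containing the run "quick brown fox" would be flagged at `k=3`, while unrelated text would not.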

Source: yro.slashdot.org

