As AI innovation rapidly progresses, experts are urging organisations to establish security standards and protocols by 2024 to mitigate risks. Large language models (LLMs) such as OpenAI’s GPT-4 and GPT-5 offer significant productivity gains but also pose security risks, including data leaks, misuse for malicious activity, and misleading outputs. Experts suggest combating these risks with AI-based security solutions, a thorough understanding of AI capabilities, implementation of security policies before deployment, and a collective global effort to establish security standards for AI technology.

“PupkinStealer”: A New .NET-Based Malware Steals Browser Credentials & Exfiltrates Data via Telegram
Discovered in April 2025, PupkinStealer is a C#-based malware that steals sensitive data, including browser credentials and desktop files, and uses Telegram for stealthy data exfiltration.